hi everyone
I have four regression models, and the values of the constant are something like 4.55***, 1.40, and 4.49**, with the same IVs and DV entered in each model.
I know the 1.40 is not significant, but what does that say about the constant (error) term?
Also, with 4.55*** and 4.49**, what does a bigger or smaller value mean?
thanks satchi
Is the regression model linear or non-linear? How many independent and dependent variables, etc.? I guess the best regression fit for an unknown parameter would be the one that minimizes the error between measured and predicted values of the dependent variable.
======= Date Modified 20 Apr 2010 16:29:38 =======
Ok, the 'constant' just refers to the Y intercept - i.e. where the regression line crosses the vertical axis of the graph.
So that value represents the predicted value of the outcome variable when all the predictors are 0.
I don't think I would worry about this in my own work - it's part of the regression equation, but the really interesting stuff is the significance of the other predictors (which is shown in the rows below the constant in the same table).
I hope that makes sense Satchi! 8-)
hi sneaks!
ok but then how do we test whether the R-squared change from one model to another is significant?
say we want to know if the change from model 1 to model 2 to model 3 is significant or not.
like when we run the regression, we run model 1, then model 2... and model 3.
but how do I test between the models :-)
thanks
satchi
I think what you are referring to (correct me if I'm wrong!) is whether the change is significant when adding variables in blocks - i.e. a hierarchical regression.
So you want to know whether the change in the model is significant from block 1 to block 2, and then from block 2 to block 3??
Is that right??
If so, then when you run the regression you can request 'change statistics' - you then get the R squared change and an F value etc. that tell you this.
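For anyone curious what those 'change statistics' are actually computing, here is a hedged sketch of the nested-model F test behind the R-squared change (numpy/scipy; the function names and simulated data are mine, not from SPSS or the thread):

```python
# Illustrative sketch: F test for the R-squared increase when a block of
# predictors is added to a nested (smaller) model.
import numpy as np
from scipy import stats

def r_squared(y, X):
    """Fit OLS by least squares and return R-squared."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

def f_change(y, X_small, X_full):
    """F-change test comparing nested models (block 1 vs block 2)."""
    n = len(y)
    k_added = X_full.shape[1] - X_small.shape[1]   # predictors added in block 2
    df2 = n - X_full.shape[1]                      # residual df of larger model
    r2_small, r2_full = r_squared(y, X_small), r_squared(y, X_full)
    F = ((r2_full - r2_small) / k_added) / ((1 - r2_full) / df2)
    p = stats.f.sf(F, k_added, df2)
    return r2_full - r2_small, F, p

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.5 * x1 + 0.8 * x2 + rng.normal(size=n)

ones = np.ones((n, 1))
X1 = np.column_stack([ones, x1])       # block 1: constant + x1
X2 = np.column_stack([ones, x1, x2])   # block 2: adds x2

r2_change, F, p = f_change(y, X1, X2)
```

A significant p here means the extra block of variables explains a real increase in variance, which is exactly what the R squared change and F change columns in the SPSS output report.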
ooh, I don't think it matters, I could be wrong! You would report it, because people need it to build the full regression equation, but it is the R squared, adjusted R squared and the F value (and significance) that give the overall model stats. Then for the individual variables it is the B, beta, t values and significance you need to look at. So I would tend to ignore it!
Anyone know different??