Let's say I have two regression models, one with three variables and one with four. Each spits out an adjusted $R^2$, which I can compare directly.
Obviously, the model with the higher adjusted $R^2$ is the better fit, but is there a way to test the difference between the two adjusted $R^2$ values and get a p-value?
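(For concreteness, by adjusted $R^2$ I mean the usual $\bar{R}^2 = 1 - (1 - R^2)\,\frac{n-1}{n-p-1}$, where $n$ is the number of observations and $p$ the number of predictors.)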
I know you can do a Chow test to test the difference between slopes, but this is about explained variance, so I don't think that's what I'm looking for.
Edit: One model does not simply contain a subset of the other model's variables; otherwise I would probably use stepwise regression.
In model 1, I have four variables: W, X, Y, and Z.
In model 2, I have three variables: W, X, and (Y+Z)/2.
The idea is that if Y and Z are conceptually similar, predictions may improve by averaging these two variables into a single composite before entering it into the model. A minimal sketch of the setup is below.
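Here is a rough illustration of what I mean in Python with statsmodels. The DataFrame, column names, and data-generating process below are placeholders I made up for the example, not my actual data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data, purely for illustration.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame(rng.normal(size=(n, 4)), columns=["W", "X", "Y", "Z"])
df["outcome"] = df["W"] + df["X"] + 0.5 * (df["Y"] + df["Z"]) + rng.normal(size=n)

# Model 1: Y and Z entered as separate predictors.
m1 = smf.ols("outcome ~ W + X + Y + Z", data=df).fit()

# Model 2: Y and Z averaged into one composite predictor.
m2 = smf.ols("outcome ~ W + X + I((Y + Z) / 2)", data=df).fit()

print(m1.rsquared_adj, m2.rsquared_adj)
```

Comparing `m1.rsquared_adj` and `m2.rsquared_adj` gives me the two numbers, but not a p-value for the difference, which is what I'm asking about.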