
Imagine that predictor A has a positive relationship with the dependent variable and that it also has a high correlation with predictor B.

When predictors A and B are entered into a regression model together, suppose that predictor A now has a negative relationship with the dependent variable.

This seems like a symptom of multicollinearity. But could it ever be the case that, after controlling for predictor B, the variance in the dependent variable uniquely explained by predictor A genuinely has a negative relationship with that predictor? Can a predictor ever genuinely switch signs, as in the example given? Are there ways to tell whether a sign flip is genuine or merely a symptom of multicollinearity?

(I imagine people will take issue with my use of "genuine". What I mean is a sign flip indicative of a genuine negative relationship between the uniquely explained variance and predictor A, and not a product of multicollinearity.)
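
To make the question concrete, here is a minimal simulation sketch (entirely hypothetical: the variable names, coefficients, and noise levels are assumptions of mine) in which the sign flip is genuine: predictor A's true partial effect on the dependent variable is negative, but its marginal association is positive because A is strongly correlated with B.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: A's true partial effect on y is -1,
# but A is built from B, whose effect on y is strongly positive (+2).
B = rng.normal(size=n)
A = B + 0.5 * rng.normal(size=n)           # corr(A, B) ≈ 0.89
y = 2.0 * B - 1.0 * A + rng.normal(size=n)

def coefs(X, y):
    """OLS coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("y ~ A     :", coefs(A[:, None], y)[1])                # ≈ +0.6 (marginal slope)
print("y ~ A + B :", coefs(np.column_stack([A, B]), y)[1:])  # ≈ [-1.0, +2.0]

# Variance inflation factor for A: 1 / (1 - R^2) from regressing A on B.
r2 = np.corrcoef(A, B)[0, 1] ** 2
print("VIF(A)    :", 1.0 / (1.0 - r2))                       # ≈ 5
```

In this construction the VIF for A is only about 5, so the coefficient flip reflects the data-generating process rather than an instability caused by collinearity.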

Dave
  • Maybe checking for multicollinearity first (e.g., using [variance inflation factor](https://en.wikipedia.org/wiki/Variance_inflation_factor)) would be helpful. After that you will be more confident in your interpretation. – T.E.G. Aug 09 '17 at 02:35
  • The answer is no. When two regressors are orthogonal, the estimated coefficients in a least squares regression remain the same when one regressor is dropped. For a worked example, as well as an analysis of this situation that focuses on significance, please see https://stats.stackexchange.com/a/28493/919. – whuber Aug 09 '17 at 15:12
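
To illustrate whuber's comment, here is a minimal sketch with hypothetical data (the orthogonalization and coefficient values are my own construction, not taken from the linked answer): when the two regressors are orthogonal, dropping one leaves the other's estimated coefficient unchanged, so a sign change of the kind described above requires correlation between A and B.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Construct two centered, exactly orthogonal regressors.
A = rng.normal(size=n); A -= A.mean()
B = rng.normal(size=n); B -= B.mean()
B -= A * (A @ B) / (A @ A)                # project out A, so A·B = 0

y = 1.0 * A + 2.0 * B + rng.normal(size=n)

def coef_on_A(X, y):
    """OLS coefficient on the first regressor, with an intercept included."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print("y ~ A     :", coef_on_A(A[:, None], y))               # ≈ 1.0
print("y ~ A + B :", coef_on_A(np.column_stack([A, B]), y))  # ≈ 1.0, same as above
```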

0 Answers