
Does it make sense to compare actual vs. forecast using correlation analysis / see how close $R^2$ is to 1?

Does it make sense to use a paired t-test to test actual vs. forecast to get accuracy of forecast?

I know RMSE is widely used, but my understanding is that RMSE is only useful for comparing several models. What if I have just one model/forecast version, and I need to show a wider audience that it is a decent/good forecast?

Correlation analysis and t-tests are used by some people where I work. I am trying to argue that this isn't right, but I don't have a good explanation of why correlation and t-tests cannot be used for forecast accuracy. Are they violating any assumptions?

Nick Stauner
user45147
  • Correlation, although it is commonly used for such purposes, is subtler than most people seem to realize, because--even when all relevant statistical assumptions seem to hold--its interpretation depends on sample size, true amount of correlation, actual ranges of the variables, nature of their variability, linearity of their relationship, and much more that tends not to be of direct interest or is overlooked entirely. See [Is R^2 Useful or Dangerous?](http://stats.stackexchange.com/questions/13314), for instance. – whuber May 07 '14 at 20:40
  • Correlation isn't necessarily much use for accuracy, because things can be correlated without being close. Consider if my forecast of y is y/100 -- perfectly correlated, but a terrible forecast. On the other hand t-tests really answer the wrong question. Why can't RMSE be used with one model? Its values are perfectly easy to interpret, being on the same scale as the data -- it's no trickier to comprehend than a standard deviation. – Glen_b May 07 '14 at 23:09
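Glen_b's point can be checked numerically. The sketch below uses made-up data (the variable names and the `/100` forecast are purely illustrative): a forecast that is an exact linear rescaling of the actuals has correlation 1, yet its RMSE, which is on the same scale as the data, reveals it to be far off.

```python
# Minimal sketch, with hypothetical data, of why correlation != accuracy:
# forecast = actual / 100 is perfectly correlated with the actuals
# but is a terrible forecast, which RMSE makes obvious.
import numpy as np

rng = np.random.default_rng(0)
actual = rng.uniform(100.0, 200.0, size=50)   # hypothetical actual values
forecast = actual / 100.0                     # "forecast" = actual / 100

r = np.corrcoef(actual, forecast)[0, 1]       # correlation is exactly 1
rmse = np.sqrt(np.mean((actual - forecast) ** 2))

print(f"correlation = {r:.3f}")   # perfect correlation
print(f"RMSE        = {rmse:.1f}")  # error on the scale of the data itself
```

Because RMSE is in the units of the data (like a standard deviation), it is interpretable for a single model; correlation here gives no hint that the forecast is off by roughly the full size of the actual values.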

0 Answers