
I have two statistical models. Model 1 uses a GLM approach, while Model 2 uses a time series approach for fitting. I want to compare these two models.

Model 1 (i.e. the GLM) has better out-of-sample performance, while Model 2 has a better BIC. So based on out-of-sample performance I should pick Model 1, and based on BIC I should pick Model 2 as the preferred model.

I should add that in this context, and for the question I am trying to answer, both BIC and out-of-sample performance are important. How should I choose the best model in this case? Should I consider other criteria? Please let me know if you know any good references dealing with similar cases.
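To make the trade-off concrete, here is a minimal numerical sketch of how BIC is computed for two fitted models. Everything below is made up for illustration (the residuals, parameter counts, and the assumption that both models have Gaussian errors, which puts their likelihoods on the same scale); it is not the actual models in the question.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """BIC = k * ln(n) - 2 * ln(L_hat); lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

def gaussian_loglik(residuals):
    """Maximized Gaussian log-likelihood given residuals,
    with sigma^2 set to the mean squared residual."""
    n = len(residuals)
    sigma2 = sum(r * r for r in residuals) / n
    return -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)

# Hypothetical residuals from two models fitted to the same data
resid_glm = [0.5, -0.3, 0.2, -0.4, 0.1, 0.3, -0.2, 0.4]
resid_ts  = [0.6, -0.2, 0.1, -0.5, 0.2, 0.2, -0.3, 0.3]

# The GLM is assumed here to use one more parameter than the TS model
bic_glm = bic(gaussian_loglik(resid_glm), n_params=3, n_obs=len(resid_glm))
bic_ts  = bic(gaussian_loglik(resid_ts),  n_params=2, n_obs=len(resid_ts))
print(bic_glm, bic_ts)  # compare: the lower BIC would be preferred
```

Note that this only works cleanly when both likelihoods are computed for the same response on the same observations and contain no model-specific additive constants; as the comments below point out, that premise is exactly what is in question when the error distributions differ.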

– Stat
  • (1) I'm not sure you can necessarily compare a GLM and a time series model via BIC. (2) In any case, which one you use depends on what you want to do well at; even when BICs are comparable, BIC is no guarantee of out-of-sample performance. *Why* do you want to optimize on one or the other? – Glen_b Oct 12 '13 at 00:09
  • Do you have any reference showing that we cannot compare a GLM and a time series model using BIC? To me it seems possible, since BIC depends only on the estimated log-likelihood, the number of parameters, and the number of observations. These models can be used to price some products, and you want your price to be unique. So in the end you need to pick one. – Stat Oct 12 '13 at 00:24
  • 1
    Having seen the particular assumptions under which BIC was derived, I don't see how the comparisons implied by that derivation apply to your situation; the [onus would be yours](http://en.wikipedia.org/wiki/Philosophic_burden_of_proof#Holder_of_the_burden) to show that what you're doing makes sense. [In fact I have one reference that says you can't compare *likelihoods* across models with different error distributions, which if it were correct would wipe out a lot more than just BIC. I don't know that the claim of the reference is correct, though.] – Glen_b Oct 12 '13 at 01:02
  • Some related questions: [1](http://stats.stackexchange.com/questions/65455/can-you-test-likelihood-ratio-between-different-models) [2](http://stats.stackexchange.com/questions/43312/can-i-use-a-likelihood-ratio-test-when-the-error-distributions-differ); there are a number of others as well. As you can see from [2], even if you can compare the likelihoods, a problem comes up; this problem would apply to a comparison of BICs (the variance difficulty would translate to a shift issue in the difference of BICs — if one BIC involves an unknown constant not present in the other, what does one do?) – Glen_b Oct 12 '13 at 01:05
  • 2
    @Glen_b I believe that this paper by Vuong (1989), http://www.jstor.org/discover/10.2307/1912557, provides a general framework for non-nested models. – Alecos Papadopoulos Oct 12 '13 at 14:26
  • Thanks Alecos; it's an important reference, and one I had forgotten about since it came out. I'm going to take a close look now. (My recollection was I didn't follow it in 1989, but I've learned quite a few things since then.) -- that may well give a way of doing what is needed here. – Glen_b Oct 12 '13 at 20:48
  • 4
    Possible duplicate of [Can AIC compare across different types of model?](https://stats.stackexchange.com/questions/4997/can-aic-compare-across-different-types-of-model) – kjetil b halvorsen Aug 31 '18 at 16:06
  • This is really a dup with many targets, when closed I will make a list of others so it is completely covered! – kjetil b halvorsen Aug 31 '18 at 16:08

0 Answers