
I want to test for omitted variables, and I perform three tests: the Wald, Likelihood Ratio ($LR$), and Lagrange Multiplier ($LM$) tests. My model is a linear model with a polynomial term ($x^2$).

I found a source saying that for a linear model, $Wald \geq LR \geq LM$, but in my results $LM$ is slightly larger than $LR$. Does this indicate a misspecified model?
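As a concrete check, the inequality can be reproduced on simulated data. The sketch below (illustrative only; variable names and the data-generating process are my own, and the statistics are computed from the standard concentrated-likelihood OLS formulas for zero restrictions) tests the restriction that the $x^2$ coefficient is zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
# Simulated data: the true model includes a quadratic term
y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(size=n)

def ssr(X, y):
    """Sum of squared residuals from OLS of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

X_r = np.column_stack([np.ones(n), x])        # restricted: no x^2 term
X_u = np.column_stack([np.ones(n), x, x**2])  # unrestricted

ssr_r, ssr_u = ssr(X_r, y), ssr(X_u, y)

# Concentrated-likelihood forms of the three statistics
wald = n * (ssr_r - ssr_u) / ssr_u
lr = n * np.log(ssr_r / ssr_u)
lm = n * (ssr_r - ssr_u) / ssr_r

print(wald, lr, lm)
assert wald >= lr >= lm  # the finite-sample ranking
```

If all three statistics are computed this way from the same two OLS fits, the ranking $Wald \geq LR \geq LM$ holds in every sample; a violation usually means the statistics were computed under different variance estimates or defaults.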

kjetil b halvorsen
  • The ranking you refer to is an algebraic one that does not depend on any assumptions about, e.g., the specification of the model. Also, a quadratic term is just some other regressor, so I also do not see how that could be responsible. I would therefore suspect some difference in how the statistics are computed relative to the default (unless some other reason such as @Dave's question applies). – Christoph Hanck Dec 11 '19 at 14:42
  • @ChristophHanck The three are equivalent (at least asymptotically) for a linear regression, right? – Dave Dec 11 '19 at 14:47
  • @Dave, asymptotically, they indeed are (and that not only for linear models). But in finite samples (at least when building the LR from a normal likelihood) the ranking is as described by the OP...at least, I know this to be the case for linear models estimated by OLS when testing zero restrictions on a subvector of the coefficients - but maybe the result generalizes somewhat. – Christoph Hanck Dec 11 '19 at 15:41
  • @ChristophHanck Wait...they're asymptotically equivalent for GLMs? I thought they were different in general but happened to coincide for a linear model. – Dave Dec 11 '19 at 18:50
  • 1
    @Dave - in finite samples for a linear model they are only equal if the null hypothesis is exactly satisfied by the sample. They are indeed asymptotically equivalent for GLMs. – jbowman Dec 11 '19 at 19:07
  • @jbowman So when we do an F-test to check if a regression with an additional parameter is better than one without that parameter, which test is that? – Dave Dec 11 '19 at 19:31
  • @Dave - It's a Wald test (so is the standard t-test of a single parameter). – jbowman Dec 11 '19 at 19:32
  • @jbowman Is there a reason that became the popular way to do it instead of likelihood ratio testing? (user11924386 I know I'm taking over your question, but this is all good stuff to read.) – Dave Dec 11 '19 at 19:49
  • 1
    @Dave - LR testing requires fitting two models, one under the null and one under the alternative, but Wald and score (aka Rao) tests only require fitting one model. In the old days, computer runtime was a scarce resource, hence the move to Wald and score tests. Insofar as the difference between Wald and score tests are concerned, if a score test rejects the null, you don't have parameter estimates under the alternative, so you have to estimate the model again under the alternative to get them. The Wald test gives you both the test statistic and estimates under the alternative in one step. – jbowman Dec 11 '19 at 19:57
  • @Dave, here is some related discussion: https://stats.stackexchange.com/questions/276192/exact-equivalence-of-lr-and-wald-in-linear-regression-under-known-error-variance – Christoph Hanck Dec 12 '19 at 05:31
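The ranking discussed above is purely algebraic for OLS with a normal likelihood and zero restrictions; the following is a sketch of the standard textbook derivation. Writing $r = SSR_r / SSR_u \geq 1$ for the ratio of restricted to unrestricted sums of squared residuals, the three statistics reduce to

$$W = n(r - 1), \qquad LR = n \ln r, \qquad LM = n\left(1 - \frac{1}{r}\right),$$

and since $1 - \frac{1}{r} \leq \ln r \leq r - 1$ for all $r \geq 1$, it follows that $LM \leq LR \leq W$ in every finite sample, with equality only when $r = 1$, i.e., when the restriction fits the sample exactly.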

0 Answers