
I have a few doubts about the likelihood ratio test. I understand that we compute a p-value based on the ratio of likelihoods between two models.

I am wondering: is the likelihood ratio test an inherently non-parametric test (i.e., one making zero assumptions about the underlying distribution)?

How does it compare to the Mann-Whitney U test?

amoeba
  • Writing down the likelihood for a model is a consequence of assuming an underlying distribution. – Greenparker Mar 27 '16 at 14:40
  • I see, maybe my definition of non-parametric is mixed up (it works on all distributions; rather, it does not assume a distribution). Does this test work with any distribution then? –  Mar 27 '16 at 14:42
  • Yes, the method does work on any distribution (which does not mean it is a non-parametric method), but only for nested models. – Greenparker Mar 27 '16 at 14:53
  • @Greenparker: [Wilks' Theorem](https://en.wikipedia.org/wiki/Likelihood-ratio_test#Distribution:_Wilks.27s_theorem), which gives an asymptotic distribution for the likelihood-ratio test statistic, applies only to nested models: that isn't a restriction on the use of likelihood-ratio tests to compare non-nested models. – Scortchi - Reinstate Monica Mar 28 '16 at 14:38
  • It is a common misconception that "nonparametric" means "zero assumptions about the underlying distribution." That is not true; in fact, most non-parametric tests in use make some assumptions about the distribution, such as that it is continuous, unimodal, homoscedastic, etc. – whuber Mar 28 '16 at 17:21

1 Answer


Likelihood is a function of the parameters given the "fixed" data:

$$ L(\theta|\text{data}) = f(\text{data}|\theta) $$
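
For concreteness, here is a minimal sketch in Python of a likelihood treated as a function of the parameter with the data held fixed. The exponential model, the simulated data, and the use of `numpy`/`scipy` are purely illustrative assumptions on my part:

```python
# Minimal sketch: an assumed exponential model, with simulated "fixed" data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=50)   # the "fixed" data

def log_likelihood(rate, data):
    # log L(rate | data) = sum_i log f(data_i | rate) under Exp(rate)
    return stats.expon.logpdf(data, scale=1.0 / rate).sum()

# The same data with different parameter values gives different likelihoods.
for rate in (0.25, 0.5, 1.0):
    print(rate, log_likelihood(rate, data))
```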

The likelihood ratio is a ratio of two likelihoods, so to compute it you need two likelihoods, each of them being a parametric function. Yes, it can be used for likelihoods of nested models obtained from any distribution.

You ask if it assumes a distribution for the test statistic -- yes, to obtain a $p$-value you use the $\chi^2$ distribution (asymptotically, for twice the log likelihood ratio). So there are parameters, distributions, and assumptions all around.
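
Continuing in the same spirit, a minimal sketch of the whole test for nested exponential models, assuming a point null that fixes the rate; the data, the null rate, and the `scipy.stats.chi2` usage are again only illustrative:

```python
# Minimal sketch: likelihood-ratio test of H0: rate = rate0 (restricted)
# against a full model whose rate is estimated by maximum likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=200)  # simulated data, true rate = 0.5

def exp_loglik(rate, data):
    return stats.expon.logpdf(data, scale=1.0 / rate).sum()

rate0 = 0.4                   # H0: rate = rate0 (restricted model)
rate_hat = 1.0 / data.mean()  # MLE under the full model

# Test statistic: twice the log of the likelihood ratio.
lr_stat = 2.0 * (exp_loglik(rate_hat, data) - exp_loglik(rate0, data))

# Under H0 the statistic is asymptotically chi-squared with df equal to the
# difference in the number of free parameters (here 1 - 0 = 1).
p_value = stats.chi2.sf(lr_stat, df=1)
print(lr_stat, p_value)
```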

Tim
  • (+1) To obtain p-values you might also use the exact distribution of the likelihood-ratio test statistic, or another asymptotic distribution when the assumptions required for twice the log likelihood ratio to have a chi-square distribution asymptotically are violated. – Scortchi - Reinstate Monica Mar 28 '16 at 14:41
  • I wonder about your conclusion that "each of [the likelihoods must] be a parametric function," because that seems unnecessary--and it might even obscure the very nature of this question. Consider, for instance, a nested model for bivariate data $(x_i,Y_i)$ in which (1) the random variables $Y_i-f(x_i)$, with non-random $x_i\gt 0$ and unknown function $f$, are standard Normal and (2) $(Y_i-f(x_i))/\lambda$ are iid standard Normal. Although both models are nonparametric, (1) is nested within (2) and leads to a LR test for the extra parameter $\lambda\gt 0$. – whuber Mar 28 '16 at 17:31
  • @whuber I am afraid that your comment is not clear to me. Likelihood functions are parametric by definition; your example involves a parameter (?). – Tim Mar 28 '16 at 18:13
  • Yes: but *both* models are nonparametric! – whuber Mar 28 '16 at 18:25