
I am learning MLEs in my inference class, and this is a problem I came across.

Consider two simple linear models.

$y_{1j}=\alpha_1+\beta_1 x_{1j}+\epsilon_{1j}$ and
$y_{2j}=\alpha_2+\beta_2 x_{2j}+\epsilon_{2j}$, $j=1,2,\dots,n>2$, where $\epsilon_{ij}\sim N(0,\sigma^2)$ and
the $\epsilon_{ij}$ are independent and identically distributed.

Obtain the maximum likelihood estimators of $\sigma^2$.

For this, can I use only one of the two equations?
That is, can I find the MLE using only $y_{1j}=\alpha_1+\beta_1 x_{1j}+\epsilon_{1j}$, and would I get the same answer I would have gotten had I used the other equation?

sam_rox
  • Cross posted here: http://www.talkstats.com/showthread.php/56616-Maximum-likelihood-estimator-in-linear-model – Dason Jul 08 '14 at 02:11
  • Talkstats is outside the SE system, @Dason. In some sense it seems inconsiderate that it's posted in multiple places, & this fact isn't mentioned, but it isn't really against any rule that I know of. – gung - Reinstate Monica Jul 08 '14 at 03:10
  • We welcome HW questions, @sam_rox, but we treat them differently. Please tell us what you understand thus far, what you've tried & where you are stuck, & we'll try to provide hints to get you unstuck. To better understand the process, you should read the [wiki](http://stats.stackexchange.com/tags/self-study/info) for the `[self-study]` tag. – gung - Reinstate Monica Jul 08 '14 at 03:12
  • @gung: Even in that post the problem remains unsolved. I know how to solve this if there were a single equation of the form $y_{ij}=\alpha_i+\epsilon_{ij}$: since $y_{ij}$ has a normal distribution, I computed $E(y_{ij})$ and $V(y_{ij})$ and found the likelihood function. But here there are two equations, and although $y_{1j}$ and $y_{2j}$ are normally distributed, how can I find the expected values and variances? Should I consider only one of the given equations and take its expectation and variance? – sam_rox Jul 08 '14 at 03:30
  • @gung: Or, as in the post http://www.talkstats.com/showthread.php/56616-Maximum-likelihood-estimator-in-linear-model, should I take $E(y_{1j})=E(y_{2j})$ and construct a likelihood function as given there? – sam_rox Jul 08 '14 at 03:30
  • sam - thanks for including your thoughts, but they'd probably be better edited into the post – Glen_b Jul 08 '14 at 03:41
  • You need to use all the data. One way is to write the pair of equations as a single multiple regression via dummy variables (see the sketch after this comment thread). A second way you could approach it is simply to write the likelihood in terms of the errors. – Glen_b Jul 08 '14 at 03:47
  • @Glen_b: If I write the likelihood in terms of the errors, then since the errors are independent I could multiply the two distributions for $\epsilon_{1j}$ and $\epsilon_{2j}$ to get the joint p.d.f., right? – sam_rox Jul 08 '14 at 04:15
  • @gung I never said it was formally against any rules - just pointing out that I noticed the cross post. If it got answered there that might influence if people want to answer it here or not. – Dason Jul 08 '14 at 04:26
  • sam: yes, but since those two $j$'s aren't connected, if you're doing it that way I'd write the likelihood for the two sets of $\epsilon$s and multiply those. – Glen_b Jul 08 '14 at 05:28
  • @Glen_b: That means $L(\epsilon_1)\cdot L(\epsilon_2)$. Without having $j$ for both equations, I should have something like $j$ for one equation and $k$ for the other. – clarkson Jul 08 '14 at 05:56
  • Yes, but those likelihoods run over all $j$ and $k$. – Glen_b Jul 09 '14 at 09:27
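
Following up on Glen_b's first suggestion, here is a minimal numerical sketch of the dummy-variable formulation. The variable names and simulated data below are illustrative assumptions, not taken from the thread; the point is only that stacking the two samples into one design matrix with separate intercepts and slopes yields a single residual vector from which the common $\sigma^2$ is estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 30, 1.5

# Simulated data for the two models (illustrative values only)
x1, x2 = rng.uniform(0, 10, n), rng.uniform(0, 10, n)
y1 = 1.0 + 2.0 * x1 + rng.normal(0, sigma, n)
y2 = -0.5 + 0.8 * x2 + rng.normal(0, sigma, n)

# Stack the samples; a dummy d marks sample 2. Columns [1-d, d, x(1-d), xd]
# give each sample its own intercept and slope, while the error variance
# sigma^2 is shared by all 2n stacked observations.
d = np.r_[np.zeros(n), np.ones(n)]
x = np.r_[x1, x2]
y = np.r_[y1, y2]
X = np.column_stack([1 - d, d, x * (1 - d), x * d])

# OLS on the stacked design reproduces the per-sample fits; the ML
# estimate of sigma^2 divides the pooled residual sum of squares by 2n.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2_mle = resid @ resid / (2 * n)
print(sigma2_mle)  # should be near sigma**2 = 2.25 for moderate n
```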

1 Answer


Estimators do not exist "out there"; we are searching for the "best" way to construct them. As functions, they may have the same general structural form, but their results, the estimates, depend on the input we give them.

The OP states two different models. In order for these models to have any relation to estimation, each must be accompanied by its own sample of observations, which is presumably the case. Then, applying maximum likelihood estimation, we will obtain one estimate if we use only the first sample, another estimate if we use only the second sample, and yet a third estimate if we use both samples. And all will be "maximum likelihood" estimates, each being the argmax of a different log-likelihood, since in each case the conditioning sample will be different.

Since the assumption is that all the $\epsilon$'s are i.i.d. random variables, the two samples can be pooled, and the joint density/likelihood can be written as the product of $n+n=2n$ marginal normal densities. Then take logarithms, etc.
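
To make the "take logarithms, etc." step concrete, here is a sketch of the derivation in the question's notation. The pooled log-likelihood is

$$\ell\left(\alpha_1,\beta_1,\alpha_2,\beta_2,\sigma^2\right)=-n\ln\left(2\pi\sigma^2\right)-\frac{1}{2\sigma^2}\sum_{i=1}^{2}\sum_{j=1}^{n}\left(y_{ij}-\alpha_i-\beta_i x_{ij}\right)^2,$$

and setting $\partial\ell/\partial\sigma^2=0$ after substituting the maximizing $\hat\alpha_i,\hat\beta_i$ (the per-sample OLS estimates) yields

$$\hat\sigma^2_{ML}=\frac{1}{2n}\sum_{i=1}^{2}\sum_{j=1}^{n}\left(y_{ij}-\hat\alpha_i-\hat\beta_i x_{ij}\right)^2.$$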

Can we say anything about which of the three MLEs is "better" to use? Of course, this requires defining a criterion against which the three MLEs will be judged (e.g., mean squared error, MSE). Instinctively, we would go with the one that uses both samples as one larger sample.

Finally, an interesting and instructive option one could consider is to see what would happen (in terms of estimator properties) if one obtained the two MLEs separately from the two samples and then considered some combination of them. Under the assumed normality, the finite-sample distribution of the variance estimators is known. A related post on this option (but not in a regression context) is this one.
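
For concreteness, a sketch under the stated normality assumption: each separate-sample MLE $\hat\sigma_i^2$ (with two mean parameters estimated per equation) satisfies $n\hat\sigma_i^2/\sigma^2\sim\chi^2_{n-2}$, so, as one illustrative combination, the equal-weight average $\tilde\sigma^2=\tfrac{1}{2}\left(\hat\sigma_1^2+\hat\sigma_2^2\right)$ has

$$\frac{2n\,\tilde\sigma^2}{\sigma^2}\sim\chi^2_{2n-4},\qquad E\left[\tilde\sigma^2\right]=\frac{n-2}{n}\,\sigma^2.$$

With equal sample sizes, this particular combination coincides with the pooled MLE derived above; other weightings could be studied the same way.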

Alecos Papadopoulos