I have dummy-coded a categorical regression and run OLS to get parameter estimates, along the lines of:
$$ y= \left( \begin{array}{ccc} 1 & 0 & 0\\ 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 1\\ 1 & 0 & 1\\ \vdots & \vdots & \vdots\\ \end{array} \right) \beta+\epsilon $$
which gives me $\beta_0$, $\beta_1$, and $\beta_2$. I want to do a joint hypothesis test of $\beta_1=\beta_2=0$ -- i.e. an $F$-test, in the style of an ANOVA.
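For concreteness, here is the setup sketched in numpy with made-up data (all numbers hypothetical: three groups of two observations, with group means 2, 5, and 3, and the first group as the reference level):

```python
import numpy as np

# Hypothetical dummy-coded design: 3 groups, 2 observations each,
# intercept column of ones, group 1 as the reference level.
X = np.array([
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 1],
], dtype=float)

# Made-up responses: group means 2, 5, 3 plus a little noise.
y = np.array([1.9, 2.1, 4.8, 5.2, 2.9, 3.1])

# OLS estimate: beta_hat solves min ||y - X beta||^2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # [2. 3. 1.]: reference mean, then the two group offsets
```

With balanced dummy coding like this, $\beta_0$ is the reference-group mean and $\beta_1$, $\beta_2$ are the other groups' offsets from it.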
I read somewhere the following formulae for joint hypothesis testing:
$$ t=\frac{1}{n}\beta^T\Sigma\beta $$ $$ p=F_{cdf}(\frac{1}{t},n,\mathrm{dfe}) $$
where $\Sigma$ is the covariance matrix of the parameter estimates, $n$ is the number of hypotheses (2 in this case?), and dfe is perhaps the number of rows in $y$ minus 2.
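To pin down what that recipe computes, here is a literal transcription of the formula exactly as quoted, with made-up numbers throughout ($\beta$ taken to be just the two tested coefficients, and $\Sigma$ their $2\times 2$ covariance block); this is purely illustrative and makes no claim that the formula itself is correct:

```python
import numpy as np
from scipy import stats

# All numbers below are hypothetical, for illustration only.
beta = np.array([3.0, 1.0])        # the tested coefficients (assumption: beta_0 excluded)
Sigma = np.array([[0.04, 0.02],    # hypothetical covariance of those two estimates
                  [0.02, 0.04]])
n = 2                              # number of hypotheses being tested
dfe = 6 - 3                        # residual df (assumption: rows of y minus columns of X)

t = beta @ Sigma @ beta / n        # the quoted statistic, as written
p = stats.f.cdf(1.0 / t, n, dfe)   # the quoted p-value, as written
print(t, p)
```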
I am not good at algebra, and wondered:
- is this right?
- If so, can I just examine $[ \beta_1, \beta_2 ]$ ignoring $\beta_0$?
- How can I obtain the covariance matrix of the parameter estimates, $\Sigma$? I have googled "parameter covariance" and found this crossvalidated answer, which looks very complex, and I can't figure out how to do it with simple matrix ops. (My model actually has more columns than this.)
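On the last point, one standard route with plain matrix ops (assuming homoskedastic errors) is $\hat\Sigma = \hat\sigma^2 (X^T X)^{-1}$, with $\hat\sigma^2$ estimated from the residuals. A minimal sketch with the same made-up data as above:

```python
import numpy as np

# Hypothetical data: 3 dummy-coded groups, 2 observations each.
X = np.array([[1, 0, 0], [1, 0, 0], [1, 1, 0],
              [1, 1, 0], [1, 0, 1], [1, 0, 1]], dtype=float)
y = np.array([1.9, 2.1, 4.8, 5.2, 2.9, 3.1])

# OLS via the normal equations: beta_hat = (X^T X)^{-1} X^T y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Estimate the error variance from the residuals.
resid = y - X @ beta_hat
dfe = X.shape[0] - X.shape[1]      # residual degrees of freedom: rows minus columns
sigma2 = resid @ resid / dfe       # unbiased estimate of the error variance

# Parameter covariance, assuming homoskedastic errors.
Sigma = sigma2 * np.linalg.inv(X.T @ X)
print(Sigma)                       # 3x3 covariance of (beta_0, beta_1, beta_2)
```

The $2\times 2$ block of `Sigma` corresponding to $(\beta_1, \beta_2)$ is the piece a joint test of those two coefficients would use; this generalizes unchanged to designs with more columns.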