Referring to the link, my question concerns the actual computation of the variance of the forecast. The variance given there is $\sigma^2 [1+X^*(X'X)^{-1}(X^*)']$. As noted in the linked post, if $X$ is a row vector, then $X'X$ will be a singular matrix, so its inverse does not exist (and numerical attempts to compute it blow up to very large values). My $X$ values are deterministic and readily available without any error. Is there any way to make this inversion possible so that we can actually compute the variance?
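To make the question concrete, here is a small illustrative sketch (my own example, not from the linked post): with a single observation, $X$ is a $1\times K$ row vector, so $X'X$ is a $K\times K$ matrix of rank 1 and is singular for $K>1$. A Moore-Penrose pseudoinverse still exists, however:

```python
import numpy as np

# One observation, K = 3 predictors: X is a 1 x 3 row vector.
X = np.array([[1.0, 2.0, 3.0]])
XtX = X.T @ X                        # 3 x 3 matrix of rank 1: singular

print(np.linalg.matrix_rank(XtX))   # prints 1, not 3

# np.linalg.inv(XtX) would raise LinAlgError here, but the
# Moore-Penrose pseudoinverse always exists:
XtX_pinv = np.linalg.pinv(XtX)
```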
- In this context it is commonly understood that $\left(X^\prime X\right)^{-1}$ is a *generalized inverse* (or, more accurately, a [constrained pseudoinverse](http://en.wikipedia.org/wiki/Bott%E2%80%93Duffin_inverse)); that is, $\left(X^\prime X\right)^{-1} X^{*\prime}$ refers to any solution $\hat{\beta^\prime}$ of the equation $X^\prime X \hat{\beta^\prime} = X^{*\prime}.$ The value of $X^{*}\hat{\beta^\prime}$ is well defined if and only if $X^{*\prime}$ lies within the span of the rows of $X$. – whuber May 28 '14 at 15:38
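This reading can be sketched numerically (an illustrative example of mine, not from the comment): interpret $(X'X)^{-1}X^{*\prime}$ as any solution $b$ of $X'X\,b = X^{*\prime}$, which `np.linalg.lstsq` solves even when $X'X$ is singular, provided $X^{*\prime}$ lies in the row span of $X$:

```python
import numpy as np

# Rank-deficient design: a single 1 x 3 observation, so X'X has rank 1.
X = np.array([[1.0, 2.0, 4.0]])
# Hypothetical new predictor row that lies in the row span of X (= 2 * X).
Xstar = np.array([2.0, 4.0, 8.0])

# Minimum-norm solution b of the singular system (X'X) b = Xstar'.
b, *_ = np.linalg.lstsq(X.T @ X, Xstar, rcond=None)

# Because Xstar is in the row span of X, the equation holds exactly,
# so the quadratic form Xstar @ b is well defined.
print(np.allclose(X.T @ X @ b, Xstar))   # prints True
```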
1 Answer
Is $\mathbf{X}$ a row vector? If your regression is $\mathbf{y}=\mathbf{X}\boldsymbol{\beta}+\mathbf{e}$, i.e. $y_i=\beta_0+\beta_1x_{i1}+\beta_2x_{i2}+\cdots+\beta_kx_{ik}+e_i$, then $\mathbf{X}$ can be a row vector only if you have a single observation. If $\mathbf{X}$ is a full-rank matrix of dimension $N\times K$ with $N>K$, then $\mathbf{X}'\mathbf{X}$ is not singular.
$\mathbf{X}^*$ can be, and often is, a row vector such that $$\sigma^2[1+\underset{1\times 1}{\underbrace{\underset{1\times K}{\mathbf{X}^*}\;\underset{K\times K}{\underbrace{(\underset{K\times N}{\mathbf{X}'}\underset{N\times K}{\mathbf{X}})}}^{-1}\;\underset{K\times 1}{(\mathbf{X}^*)'}}}]=\sigma^2[1+scalar]$$
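A quick numeric check of the dimensions above (my own illustrative example, with an assumed error variance $\sigma^2=2$): with a full-rank $N\times K$ design, $\mathbf{X}'\mathbf{X}$ is invertible and the quadratic form collapses to a positive scalar, so the forecast variance exceeds $\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 50, 3
# Full-rank N x K design: intercept column plus random predictors.
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
# X* is a 1 x K row of new predictor values (intercept first).
Xstar = np.array([1.0, 0.5, -1.0])

sigma2 = 2.0                                     # assumed error variance
quad = Xstar @ np.linalg.inv(X.T @ X) @ Xstar    # the 1 x 1 quadratic form
forecast_var = sigma2 * (1.0 + quad)

print(forecast_var > sigma2)                     # prints True
```

The quadratic form is positive because $\mathbf{X}'\mathbf{X}$ is positive definite when $\mathbf{X}$ has full column rank.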

Sergio
- The point of the question is that when $X$ is *not* of full rank, then the inverse of $X^\prime X$ is not defined. – whuber May 28 '14 at 15:39
- @whuber in the OP I read "if $X$ is a row vector, then $X′X$ will be a singular matrix", which is right because $X$ must be full _column_ rank. But if $X$ is a row vector (not the null vector) then it _is_ full rank: $rk(X)=1$ and could never be $>1$. – Sergio May 28 '14 at 15:56
- But that last remark is irrelevant: your formulas are still undefined unless the *column* rank of $X$ is maximal. – whuber May 28 '14 at 15:59
- @whuber please, don't be lazy and read again ;-) I've written: "if $X$ is a full rank matrix of dimension $N\times K$, $N>K$" etc. and that means $rk(X)=K$, i.e. my formula _is_ defined. Please. – Sergio May 28 '14 at 16:26
- Sergio, the question concerns the case where $X$ is *not* a "full rank matrix", so my concern is that your answer misses the point. Whether I am correct or not, do **not** accuse me, or anyone else on this site, of being "lazy" in order to make your point or advance an argument: such writing is offensive and not accepted on this site. Before you do anything else on SE, please visit our [help] at http://stats.stackexchange.com/help/behavior and read it thoroughly. – whuber May 28 '14 at 16:54
- @whuber the question concerns the case _where $X$ is a row vector_ (always full rank if not the null vector), _not_ where $X$ is not a "full rank matrix". So _you_ are missing the point. Why do _you_ accuse me? Why don't you give _your_ answer instead of accusing me? – Sergio May 28 '14 at 16:58
- When $X$ is a row vector of dimension $N$, the issue before us concerns inverting $X^\prime X$, which is a rank-$1$ $N\times N$ matrix. The O.P. observes that the inverse does not exist in this case (at least when $N\gt 1$). Depending on what you really mean by "full rank matrix," either your answer does not address this situation or it incorrectly asserts that $X^\prime X$ is invertible. I accuse *you* of nothing but I am concerned about the relevance and correctness of the *reply* you have posted, that's all. – whuber May 28 '14 at 17:05
- Not at all. The OP observes that the inverse does not exist when $X$ is $1\times K$ (why $N$?), and this is right. He misses that __in the linked post__ the row vector is $X^*$, not $X$: "Let $X^∗$ be a row vector containing the values of the predictors for the forecasts" etc. Please notice that $X^*(X'X)^{-1}(X^*)'$ _is not defined_ if the row vector is of dimension $N$, because $(X'X)^{-1}$ is of dimension $K\times K$. The OP is about a row vector of dimension $K$, not $N$. The OP is about the _variance of the forecast_. Can you see what you've missed? – Sergio May 28 '14 at 17:18
- I think you and I are using $K$ and $N$ differently; I apologize for not being consistent with your usage, but I see no error or ambiguity in what I wrote. I also agree that one possible interpretation of the question is that the O.P. has confused $X^{*}$ with $X$. But if that is your interpretation, *then please edit your answer to make it clear that is the point you are addressing.* Once more--for the last time--I request that you cease from attacking me personally and stick with the issues. – whuber May 28 '14 at 17:57
- I'll edit my answer if someone else agrees with you, because I can't conceive any other interpretation. BTW: if saying "you've missed the point" is attacking, that's what you've done again and again. Asking instead of asserting would be more polite. Wouldn't it? – Sergio May 28 '14 at 18:03
- Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/14781/discussion-between-whuber-and-sergio). – whuber May 28 '14 at 18:04