
Is anyone aware of an orthogonal multiple regression (total least squares) library implemented in, say, R, SciPy, MATLAB, Octave, etc.? (Or even Fortran/C...) If I'm not mistaken it would not be difficult to write, but I just wanted to check.
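For what it's worth, SciPy does ship an orthogonal distance regression module, `scipy.odr`. For the plain total least squares case, the fit can also be written in a few lines via the SVD of the augmented data matrix; here is a minimal sketch in NumPy (the function name `tls` and the simulated data below are mine, for illustration only):

```python
import numpy as np

def tls(X, y):
    """Total least squares (orthogonal regression) coefficients.

    X : (n, p) array of predictors, y : (n,) response.
    Returns a (p, 1) array of coefficients, computed from the
    right singular vectors of the augmented matrix [X | y].
    """
    n, p = X.shape
    Z = np.hstack([X, y.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    V = Vt.T
    Vxy = V[:p, p:]   # top-right block of V
    Vyy = V[p:, p:]   # bottom-right block of V
    return -Vxy / Vyy

# Simulated example: y = 2 x + small noise
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))
y = 2.0 * x[:, 0] + rng.normal(scale=0.1, size=500)
beta = tls(x, y)   # close to 2.0
```

For several predictors the last line of `tls` becomes `-Vxy @ np.linalg.inv(Vyy)`; the rank-one form above suffices for a single response.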

Second question: my actual application is a multivariate regression, with two matrices $\mathbf{Y}$ and $\mathbf{X}$. If I find total least squares estimators $\hat{\boldsymbol\beta}$ and $\hat{\boldsymbol\beta}'$ for the two models $\mathbf{Y} = \hat{\boldsymbol{\beta}} \mathbf{X}$ and $\mathbf{X} = \hat{\boldsymbol{\beta}}' \mathbf{Y}$, would the inverses of the transposed elements equal one another, i.e. $\hat{\beta}_{ji}^{-1} = \hat{\beta}'_{ij}$? It seems that this equality does not necessarily hold, unlike the classic case where the scalar values satisfy $\beta^{-1} = \beta'$ when $\mathbf{Y}$ and $\mathbf{X}$ are simply vectors rather than matrices, but I also wanted to check whether this is a known fact.

Thanks.

Macro
hatmatrix
    Of possible interest: [Effect of switching response and explanatory variable in simple linear regression](http://stats.stackexchange.com/q/20553/930). – chl Jul 03 '12 at 19:25

1 Answer


Perhaps I do not understand your question, but the result does not hold even for scalars. Suppose $$ Y=\beta X + \varepsilon, $$ where $Y$, $X$, and $\varepsilon$ are scalar random variables. Suppose $E[X \varepsilon ] = 0$. Then OLS of $Y$ on $X$ converges to $\beta$. But OLS of $X$ on $Y$ does not converge to $1/\beta$. Although it is true that $$ X = (1/\beta) Y + \eta $$ where $\eta = -\varepsilon/\beta$, notice that $E[Y \eta] \neq 0$. That is, $Y$ is correlated with $\eta$ (by definition of the first regression model), and hence OLS of $X$ on $Y$ is inconsistent; that is, it will not converge to $1/\beta$.
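A quick simulation illustrates this inconsistency (a sketch of my own, assuming $\beta = 2$, a standard normal $X$, and unit-variance noise): the OLS slope of $X$ on $Y$ converges to $\beta/(\beta^2 + 1) = 0.4$, not $1/\beta = 0.5$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
beta = 2.0
x = rng.normal(size=n)
eps = rng.normal(size=n)       # independent of x, so E[x * eps] = 0
y = beta * x + eps

# OLS slope of y on x: Cov(x, y) / Var(x) -> beta
b_yx = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# OLS slope of x on y: Cov(x, y) / Var(y) -> beta / (beta^2 + 1),
# which is NOT 1 / beta, because y is correlated with eta = -eps / beta
b_xy = np.cov(x, y)[0, 1] / np.var(y, ddof=1)
```

Here `b_yx` lands near 2.0 while `b_xy` lands near 0.4, visibly away from $1/\beta = 0.5$.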

Aelmore
  • The question is not about OLS, however: it is about the "total least squares estimator." This is based on an errors-in-variables model. For scalars it is indeed true that the coefficients are reciprocals. This relationship does not extend to higher dimensions, though (which is obvious just from looking at the dimensions of the matrices involved). – whuber Jul 03 '12 at 19:09
  • "This relationship does not extend to higher dimensions" is what I was looking for; the dimensions of the matrices would be the same if transposed, but it is now a little clearer that they should not be related as simply as in the scalar case. – hatmatrix Sep 30 '12 at 12:19