
I want to estimate the parameters $a$ and $b$ ($\theta = \left[ {\begin{array}{*{20}{c}} a\\ b \end{array}} \right]$) in the nonlinear model:

$$y\left( t \right) = au\left( t \right) + b\exp (u(t)) + e(t)$$

where $e\left( t \right) \sim N\left( {0,1} \right)$ and the $u(t)$ are independent random samples drawn from a uniform distribution on the interval $\left[ {0,1} \right]$. I have $N$ available measurements, $t = 1, \ldots, N$, and I want to estimate $\theta$ by least squares.

Attempt: if I stack the measurements in the matrices

$$Y = \left[ {\begin{array}{*{20}{c}} {y\left( 1 \right)}\\ \vdots \\ {y\left( N \right)} \end{array}} \right],\quad X = \left[ {\begin{array}{*{20}{c}} {u\left( 1 \right)}&{\exp (u\left( 1 \right))}\\ \vdots & \vdots \\ {u\left( N \right)}&{\exp (u\left( N \right))} \end{array}} \right],\quad \bar E = \left[ {\begin{array}{*{20}{c}} {e\left( 1 \right)}\\ \vdots \\ {e\left( N \right)} \end{array}} \right]$$ then the least squares estimate is ${{\hat \theta }_N} = {\left( {{X^T}X} \right)^{ - 1}}{X^T}Y$.

Taking the expected value of the estimator, and using that $\bar E$ is zero-mean and independent of $X$, gives $E\left( {{{\hat \theta }_N}} \right) = E\left( {{{\left( {{X^T}X} \right)}^{ - 1}}{X^T}\left( {X\theta + \bar E} \right)} \right) = \theta + E\left( {{{\left( {{X^T}X} \right)}^{ - 1}}{X^T}\bar E} \right) = \theta$

hence it is unbiased. For the covariance of the estimation error, conditioning on $X$ and using that the noise variance is $1$, I get

$$E\left( {\left( {{{\hat \theta }_N} - \theta } \right){{\left( {{{\hat \theta }_N} - \theta } \right)}^T}} \mid X \right) = {\left( {{X^T}X} \right)^{ - 1}}$$

Could I simplify the covariance further and write it in terms of $N$, so that I can see what happens as $N \to \infty$ and decide whether the estimator is consistent? If not, how else could I conclude whether or not it is consistent?
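As a sanity check, here is a quick Monte Carlo sketch (my own, assuming NumPy; the true values $a = 2$, $b = -1$ and the replication counts are arbitrary choices for illustration). The empirical covariance of the estimator does appear to shrink with $N$:

```python
import numpy as np

rng = np.random.default_rng(0)

def ls_estimate(N, a=2.0, b=-1.0):
    """Simulate one data set of size N and return the LS estimate of (a, b)."""
    u = rng.uniform(0.0, 1.0, size=N)          # u(t) ~ U[0, 1], independent
    e = rng.standard_normal(N)                 # e(t) ~ N(0, 1)
    y = a * u + b * np.exp(u) + e
    X = np.column_stack([u, np.exp(u)])        # regressors: [u(t), exp(u(t))]
    theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta_hat

# Empirical covariance of theta_hat over repeated simulations:
for N in (100, 1000):
    est = np.array([ls_estimate(N) for _ in range(1000)])
    print(N, np.cov(est.T))                    # entries shrink roughly like 1/N
```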

Thanks in advance:)

  • This model is *linear* in the parameters, which makes it a perfectly standard least squares situation. – whuber Jan 12 '22 at 21:02
  • I see. But could I get a closed-form expression for the covariance in terms of $N$, and how would I do that? In the case of a normally distributed input it is easy, since $(X^TX)^{-1}$ is the inverse of a correlation matrix – Alex Mathy Jan 12 '22 at 21:19
  • $X^\prime X$ depends on $N$ already: write it out in a small example. See https://stats.stackexchange.com/a/108862/919 for details (in which the factor of $1/N$ appears explicitly). – whuber Jan 12 '22 at 22:37
  • @whuber what do you mean write it out in a small example? – Alex Mathy Jan 12 '22 at 22:46
  • Consider, say, three data points. Write out the $3\times 2$ matrix $X$ and compute $X^\prime X$ and its inverse, both of which are $2\times 2$ matrices. – whuber Jan 12 '22 at 22:58
  • I understand you; however, it would be great if, for instance, the matrix $X'X$ could be approximated via a covariance matrix or something similar, as in the case of white-noise inputs. – Alex Mathy Jan 12 '22 at 23:01
  • You will need to explain that further. What would one use to approximate it, based on what assumptions, to what accuracy? Are you trying to find its distribution under the assumptions you have made about the $u(t)$? – whuber Jan 12 '22 at 23:50
  • Yes, exactly; that was the whole point of my question. If you have, let's say, $u(t)$ and $u(t-1)$ instead of the exponential, you could approximate $X'X$ by $N\left[ {\begin{array}{*{20}{c}} {{R_u}\left( 0 \right)}&{{R_u}\left( 1 \right)}\\ {{R_u}\left( 1 \right)}&{{R_u}\left( 0 \right)} \end{array}} \right]$, and then the covariance of the error is explicit in $N$. I wanted something similar here – Alex Mathy Jan 13 '22 at 00:01
  • I know that the estimation error of this estimator is Gaussian with zero mean and covariance matrix ${\left( {{X^T}X} \right)^{ - 1}}$. – Alex Mathy Jan 13 '22 at 00:02
  • Yes, you will need approximations: describing a fully accurate distribution is truly messy even with just two observations! A form of the multivariate Central Limit Theorem ought to apply. You should edit your post a little to make your intentions clearer to readers. – whuber Jan 13 '22 at 00:03
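In this particular case whuber's suggestion can be made explicit: with $x(t) = \left[ u(t),\ \exp(u(t)) \right]^T$ and $u \sim U[0,1]$, the moment matrix $M = E\left[ x(t)x(t)^T \right]$ has closed-form entries $E[u^2] = 1/3$, $E[u e^u] = 1$ (integration by parts), and $E[e^{2u}] = (e^2-1)/2$. By the law of large numbers $\frac{1}{N}X^TX \to M$, so $\operatorname{Cov}(\hat\theta_N) \approx \frac{1}{N}M^{-1} \to 0$, i.e. the estimator is consistent. A numerical sketch of this approximation (assuming NumPy; the sample size is an arbitrary choice):

```python
import numpy as np

# Closed-form moment matrix M = E[x x^T] for x = [u, exp(u)], u ~ U[0, 1]:
#   E[u^2]    = 1/3
#   E[u e^u]  = 1              (integration by parts)
#   E[e^{2u}] = (e^2 - 1) / 2
M = np.array([[1 / 3, 1.0],
              [1.0, (np.e**2 - 1) / 2]])

# By the LLN, (1/N) X^T X -> M, so Cov(theta_hat) ~ (1/N) inv(M) -> 0.
rng = np.random.default_rng(1)
N = 200_000
u = rng.uniform(0.0, 1.0, size=N)
X = np.column_stack([u, np.exp(u)])

print(X.T @ X / N)            # close to M
print(np.linalg.inv(M) / N)   # approximate covariance of theta_hat for this N
```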

0 Answers