I have the following regression:
$\log(Y) = \alpha + \beta X + \epsilon$
with $E[\epsilon] = 0$ and $\operatorname{var}(\epsilon) = \sigma^2$. There is no distributional assumption on the errors $\epsilon$; in particular, we cannot assume they are normal.
Now, I know that:
$\widehat{\log(Y)} = \hat{\alpha} + \hat{\beta} X$.
I would like to find the expression of the predicted value in the original scale, $\hat{Y}$.
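For concreteness, here is a minimal simulation of this setup that I will reuse below (Python/NumPy; the centered exponential error and all parameter values are just made-up stand-ins for "non-normal"):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate log(Y) = alpha + beta*X + eps with a skewed, mean-zero error.
alpha, beta, sigma = 0.5, 2.0, 0.4
n = 100_000
x = rng.uniform(0.0, 1.0, n)
eps = rng.exponential(sigma, n) - sigma   # E[eps] = 0, var(eps) = sigma^2, not normal
log_y = alpha + beta * x + eps
y = np.exp(log_y)

# OLS fit of log(Y) on X; np.polyfit returns [slope, intercept].
beta_hat, alpha_hat = np.polyfit(x, log_y, 1)
resid = log_y - (alpha_hat + beta_hat * x)
sigma2_hat = resid.var(ddof=2)            # residual variance (two fitted parameters)
```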
I tried a second-order Taylor expansion (around the mean), as follows:
$E[g(Z)] \approx g(\mu) + \frac{1}{2}g''(\mu)\sigma^2,$
where $\mu$ and $\sigma^2$ are the mean and variance of $Z$ (the first-order term drops out because $E[Z - \mu] = 0$).
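Before applying this, here is a quick Monte Carlo sanity check of the approximation for $g(z) = e^z$ (so $g''(z) = e^z$) on a toy non-normal $Z$; the centered-exponential choice is again only an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

mu, sigma = 1.0, 0.3
z = mu + rng.exponential(sigma, 1_000_000) - sigma   # mean mu, variance sigma^2, skewed

print(np.exp(z).mean())                  # Monte Carlo estimate of E[e^Z]
print(np.exp(mu) * (1 + sigma**2 / 2))   # g(mu) + g''(mu)*sigma^2/2 with g = exp
```

The two numbers come out close but not equal, which makes sense: the remainder of the expansion involves the third and higher moments of $Z$, and those are not pinned down by $\mu$ and $\sigma^2$ alone.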
In this case, let $Z = \log(Y)$, so that $Y = e^Z = g(Z)$. Then
$\hat{Y} = E[Y \mid X] = E[e^Z \mid X] \approx g(\mu) + \frac{1}{2}g''(\mu)\sigma^2$
$= e^{\mu} + \frac{1}{2}e^{\mu}\sigma^2 = e^{\mu}\left(1 + \frac{\sigma^2}{2}\right),$
where $\mu = \hat{\alpha} + \hat{\beta} X$ and $\sigma^2$ is estimated by the variance of the residuals.
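In code, continuing from the simulated fit above (so `alpha_hat`, `beta_hat`, and `sigma2_hat` are the quantities defined there), my candidate back-transformation would be:

```python
def predict_y(x_new):
    """Second-order Taylor back-transformation: e^mu * (1 + sigma^2/2)."""
    mu = alpha_hat + beta_hat * x_new
    return np.exp(mu) * (1 + sigma2_hat / 2)

print(predict_y(0.5))   # predicted Y at X = 0.5
```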
However, someone told me that, regardless of the distribution of the errors, the right result should be:
$\hat{Y} = e^{\mu + \sigma^2/2}$
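I notice that $e^{\mu + \sigma^2/2} = e^{\mu} e^{\sigma^2/2}$, and the expansion $e^{\sigma^2/2} = 1 + \sigma^2/2 + \dots$ matches my result to second order, so the two expressions agree when $\sigma^2$ is small but are not identical:

```python
import numpy as np

mu, sigma2 = 1.0, 0.25
print(np.exp(mu) * (1 + sigma2 / 2))   # my Taylor-based expression, ~3.0581
print(np.exp(mu + sigma2 / 2))         # the suggested expression,   ~3.0802
```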
Can anyone please help me figure this one out? Thank you!