
I'm self-studying Introduction to Statistical Learning. Page 19 of the book states the following:

Consider a given estimate $\hat{f}$ and a set of predictors $X$, which yields the prediction $\hat{Y} = \hat{f}(X)$. Assume for a moment that both $\hat{f}$ and $X$ are fixed. Then, it is easy to show that

$$ E(Y-\hat{Y})^2 = E[f(X) + \epsilon - \hat{f}(X)]^2 = [f(X) - \hat{f}(X)]^2 + Var(\epsilon)$$

Question: How exactly is the step from $E[f(X) + \epsilon - \hat{f}(X)]^2$ to $[f(X) - \hat{f}(X)]^2 + Var(\epsilon)$ justified?
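My attempt so far (assuming, as I understand the book's setup, that $\epsilon$ is independent of $X$ with $E[\epsilon] = 0$, so $Var(\epsilon) = E[\epsilon^2]$, and that $f(X) - \hat{f}(X)$ is a fixed constant since $\hat{f}$ and $X$ are fixed): expanding the square,

$$
\begin{aligned}
E\big[f(X) + \epsilon - \hat{f}(X)\big]^2
&= E\Big[\big(f(X) - \hat{f}(X)\big)^2 + 2\big(f(X) - \hat{f}(X)\big)\epsilon + \epsilon^2\Big] \\
&= \big[f(X) - \hat{f}(X)\big]^2 + 2\big[f(X) - \hat{f}(X)\big]E[\epsilon] + E[\epsilon^2] \\
&= \big[f(X) - \hat{f}(X)\big]^2 + Var(\epsilon),
\end{aligned}
$$

where the cross term drops out because $E[\epsilon] = 0$. Is this the intended justification, or am I missing something?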

George
  • Can you add the [self-study] tag and show us what you have done to approach the problem thus far? – Andy Apr 11 '16 at 13:24
  • Possible duplicate of [Proof/Derivation of Residual Sum of Squares (Based on Introduction to Statistical Learning)](http://stats.stackexchange.com/questions/110190/proof-derivation-of-residual-sum-of-squares-based-on-introduction-to-statistica) This question is one of many examples of commonly asked questions. Please take the time to search the archives before posting. See also here: http://stats.stackexchange.com/questions/205044/derivation-of-equation-of-reducible-and-irreducible-error?lq=1 – Sycorax Apr 11 '16 at 13:28

0 Answers