
Thinking about this question, I came across Bartholomew et al (2011), which lists the following assumptions of the linear factor model, assuming $p$ observed variables:

iii) $e_{1}, e_{2}, ..., e_{p}$ are uncorrelated with each other

v) the $f$s are uncorrelated with the $e$s

They write on p181 that

> Assumptions (iii) and (v) imply that the correlations among the $x$s are wholly explained by the factors.

However, as far as I can tell they don't elaborate on this. Why are these conditions sufficient for the factors to wholly explain the observed variables? Are those conditions necessary for the factors to wholly explain the observed variables?
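For concreteness, here is the algebra I believe the claim rests on (a sketch in standard notation, which may differ from the book's). Writing the model for variable $i$ as $x_i = \mu_i + \sum_{r=1}^{k} \lambda_{ir} f_r + e_i$, then for any pair $i \neq j$:

$$
\operatorname{Cov}(x_i, x_j)
= \operatorname{Cov}\!\Big(\sum_{r} \lambda_{ir} f_r + e_i,\ \sum_{s} \lambda_{js} f_s + e_j\Big)
= \sum_{r,s} \lambda_{ir}\lambda_{js}\operatorname{Cov}(f_r, f_s),
$$

where assumption (v) eliminates the factor–error cross terms and assumption (iii) eliminates $\operatorname{Cov}(e_i, e_j)$, so only factor terms remain in the off-diagonal covariances. My question is whether this reading is right, and whether the converse holds.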

Bartholomew, D. J., Steele, F., Galbraith, J., & Moustaki, I. (2008). Analysis of multivariate social science data. CRC press.

  • This site has accumulated a number of Q/A explaining the common _FA model_ and the _FA fundamental theorem_. As an example, I may point you to my answer [here](http://stats.stackexchange.com/a/94104/3277). There are further links to important threads about the topic. – ttnphns Jun 05 '16 at 21:02
  • In short: 1) Each variable is decomposed into a common factor part and the unique variate, "error". 2) "Errors" from all the variables are uncorrelated with the common factors. 3) The "errors" are also uncorrelated with each other. Consequently, correlatedness between the variables can be fully explained only by the common factors. – ttnphns Jun 05 '16 at 22:10
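The decomposition described in the comments above can be checked numerically. The sketch below (with made-up loadings, not taken from the book) simulates data from a linear factor model with orthonormal factors and a diagonal error covariance, and verifies that the off-diagonal sample covariances are reproduced by the loadings alone:

```python
import numpy as np

# Under x = Lambda f + e, with Cov(f) = I, Cov(e) = Psi diagonal
# (assumption iii), and f uncorrelated with e (assumption v):
#   Cov(x) = Lambda Lambda^T + Psi,
# so every off-diagonal covariance comes from Lambda Lambda^T alone.

rng = np.random.default_rng(0)
p, k, n = 4, 2, 500_000                     # 4 variables, 2 factors

Lambda = rng.normal(size=(p, k))            # hypothetical loadings
Psi = np.diag(rng.uniform(0.5, 1.5, p))     # diagonal error covariance

f = rng.normal(size=(n, k))                 # factors, Cov(f) = I
e = rng.normal(size=(n, p)) @ np.sqrt(Psi)  # errors, independent of f
x = f @ Lambda.T + e

sample = np.cov(x, rowvar=False)
common = Lambda @ Lambda.T                  # factor-implied covariances

# Off-diagonal entries of the sample covariance match Lambda Lambda^T
# up to Monte Carlo error; Psi only affects the diagonal.
off = ~np.eye(p, dtype=bool)
print(np.max(np.abs(sample[off] - common[off])))
```

The diagonal is the one place where `Psi` enters, which is why the factors explain the correlations among the $x$s but not their full variances.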

0 Answers