TL;DR -- your intuition is wrong, and the model is identified.
THE GORY DETAILS
This is a simultaneous equations model from econometrics. Any and every SEM modeler needs to understand these models before entering any MPlus or lavaan or Stata syntax. (Well, any and every SEM modeler needs to know regression at the level of LSI, and multivariate analysis at the level of Mardia-Kent-Bibby. I know I ask for a lot.) To read more on simultaneous equations, see Chapter 9 of black Wooldridge and Chapter 4 of Bollen's Bible.
Let me rewrite this as follows, with PCS $= y_1$, MCS $= y_2$, and for simplicity one regressor for each: $z_1$ = FIQ, $z_2$ = the combined effect of BDI, STAI-I, STAI-II, and Social support, and $z_3$ = Self-efficacy. We then have the structural model
$$
\left\{
\begin{array}{ll}
y_1 & = \gamma_{21} y_2 + \delta_{11} z_1 + \delta_{31} z_3 + e_1 \\
y_2 & = \gamma_{12} y_1 + \delta_{12} z_1 + \delta_{22} z_2 + e_2
\end{array}
\right.
$$
This is to make things compatible with Wooldridge... which happens to be on a bookshelf closer than Bollen. I think the rationale for his notation is that $\gamma_{kl}$ means regression $y_k \rightarrow y_l$, etc.
We then have, in Wooldridge's matrix notation,
$$
\mathbf{y} \Gamma +
\mathbf{z} \Delta
+
\mathbf{e}
= 0
$$
where
$$
\mathbf{y} =
\begin{pmatrix}
y_1 , y_2
\end{pmatrix},
\Gamma =
\begin{pmatrix}
-1 & \gamma_{12} \\ \gamma_{21} & -1
\end{pmatrix},
\mathbf{z} =
\begin{pmatrix}
z_1 , z_2 , z_3
\end{pmatrix},
\Delta =
\begin{pmatrix}
\delta_{11} & \delta_{12} \\
0 & \delta_{22} \\
\delta_{31} & 0
\end{pmatrix}
$$
where the zeroes encode the omitted paths in the diagram. Let us also denote the number of endogenous variables by $G$ (here, $G=2$), the number of exogenous variables by $M$ (here, $M=3$), and define
$$
\Sigma = \mathbb{E} \mathbf{e}' \mathbf{e} \equiv {\rm Cov} \, \mathbf{e},
B = \begin{pmatrix} \Gamma \\ \Delta \end{pmatrix}
$$
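For concreteness, here is a minimal sketch of these matrices in sympy (my choice of tool, not anything the model requires); the symbol names simply mirror the notation above, and the zeros encode the omitted paths.

```python
import sympy as sp

# Structural coefficients, named after the notation in the text
g12, g21 = sp.symbols('gamma_12 gamma_21')
d11, d12, d22, d31 = sp.symbols('delta_11 delta_12 delta_22 delta_31')

# Gamma: G x G coefficients on the endogenous variables (diagonal normalized to -1)
Gamma = sp.Matrix([[-1,  g12],
                   [g21, -1 ]])

# Delta: M x G coefficients on the exogenous variables;
# the zeros are the omitted paths z2 -> y1 and z3 -> y2
Delta = sp.Matrix([[d11, d12],
                   [0,   d22],
                   [d31, 0  ]])

# Stacked (G + M) x G coefficient matrix B
B = sp.Matrix.vstack(Gamma, Delta)
print(B)
```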
Each structural form has a corresponding reduced form:
$$
\mathbf{y}
= \mathbf{z} (-\Delta \Gamma^{-1}) + \mathbf{e} (-\Gamma)^{-1}
\equiv \mathbf{z} \Pi + \mathbf{v}
$$
This is always identified -- heck, that's a linear regression in each component. But the matrix $\Pi$ by itself is not interpretable, and the issue is backing out the coefficients of the structural model of interest, $\Gamma$ and $\Delta$, from it.
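A quick numerical illustration of that point, with made-up parameter values (none of this comes from the actual data): simulate the reduced form and note that plain equation-by-equation OLS recovers $\Pi$ with no identification machinery at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up structural parameters obeying the zero restrictions above
Gamma = np.array([[-1.0,  0.4],
                  [ 0.3, -1.0]])
Delta = np.array([[0.5, 0.7],
                  [0.0, 0.6],
                  [0.8, 0.0]])

Pi = -Delta @ np.linalg.inv(Gamma)      # reduced-form coefficient matrix

# Simulate the reduced form y = z Pi + v with v = e (-Gamma)^{-1}
n = 100_000
z = rng.normal(size=(n, 3))
e = rng.normal(size=(n, 2))
y = z @ Pi + e @ np.linalg.inv(-Gamma)

# Equation-by-equation OLS of y on z recovers Pi
Pi_hat, *_ = np.linalg.lstsq(z, y, rcond=None)
print(np.round(Pi_hat - Pi, 2))         # approximately zero
```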
If you transform the parameters post-multiplying by an arbitrary matrix $F$, then the system
$$
\mathbf{y} \Gamma F+
\mathbf{z} \Delta F
+
\mathbf{e} F
= 0
$$
has the identical reduced form. The key issue is to have structural constraints on $B$ and $\Sigma$. If, for every nonsingular $G\times G$ matrix $F\neq I$, transforming the parameters by $F$ violates those constraints, then you have identification! If, however, you can find an $F$ such that the restrictions stay put (so that $BF$ has the same structure and relations between its entries as $B$ -- e.g., zeroes in the same places -- and $F' \Sigma F$ has the same structure and relations between entries as $\Sigma$), then your model is not identified: the information contained in the reduced form does not yield a unique solution for the structural form.
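The same point in a few lines of numpy, again with made-up parameter values: a generic nonsingular $F$ leaves the reduced form untouched, but it wrecks the zero restrictions in $\Delta$, which is exactly what rules it out here.

```python
import numpy as np

# Made-up structural parameters with the model's zero restrictions
Gamma = np.array([[-1.0,  0.4],
                  [ 0.3, -1.0]])
Delta = np.array([[0.5, 0.7],
                  [0.0, 0.6],
                  [0.8, 0.0]])

# An arbitrary nonsingular G x G transformation F != I
F = np.array([[1.0, 0.5],
              [0.2, 1.0]])

Pi_original    = -Delta       @ np.linalg.inv(Gamma)
Pi_transformed = -(Delta @ F) @ np.linalg.inv(Gamma @ F)

print(np.allclose(Pi_original, Pi_transformed))   # True: same reduced form

# ...but Delta F no longer has zeros where the model says it must,
# so this F is ruled out by the structural constraints
print(np.round(Delta @ F, 2))
```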
Here are the counting rules establishing identification.
- (equation-by-equation order condition with exclusion restrictions, Theorem 9.1 of Wooldridge 2010): a necessary condition for identification of any particular equation is that the number of exogenous variables $z$ excluded from that equation must be at least as large as the number of endogenous variables $y$ included in it on the right-hand side.
In the first equation of the system, $y_1$ is on the LHS, there is one included endogenous variable, $y_2$, and one excluded exogenous variable, $z_2$. Check. The second equation has one included endogenous variable, $y_1$, and one excluded exogenous variable, $z_3$. Check.
- (equation-by-equation rank condition, Theorem 2) Consider equation $k$ in the system and the corresponding column $\beta_k$ of the matrix $B$. Let the restrictions on the coefficients in that equation be expressed as $R_k \beta_k=0$; if there are $J_k$ restrictions, then $R_k$ is a $J_k \times (G+M)$ matrix. Then equation $k$, and hence the set of parameters $\beta_k$, is identified if and only if
$$
{\rm rank} \,
R_k B = G-1
$$
For the first equation, the restriction is $\delta_{21}=0$, i.e., $z_2$ is excluded; hence $R_1=(0,0,0,1,0)$, and
$$
R_1 B = (0,0,0,1,0)
\begin{pmatrix}
-1 & \gamma_{12} \\
\gamma_{21} & -1 \\
\delta_{11} & \delta_{12} \\
0 & \delta_{22} \\
\delta_{31} & 0
\end{pmatrix}
= (0,\delta_{22})
$$
If $\delta_{22}\neq0$, the rank is $1=G-1$, and this equation is identified, unless you have the disastrous case of empirical underidentification with $\delta_{22}=0$ or close to zero in the population.
For the second equation, the restriction is $\delta_{32}=0$, i.e., $z_3$ is excluded; hence $R_2=(0,0,0,0,1)$, and
$$
R_2 B = (0,0,0,0,1)
\begin{pmatrix}
-1 & \gamma_{12} \\
\gamma_{21} & -1 \\
\delta_{11} & \delta_{12} \\
0 & \delta_{22} \\
\delta_{31} & 0
\end{pmatrix}
= (\delta_{31},0)
$$
If $\delta_{31}\neq0$, this equation is identified, too. (Both rank computations are also spelled out in a short computational sketch after this list.)
- (system order condition, Theorem 3) a necessary condition for the $k$-th equation to be identified is $J_k \ge G-1$, i.e., the number of restrictions is at least as large as the number of endogenous variables minus 1.
Both equations have one restriction, and that satisfies the system order condition.
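If you want the machine to do the bookkeeping, here is a short sympy sketch of both rank checks; the symbol names mirror the notation above. Note that sympy treats symbolic entries as generically nonzero, which is precisely the empirical-underidentification caveat: the computed rank is the rank away from the knife-edge cases $\delta_{22}=0$ or $\delta_{31}=0$.

```python
import sympy as sp

g12, g21 = sp.symbols('gamma_12 gamma_21')
d11, d12, d22, d31 = sp.symbols('delta_11 delta_12 delta_22 delta_31')

# Stacked coefficient matrix B, rows ordered as (y1, y2, z1, z2, z3)
B = sp.Matrix([[-1,  g12],
               [g21, -1 ],
               [d11, d12],
               [0,   d22],
               [d31, 0  ]])

G = 2  # number of endogenous variables

# Restriction matrices: R1 encodes delta_21 = 0 (z2 excluded from equation 1),
# R2 encodes delta_32 = 0 (z3 excluded from equation 2)
R1 = sp.Matrix([[0, 0, 0, 1, 0]])
R2 = sp.Matrix([[0, 0, 0, 0, 1]])

for k, Rk in enumerate([R1, R2], start=1):
    RkB = Rk * B
    print(f"equation {k}: R_k B = {RkB.tolist()}, "
          f"rank = {RkB.rank()} (need G - 1 = {G - 1})")
```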
The model would have been identified with either one of BDI, STAI-I, or STAI-II loading on MCS. A free correlation of the errors turns out to be irrelevant.
Wooldridge also discusses using covariance restrictions to achieve identification in Section 9.4.2. This is generally considered an obscure and unreliable practice, and there are no rules as specific as the rank and order conditions. It is not applicable to this situation anyway, as the covariance between the error terms $e_1$ and $e_2$ is left unrestricted.