A model is identifiable if only a single set of parameter values yields the best fit.
For example, consider the classic ANOVA model:
$$
y_{ij} = \mu + \alpha_j + \varepsilon_{ij}
$$
where $y_{ij}$ represents observed scores decomposed into a population mean, $\mu$, a mean shift, $\alpha_j$, associated with condition $j$, and each unit's individual divergence from its condition's mean, $\varepsilon_{ij}$. When there are $J$ conditions, there are $J+1$ parameters to fit in this model. Without additional constraints, this model is unidentifiable; for instance, if three conditions had means $3$, $4$, and $5$, they could be fit equally well by:
$$
\begin{array}{llll}
\mu = 1 \qquad & \alpha_1 = \phantom{-}2 \qquad & \alpha_2 = \phantom{-}3 \qquad & \alpha_3 = \phantom{-}4 \\
\mu = 4 \qquad & \alpha_1 = -1 \qquad & \alpha_2 = \phantom{-}0 \qquad & \alpha_3 = \phantom{-}1 \\
\mu = 9 \qquad & \alpha_1 = -6 \qquad & \alpha_2 = -5 \qquad & \alpha_3 = -4 \\
\multicolumn{4}{c}{\vdots}
\end{array}
$$
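A minimal sketch of this non-identifiability, using hypothetical data whose condition means are $3$, $4$, and $5$: each of the parameter sets above reproduces the same fitted means and the same residual sum of squares, so the data cannot prefer one over another.

```python
import numpy as np

# Hypothetical observations in three conditions with sample means 3, 4, and 5.
y = np.array([2.0, 4.0, 3.0, 5.0, 4.0, 6.0])
condition = np.array([0, 0, 1, 1, 2, 2])

# Three of the infinitely many (mu, alpha) sets that fit the cell means equally well.
candidates = [
    (1.0, np.array([2.0, 3.0, 4.0])),
    (4.0, np.array([-1.0, 0.0, 1.0])),
    (9.0, np.array([-6.0, -5.0, -4.0])),
]

for mu, alpha in candidates:
    fitted = mu + alpha[condition]       # mu + alpha_j for each observation
    rss = np.sum((y - fitted) ** 2)      # residual sum of squares
    print(f"mu={mu:4.1f}  alpha={alpha}  fitted means={mu + alpha}  RSS={rss:.2f}")

# Every candidate yields fitted means (3, 4, 5) and an identical RSS,
# so no unique best-fitting parameter set exists: the model is unidentifiable.
```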
(Therefore, in practice the ANOVA model is fit with an additional constraint such as $\frac{1}{N}\sum_j n_j\alpha_j = 0$.)
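Under that constraint the fit becomes unique: $\mu$ is absorbed by the grand mean and each $\alpha_j$ is pinned to its condition's deviation from it,
$$
\hat{\mu} = \frac{1}{N}\sum_{j}\sum_{i} y_{ij} = \bar{y}, \qquad \hat{\alpha}_j = \bar{y}_j - \bar{y}.
$$
With balanced cells and condition means of $3$, $4$, and $5$, the only admissible solution is therefore $\hat{\mu} = 4$ and $(\hat{\alpha}_1, \hat{\alpha}_2, \hat{\alpha}_3) = (-1, 0, 1)$, the second row of the array above.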
Although issues with identifiability are trivial in the above example, they can be more subtle in other contexts. Identifiability concerns can arise, for example, when fitting structural equation models (SEM).