Let's first define the following objects: in a statistical model $M$ that is used to model $Y$ as a function of $X$, there are $p$ parameters, collected in the vector $\theta$. These parameters may vary within the parameter space $\Theta \subset \mathbb{R}^p$. We are not interested in estimating all of these parameters, but only a certain subset, say $q \leq p$ of them, which we denote $\theta^0$ and which varies within the parameter space $\Theta^0 \subset \mathbb{R}^q$. The model $M$ maps the variables $X$ and the parameters $\theta$ to $Y$; this mapping is defined by $M$ together with the parameters.
Within this setting, identifiability is a statement about observational equivalence. In particular, if the parameters $\theta^0$ are identifiable w.r.t. $M$, then it holds that $\nexists \theta^1 \in \Theta^0: \theta^1 \neq \theta^0, M(\theta^1) = M(\theta^0)$. In words, there does not exist a different parameter vector $\theta^1$ that would induce the same data generating process, given our model specification $M$.
To make these concepts more concrete, I give two examples.
Example 1: Define for $\theta = (a,b)$; $X\sim N(\mu, \sigma^2I_{n}); \varepsilon \sim N(0, \sigma_e^2 I_{n})$ the simple statistical model $M$:
\begin{align}
Y = a+Xb+\varepsilon
\end{align}
and suppose that $(a,b) \in \mathbb{R}^2$ (so $\Theta = \mathbb{R}^2$).
It is clear that whether $\theta^0 = (a,b)$ or $\theta^0 = a$, $\theta^0$ is identifiable: the process generating $Y$ from $X$ stands in a $1{:}1$ relationship with the parameters $a$ and $b$. Fixing $(a,b)$, it is not possible to find a second pair in $\mathbb{R}^2$ describing the same data generating process.
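A small simulation can illustrate this identifiability in practice: with data generated from $M$, ordinary least squares recovers the (hypothetical, illustrative) true values of $a$ and $b$ up to sampling noise, and no other pair fits the data generating process. This is a sketch, not part of the original argument; the parameter values and noise scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
a_true, b_true = 1.5, -0.7  # illustrative "true" parameters

# Simulate the model M: Y = a + X*b + eps
X = rng.normal(0.0, 1.0, size=n)
eps = rng.normal(0.0, 0.5, size=n)
Y = a_true + X * b_true + eps

# OLS on the design matrix [1, X] recovers (a, b) uniquely
design = np.column_stack([np.ones(n), X])
(a_hat, b_hat), *_ = np.linalg.lstsq(design, Y, rcond=None)
print(a_hat, b_hat)  # close to (1.5, -0.7)
```

With $n$ large, the estimates converge to the single parameter pair consistent with the data generating process, which is exactly what identifiability of $(a,b)$ means here.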
Example 2: Define for $\theta = (a,b,c)$; $X\sim N(\mu, \sigma^2I_{n}); \varepsilon \sim N(0, \sigma_e^2 I_{n})$ the trickier statistical model $M'$:
\begin{align}
Y = a+X(\frac{b}{c})+\varepsilon
\end{align}
and suppose that $(a,b) \in \mathbb{R}^2$ and $c \in \mathbb{R}\setminus\{0\}$ (so $\Theta = \mathbb{R}^3\setminus\{(l,m,0) \mid (l,m) \in \mathbb{R}^2\}$). While for $\theta^0 = a$ this would still be an identifiable statistical model, identifiability fails as soon as $\theta^0$ includes another parameter (i.e., $b$ or $c$). Why? Because for any pair $(b,c)$, there exist infinitely many other pairs in the set $B := \{(x,y) \mid x/y = b/c,\ (x,y) \in \mathbb{R}\times(\mathbb{R}\setminus\{0\})\}$. The obvious solution in this case would be to introduce a new parameter $d = b/c$ that replaces the fraction and identifies the model. However, one might be interested in $b$ and $c$ as separate parameters for theoretical reasons: they could correspond to parameters of interest in an (economic) theory. E.g., $b$ could be the 'propensity to consume' and $c$ could be 'confidence', and you might want to estimate these two quantities separately from your regression model. Unfortunately, this is not possible.
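The failure of identifiability in $M'$ can also be shown directly in a short sketch (the specific parameter values are illustrative assumptions): two different triples $(a,b,c)$ with the same ratio $b/c$ generate exactly the same $Y$ from the same $X$ and $\varepsilon$, i.e., they are observationally equivalent.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
X = rng.normal(size=n)
eps = rng.normal(size=n)

def model_m_prime(a, b, c, X, eps):
    # The model M': Y = a + X*(b/c) + eps
    return a + X * (b / c) + eps

# Two distinct parameter vectors with the same ratio b/c = 0.5
Y1 = model_m_prime(0.5, 1.0, 2.0, X, eps)
Y2 = model_m_prime(0.5, 3.0, 6.0, X, eps)

print(np.allclose(Y1, Y2))  # True: (b, c) is not identifiable
```

No amount of data can distinguish $(b,c) = (1,2)$ from $(3,6)$ here, which is precisely the observational equivalence that identifiability rules out.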