Let $\hat{x}$ denote the mean of $n$ i.i.d. samples of a random variable $X$. The central limit theorem says that the standardized sample mean $Z$ converges in distribution to the standard normal:
$$Z=\frac{\hat{x}-E(X)}{\text{sd}(\hat{x})} \overset{D}{\rightarrow} N(0,1)$$
Rearranging this at a fixed value of $n$ gives, approximately:
$$\hat{x} \sim N(E(X),\text{Var}(\hat{x})) $$
where $\text{Var}(\hat{x})=\frac{\text{Var}(X)}{n}$ is the variance of the sample mean. You can read this as saying that, for large $n$, the distribution of $\hat{x}$ is close to a normal with mean $E(X)$ and variance $\frac{\text{Var}(X)}{n}$.
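As a quick sanity check, here is a minimal simulation sketch of that approximation. The choice of distribution is an assumption for illustration only: $X \sim \text{Exponential}(1)$, so $E(X)=1$ and $\text{Var}(X)=1$, and the sample mean should have variance close to $1/n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100          # samples per mean (fixed n)
reps = 100_000   # number of independent sample means

# Illustrative assumption: X ~ Exponential(1), so E(X) = 1, Var(X) = 1,
# and therefore Var(x-hat) = Var(X)/n = 0.01.
samples = rng.exponential(scale=1.0, size=(reps, n))
sample_means = samples.mean(axis=1)

print("empirical mean of x-hat:    ", sample_means.mean())  # close to E(X) = 1
print("empirical variance of x-hat:", sample_means.var())   # close to Var(X)/n = 0.01
```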
As $n$ grows, the variance $\frac{\text{Var}(X)}{n}$ tends to $0$, which means that the normal distribution of $\hat{x}$ converges to a point mass at $E(X)$. Convergence in distribution to a constant implies convergence in probability to that constant, so
$$\hat{x} \overset{P}{\rightarrow} E(X)$$
Of course, this is just the weak law of large numbers. (The strong law states that the convergence is almost sure.)
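To illustrate the convergence in probability, the following sketch (same illustrative Exponential(1) assumption, with an arbitrary tolerance $\varepsilon = 0.05$) estimates $P(|\hat{x} - E(X)| > \varepsilon)$ for increasing $n$; the estimated probability should shrink towards $0$.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.05     # arbitrary tolerance
reps = 2_000   # Monte Carlo repetitions per value of n

# Again assuming X ~ Exponential(1), so E(X) = 1.
for n in (10, 100, 1_000, 10_000):
    sample_means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    # Fraction of sample means farther than eps from E(X).
    p_far = np.mean(np.abs(sample_means - 1.0) > eps)
    print(f"n = {n:>6}: P(|x-hat - E(X)| > {eps}) ≈ {p_far:.4f}")
```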