(I expand this answer to cover more completely a setup that occurs frequently, and also to focus on the estimation of the population variance.)
The setup is as follows: from $K$ independent samples, all drawn i.i.d. from a normal population $N(\mu, \sigma^2)$ and possibly of different sizes, with $n_1+...+n_K = N$, we are not given the actual sample data but only
a) The sample sizes $n_i,\; i=1,...,K$
b) The sample mean of each sample, $m_i = \frac 1{n_i}\sum_{j=1}^{n_i}x_j$
c) The sample variance of each sample, $v_i = \frac 1{n_i}\sum_{j=1}^{n_i}(x_j-m_i)^2 = \frac 1{n_i}\sum_{j=1}^{n_i}x_j^2-m_i^2$
Note that each $v_i$ is the maximum likelihood version of the sample variance, i.e. we divide by $n_i$ and not by $n_i-1$.
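To fix ideas, here is a minimal sketch in Python of how these summary statistics would be computed from raw data. The population values, sample sizes, seed and variable names are arbitrary illustrative choices, not part of the question; the later snippets reuse these names.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw data: K samples from the same N(mu, sigma^2) population
mu_true, sigma_true = 2.0, 1.5
sizes = [12, 30, 7, 21]                                    # n_1, ..., n_K
samples = [rng.normal(mu_true, sigma_true, n_i) for n_i in sizes]

# The only information we are assumed to keep:
n = np.array([len(s) for s in samples])                    # (a) sample sizes n_i
m = np.array([s.mean() for s in samples])                  # (b) sample means m_i
v = np.array([s.var() for s in samples])                   # (c) ML sample variances (divide by n_i)
K, N = len(n), n.sum()
```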
We want to derive maximum likelihood estimators for the unknown population parameters $\mu$ and $\sigma^2$, using only the information we are given.
A) MLE of population mean
Under the maintained hypothesis the sample means are normally distributed, and each sample mean $m_i \sim N(\mu, \frac {\sigma^2}{n_i})$. The samples are independent so the joint density/likelihood function of the sample means is
$$L(\mu, \sigma^2\mid \{m_1,...,m_K\})= \prod_{i=1}^K\frac {\sqrt {n_i}} {\sqrt{2\pi}\sigma}\exp\left\{-\frac 12 \frac {(m_i-\mu)^2}{\sigma^2/n_i}\right\}$$
and the log-likelihood is
$$\ln L(\mu, \sigma^2\mid \{m_1,...,m_K\})= c -K\ln\sigma -\frac 1{2\sigma^2} \sum_{i=1}^Kn_i(m_i-\mu)^2$$
Setting the first derivative of $\ln L$ with respect to $\mu$ equal to zero we have
$$\frac {\partial}{\partial \mu} \ln L = 0 \Rightarrow \frac 1{\sigma^2} \sum_{i=1}^Kn_i(m_i-\mu) =0 \Rightarrow \sum_{i=1}^Kn_im_i - \mu\sum_{i=1}^Kn_i =0$$
$$\Rightarrow \hat \mu_{ML} = \sum_{i=1}^K\frac {n_i}{N}m_i = \sum_{i=1}^K\frac {n_i}{N}\left(\frac 1{n_i}\sum_{j=1}^{n_i}x_j\right) = \frac 1N\sum_{j=1}^{N}x_j = \bar X_N \qquad [1]$$
So the MLE of the population mean weights each sample mean by the relative size of the sample from which it was derived, becoming a convex combination of the $K$ sample means, and it ends up numerically identical to the full-sample mean we would obtain if we had the original data available and had pooled them into one sample.
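Continuing the sketch above (reusing `n`, `m`, `N` and `samples`), a quick numerical check that $[1]$ coincides with the pooled mean:

```python
mu_hat = (n / N) @ m                          # eq. [1]: convex combination of the sample means
pooled_mean = np.concatenate(samples).mean()  # what we would compute from the raw, pooled data
assert np.isclose(mu_hat, pooled_mean)        # numerically identical
```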
Note: although we have $K$ estimates of the population variance available, they do not enter this likelihood, because the maintained assumption is that all samples come from the same population. If instead of $\sigma^2/n_i$ we had included $v_i/n_i$ in the likelihood, we would have violated this assumption. A comment that refers to known variances (and provides the correct formula for that case) essentially covers the case where the various samples do not come from the same population (and, moreover, the variances are known rather than estimated).
B) MLE of population variance
We could derive an MLE of the population variance using the above likelihood as
$$\hat \sigma ^2_{ML} = \frac 1K\sum_{i=1}^Kn_i(m_i-\hat \mu_{ML})^2 \qquad [2]$$
This is a biased estimator (because of the estimation error associated with $\hat \mu_{ML}$), and besides, shouldn't we take into account the estimated variances derived from each sample, the $v_i$'s?
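Before turning to the $v_i$'s, here is a small Monte Carlo sketch of that bias, reusing the names from the setup snippet (the replication count is arbitrary); it draws the sample means directly from their distribution $N(\mu, \sigma^2/n_i)$ and averages estimator $[2]$:

```python
reps = 40_000
est2 = np.empty(reps)
for r in range(reps):
    m_r = rng.normal(mu_true, sigma_true / np.sqrt(n))  # m_i ~ N(mu, sigma^2 / n_i)
    mu_hat_r = (n / N) @ m_r                            # eq. [1]
    est2[r] = (n @ (m_r - mu_hat_r) ** 2) / K           # eq. [2]
print(est2.mean(), sigma_true ** 2)                     # the average falls visibly short of sigma^2
```

Under these assumptions the simulated average settles near $\frac{K-1}{K}\sigma^2$, the familiar downward-bias factor caused by estimating the mean.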
We know that
$$\frac {n_iv_i}{\sigma^2} \equiv z_i \sim \chi^2(n_i-1)$$
Then
$$v_i = \frac {\sigma^2}{n_i} z_i \sim \operatorname{Gamma}(k_i,\theta_i),\;\; k_i = \frac {n_i-1}{2},\;\; \theta_i = \frac {2\sigma^2}{n_i}$$
with the Gamma density given by
$$f_{v_i}(v_i) = \frac 1{\Gamma(k_i)\theta_i^{k_i}}v_i^{k_i-1}\exp\left\{-\frac {v_i}{\theta_i}\right\}$$
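As a sanity check of this distributional claim, a short simulation sketch (an arbitrary single sample size, reusing `rng` and `sigma_true` from above) compares simulated ML sample variances with the stated Gamma law:

```python
from scipy import stats

ni = 10                                                   # an arbitrary sample size
sigma2 = sigma_true ** 2
draws = rng.normal(0.0, sigma_true, size=(50_000, ni))
v_sim = draws.var(axis=1)                                 # ML sample variances (divide by ni)

k_i, theta_i = (ni - 1) / 2, 2 * sigma2 / ni
print(v_sim.mean(), k_i * theta_i)                        # both ~ (ni - 1) / ni * sigma^2
print(stats.kstest(v_sim, stats.gamma(a=k_i, scale=theta_i).cdf))  # should not reject
```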
The $v_i$'s are independent random variables, so they form the following log-likelihood:
$$\ln L_v = c- \sum_{i=1}^Kk_i\ln \theta_i+\sum_{i=1}^K(k_i-1)\ln v_i-\sum_{i=1}^K\frac {v_i}{\theta_i}$$
Note that the shape parameters $k_i$ are known, and all the $\theta_i$'s are functions of the same unknown parameter, $\sigma^2$. Setting the first derivative of the log-likelihood w.r.t. $\sigma^2$ equal to zero we get
$$\frac {\partial}{\partial \sigma^2} \ln L_v = 0 \Rightarrow -\sum_{i=1}^K\frac {k_i}{\theta_i}\frac {2}{n_i} + \sum_{i=1}^K\frac {v_i}{\theta_i^2}\frac {2}{n_i} =0$$
$$\Rightarrow \frac 1{\sigma^4}\sum_{i=1}^K\frac {n_i^2v_i}{4}\frac {2}{n_i} = \frac 1{\sigma^2}\sum_{i=1}^Kk_i\frac {n_i}{2}\frac {2}{n_i}$$
Simplifying, and using also $k_i=(n_i-1)/2$, we obtain
$$\frac 1{2\sigma^2}\sum_{i=1}^Kn_iv_i = \frac 1{2}\sum_{i=1}^K(n_i-1) $$
$$\Rightarrow \hat \sigma^2_{ML}(v) = \sum_{i=1}^{K}\frac {n_i}{N-K} v_i \qquad [3]$$
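In code, continuing the setup sketch (names `n`, `v`, `N`, `K` as defined there), estimator $[3]$ is a one-liner:

```python
sigma2_hat = (n @ v) / (N - K)       # eq. [3]: sum of n_i * v_i, divided by N - K
print(sigma2_hat, sigma_true ** 2)   # a single realization, close to sigma^2 only on average
```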
Note that here, unlike the case of the MLE for the population mean, the sample variances are not combined into a convex combination: the weights $n_i/(N-K)$ sum to $N/(N-K) > 1$. This has the interesting consequence that, by combining the downward-biased estimators $v_i$ in this way, the estimator $\hat \sigma^2_{ML}(v)$ becomes an unbiased estimator of $\sigma^2$.
By using the scaling and summation properties of the Gamma distribution we have that
$$v_i \sim \operatorname{Gamma}\left(\frac {n_i-1}{2},\; \frac {2\sigma^2}{n_i}\right) \Rightarrow \frac {n_i}{N-K} v_i \sim \operatorname{Gamma}\left(\frac {n_i-1}{2},\; \frac {2\sigma^2}{N-K}\right)$$
$$\Rightarrow \sum_{i=1}^{K}\frac {n_i}{N-K} v_i \sim \operatorname{Gamma}\left(\frac {N-K}{2},\; \frac {2\sigma^2}{N-K}\right) $$
and so
$$E\left(\hat \sigma^2_{ML}(v)\right) = \frac {N-K}{2}\frac {2\sigma^2}{N-K} =\sigma^2$$
and
$$\operatorname{Var}\left(\hat \sigma^2_{ML}(v)\right) = \frac {N-K}{2}\left(\frac {2\sigma^2}{N-K}\right)^2 = \frac {2\sigma^4}{N-K}$$
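Both moments can be checked by simulation, continuing the sketch (reusing `n`, `N`, `K`, `rng`, `mu_true`, `sigma_true`; the replication count is arbitrary):

```python
reps = 40_000
est3 = np.empty(reps)
for r in range(reps):
    v_r = np.array([rng.normal(mu_true, sigma_true, n_i).var() for n_i in n])  # ML variances v_i
    est3[r] = (n @ v_r) / (N - K)                                              # eq. [3]
print(est3.mean(), sigma_true ** 2)                   # ~ sigma^2  (unbiased)
print(est3.var(), 2 * sigma_true ** 4 / (N - K))      # ~ 2 sigma^4 / (N - K)
```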
Note that this unbiasedness property of $\hat \sigma^2_{ML}(v)$ would not obtain if we had used the unbiased formula (dividing the sum of squared differences in each sample by $n_i-1$ instead of $n_i$) to calculate each sample variance: we need to... "be loyal to the ML spirit" from the beginning to be rewarded in the end!