You can get the answer for this kind of problem fairly easily by writing each mixture random variable as a sum of component random variables, each multiplied by an indicator for the outcome of a categorical random variable. To do this, let $S_1,...,S_n \sim \text{IID Categorical}(\boldsymbol{\lambda})$ and write your mixture random variables as:
$$X_i = \sum_{s=1}^m G_{i,s} \cdot \mathbb{I}(S_i = s)
\quad \quad \quad G_{i,s} \sim g_s,$$
where the $G_{i,s}$ are mutually independent and independent of the labels $S_1,...,S_n$.
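For concreteness, here is a minimal NumPy sketch of this representation with normal components, as in your question (the weights, means, and standard deviations are hypothetical values chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([0.2, 0.5, 0.3])    # mixture weights lambda (hypothetical)
mu = np.array([-1.0, 0.0, 2.0])    # component means (hypothetical)
sigma = np.array([0.5, 1.0, 1.5])  # component standard deviations (hypothetical)
n = 10_000                         # number of mixture random variables

# Draw the categorical labels S_1, ..., S_n, then draw each X_i from the
# component selected by its label.
S = rng.choice(len(lam), size=n, p=lam)
X = rng.normal(mu[S], sigma[S])
```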
Now, taking a sum of your mixture random variables gives:
$$\sum_{i=1}^n X_i
= \sum_{i=1}^n \sum_{s=1}^m G_{i,s} \cdot \mathbb{I}(S_i = s)
= \sum_{s=1}^m \Bigg( \sum_{i=1}^n G_{i,s} \cdot \mathbb{I}(S_i = s) \Bigg).$$
Each term in the brackets is a sum of $N_s \equiv \sum_{i=1}^n \mathbb{I}(S_i = s)$ IID random variables with density $g_s$. Noting that $\mathbf{N} \equiv (N_1,...,N_m) \sim \text{Mu}(n, \boldsymbol{\lambda})$ and letting $g_s^n$ denote the $n$-fold convolution of the density $g_s$, we can then write:
$$\sum_{i=1}^n X_i
= \sum_{s=1}^m H_s(N_s)
\quad \quad \quad H_s(n) \sim g_s^{n}.$$
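Continuing the numerical sketch above, you can check the multinomial count identity and use it for an equivalent two-stage sampler: draw $\mathbf{N} \sim \text{Mu}(n, \boldsymbol{\lambda})$, then draw each $H_s$ from the $N_s$-fold convolution, which for normal components is $\text{N}(N_s \mu_s, N_s \sigma_s^2)$:

```python
# The label counts from the direct sampler form the multinomial vector N;
# each count clusters around its expectation n * lambda_s.
N_counts = np.bincount(S, minlength=len(lam))
print(N_counts, n * lam)

# Equivalent two-stage sampler: N ~ Multinomial(n, lambda), then one draw
# per component from the N_s-fold convolution N(N_s * mu_s, N_s * sigma_s^2).
N = rng.multinomial(n, lam)
total = rng.normal(N * mu, np.sqrt(N) * sigma).sum()

# `total` has the same distribution as X.sum() from the direct sampler.
```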
So, we can see that the sum of the mixture random variables is itself a mixture of $m$ random variables, where each random variable $H_s$ is drawn from the $N_s$-fold convolution of $g_s$. In your case the components are normal, so you have $H_s(n) \sim \text{N}(n \mu_s, n \sigma_s^2)$, and conditional on $\mathbf{N}$ the sum is normal (any weighted sum of independent normal random variables is normal). Unconditionally the sum is a multinomial mixture of normals rather than exactly normal, but by the central limit theorem it is approximately normal when $n$ is large, so it suffices to find the moments of the sum. Using the laws of iterated expectation and variance, together with the multinomial moments $\mathbb{E}(N_s) = n \lambda_s$, $\mathbb{V}(N_s) = n \lambda_s (1-\lambda_s)$ and $\mathbb{C}(N_s, N_t) = -n \lambda_s \lambda_t$ for $s \neq t$, you have:
$$\begin{align}
\mathbb{E} \Bigg( \sum_{i=1}^n X_i \Bigg)
&= \mathbb{E} \Bigg( \mathbb{E} \Bigg( \sum_{i=1}^n X_i \Bigg| \mathbf{N} \Bigg) \Bigg) \\[6pt]
&= \mathbb{E} \Bigg( \mathbb{E} \Bigg( \sum_{s=1}^m H_s(N_s) \Bigg| \mathbf{N} \Bigg) \Bigg) \\[6pt]
&= \mathbb{E} \Bigg( \sum_{s=1}^m N_s \mu_s \Bigg) \\[6pt]
&= \sum_{s=1}^m \mathbb{E} ( N_s ) \mu_s \\[6pt]
&= \sum_{s=1}^m n \lambda_s \mu_s \\[6pt]
&= n \sum_{s=1}^m \lambda_s \mu_s, \\[6pt]
\mathbb{V} \Bigg( \sum_{i=1}^n X_i \Bigg)
&= \mathbb{V} \Bigg( \mathbb{E} \Bigg( \sum_{i=1}^n X_i \Bigg| \mathbf{N} \Bigg) \Bigg) + \mathbb{E} \Bigg( \mathbb{V} \Bigg( \sum_{i=1}^n X_i \Bigg| \mathbf{N} \Bigg) \Bigg) \\[6pt]
&= \mathbb{V} \Bigg( \mathbb{E} \Bigg( \sum_{s=1}^m H_s(N_s) \Bigg| \mathbf{N} \Bigg) \Bigg) + \mathbb{E} \Bigg( \mathbb{V} \Bigg( \sum_{s=1}^m H_s(N_s) \Bigg| \mathbf{N} \Bigg) \Bigg) \\[6pt]
&= \mathbb{V} \Bigg( \sum_{s=1}^m N_s \mu_s \Bigg) + \mathbb{E} \Bigg( \sum_{s=1}^m N_s \sigma_s^2 \Bigg) \\[6pt]
&= \sum_{s=1}^m \sum_{t=1}^m \mathbb{C}(N_s, N_t) \mu_s \mu_t + \sum_{s=1}^m \mathbb{E}(N_s) \sigma_s^2 \\[6pt]
&= \sum_{s=1}^m n \lambda_s (1-\lambda_s) \mu_s^2 - \sum_{s=1}^m \sum_{t \neq s} n \lambda_s \lambda_t \mu_s \mu_t + \sum_{s=1}^m n \lambda_s \sigma_s^2 \\[6pt]
&= n \Bigg[ \sum_{s=1}^m \lambda_s (\mu_s^2 + \sigma_s^2) - \Bigg( \sum_{s=1}^m \lambda_s \mu_s \Bigg)^2 \Bigg]. \\[6pt]
\end{align}$$
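Here is a quick Monte Carlo check of these moment formulas, continuing the sketch above (a sanity check, not a proof):

```python
# Theoretical moments of the sum.
mu_bar = lam @ mu                                   # sum_s lambda_s mu_s
mean_theory = n * mu_bar
var_theory = n * (lam @ (mu**2 + sigma**2) - mu_bar**2)

# Monte Carlo estimate over repeated sums of n mixture draws.
reps = 2_000
sums = np.empty(reps)
for r in range(reps):
    labels = rng.choice(len(lam), size=n, p=lam)
    sums[r] = rng.normal(mu[labels], sigma[labels]).sum()

print(mean_theory, sums.mean())   # should agree up to simulation error
print(var_theory, sums.var())
```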
Combining these exact moments with the large-$n$ normal approximation gives the final result:
$$\sum_{i=1}^n X_i \overset{\text{approx}}{\sim} \text{N} \Bigg( n \sum_{s=1}^m \lambda_s \mu_s, \ n \Bigg[ \sum_{s=1}^m \lambda_s (\mu_s^2 + \sigma_s^2) - \Bigg( \sum_{s=1}^m \lambda_s \mu_s \Bigg)^2 \Bigg] \Bigg).$$
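As a final sketch, you can standardize the simulated sums from the check above and compare them to a standard normal (the approximation improves as $n$ grows):

```python
from scipy import stats

# Standardize the simulated sums with the theoretical moments and compare
# them to a standard normal; a small KS statistic and a large p-value are
# consistent with the normal approximation being accurate at this n.
z = (sums - mean_theory) / np.sqrt(var_theory)
print(stats.kstest(z, "norm"))
```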