Solution
The calculation amounts to removing the linear term in the cumulant generating function (the log characteristic function) of the distribution, replacing its argument $t$ by $t/(\sigma\sqrt{n})$ to standardize the mean, and then multiplying by $n$ to account for the $n$ iid variables comprising the sum.
Details
Let $\phi$ be the characteristic function of that common distribution with mean $\mu,$ standard deviation $\sigma,$ skewness $\gamma,$ and finite kurtosis $\kappa.$ Then because
$$\log \phi(t) = i \mu t - \frac{1}{2} (t\sigma)^2 - \frac{i\gamma}{6} (t\sigma)^3 + \frac{\kappa - 3}{24}(t\sigma)^4 + o(t^4),$$
the log characteristic function of the standardized sample mean $Z = \sqrt{n}(\bar X - \mu)/\sigma$ is
$$\begin{aligned}
\log \phi_Z(t) &= n \left(\log \phi\left(\frac{t}{\sigma\sqrt{n}}\right) - i\mu\,\frac{t}{\sigma\sqrt{n}}\right)\\
&= -\frac{n}{2}\left(\frac{t}{\sqrt{n}}\right)^2 - n\,\frac{i\gamma}{6} \left(\frac{t}{\sqrt{n}}\right)^3 + n\,\frac{\kappa - 3}{24}\left(\frac{t}{\sqrt{n}}\right)^4 + o(t^4)\\
&= -\frac{1}{2}t^2 - \frac{i\gamma}{6\sqrt{n}}\,t^3 + \frac{\kappa - 3}{24\,n}\,t^4 + o(t^4).\\
\end{aligned}$$
Comparing like powers of $t$ shows
$$\gamma_Z=\gamma/\sqrt{n},\ \kappa_Z - 3 = (\kappa-3)/n.$$
The first of these displayed equations (the expansion of $\log\phi$) is standard: it is essentially the definition of skewness and kurtosis in terms of cumulants. The second is a direct (and simple) consequence of the independence of the $X_i$ in the sample together with the rules for how $\phi$ transforms under recentering and rescaling. The rest of this post provides all details for any readers who might be unacquainted with this use of characteristic functions.
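As a sanity check (not part of the derivation), here is a small Monte Carlo sketch in Python. The Exponential(1) distribution is an arbitrary illustrative choice: it has $\gamma = 2$ and $\kappa - 3 = 6,$ so the skewness and excess kurtosis of the sample mean should come out near $2/\sqrt{n}$ and $6/n.$

```python
import numpy as np
from scipy.stats import skew, kurtosis  # kurtosis() returns excess kurtosis by default

rng = np.random.default_rng(1)
n, reps = 50, 200_000

# Each row is one sample of n iid Exponential(1) draws; take each row's mean.
means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

# Skewness and excess kurtosis are invariant under standardization,
# so the raw sample means can stand in for Z directly.
print(skew(means), 2 / np.sqrt(n))   # both approximately 0.283
print(kurtosis(means), 6 / n)        # both approximately 0.12
```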
Background
Characteristic functions
The characteristic function of a random variable $X$ is defined to be
$$\phi_X: \mathbb{R}\to \mathbb{C},\ \phi_X(t) = E\left[e^{itX}\right].$$
It always exists because $E\left[\big|e^{itX}\big|\right] = E[1]=1$ demonstrates that the defining expectation converges absolutely.
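To make the definition concrete, the following sketch (assuming NumPy is available; the standard normal is an arbitrary test case) approximates $\phi_X(t)$ by averaging $e^{itX}$ over simulated draws and compares it with the known closed form $e^{-t^2/2}.$

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)   # draws from a standard normal
t = 1.3

# Empirical characteristic function: the average of exp(itX) over the draws.
print(np.exp(1j * t * x).mean())
# Exact value for the standard normal: phi(t) = exp(-t^2/2).
print(np.exp(-t**2 / 2))
```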
Characteristic functions are useful for understanding (positive integral) moments $\mu^\prime_X(k) = E[X^k]$ because, when $E[X^{k}]$ exists and is finite, an application of Taylor's Theorem to the exponential shows
$$\begin{aligned}
\phi_X(t) &= E\left[1 + itX + (itX)^2/2! + \cdots + (itX)^k/k! + o(t^{k})\right]\\
&= 1 + i\mu^\prime_X(1)\,t - \frac{\mu^\prime_X(2)}{2}t^2 - \frac{i\mu^\prime_X(3)}{6}t^3 + \cdots + \frac{i^k \mu^\prime_X(k)}{k!}t^k + o(t^{k}).
\end{aligned}$$
Thus, $\phi_X$ has a Maclaurin expansion $\phi_X(t) = a_0 + a_1\, t + \frac{a_2}{2!}\, t^2 + \cdots + \frac{a_k}{k!}\, t^k + o(t^k)$ (a partial power series at $0$) and the moments of $X$ can be read directly from the coefficients $a_j:$ $\mu^\prime_X(j) = (-i)^j a_j.$
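As an illustration, here is a sketch using SymPy; the Exponential(1) distribution, whose raw moments are $k!,$ is an arbitrary test case. The moments are recovered from the Maclaurin coefficients exactly as described.

```python
import sympy as sp

t = sp.symbols('t', real=True)
phi = 1 / (1 - sp.I * t)   # characteristic function of Exponential(1)

# Maclaurin expansion: phi(t) = sum_j a_j/j! t^j + o(t^4)
poly = sp.series(phi, t, 0, 5).removeO()
for j in range(5):
    a_j = poly.coeff(t, j) * sp.factorial(j)
    print(j, sp.simplify((-sp.I)**j * a_j))   # mu'_X(j) = (-i)^j a_j = j!
```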
Samples
A sample is defined to be a collection of $n$ independent random variables $X_i,$ $i=1,2,\ldots,n$ having a common distribution, whence all the $\phi_{X_i}$ are the same function $\phi.$ The sample mean is (also by definition)
$$\bar X = \left(X_1 + X_2 + \cdots + X_n\right)/n = X_1/n + X_2/n + \cdots + X_n/n.$$
Therefore, because the exponential of a sum of numbers is the product of their exponentials and expectations of products of independent random variables are the products of their expectations,
$$\phi_{\bar X}(t) = E\left[e^{it\bar X}\right] = E\left[e^{itX_1/n}\right]\, E\left[e^{itX_2/n}\right]\cdots E\left[e^{itX_n/n}\right] = \phi\left(\frac{t}{n}\right)^n.$$
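A quick numerical check of this power relation (a sketch, again using Exponential(1) draws as an arbitrary test case):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, t = 5, 500_000, 0.8

# Left side: empirical cf of the mean of n iid Exponential(1) draws.
xbar = rng.exponential(size=(reps, n)).mean(axis=1)
lhs = np.exp(1j * t * xbar).mean()

# Right side: phi(t/n)^n with phi(t) = 1/(1 - it) for Exponential(1).
rhs = (1 / (1 - 1j * t / n)) ** n
print(lhs, rhs)   # agree up to Monte Carlo error
```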
Change of location
The central moments of $X,$ where $\mu = \mu_X^\prime(1)$ exists and is finite, are defined as
$$\mu_X(k) = E\left[(X-\mu)^k\right] = \mu^\prime_{X-\mu}(k).$$
Consequently they may be found from the characteristic function of $X-\mu,$ which can be related to that of $X$ via
$$\phi_{X-\mu}(t) = E\left[e^{it(X-\mu)}\right] = E\left[e^{-it\mu}\,e^{itX}\right] = e^{-it\mu}\phi_X(t).$$
Change of scale
The relationship between the characteristic functions of $X$ and $X/\sigma,$ for any positive number $\sigma,$ is obtained directly from the definitions as
$$\phi_{X/\sigma}(t) = E\left[e^{itX/\sigma}\right] = \phi_X\left(\frac{t}{\sigma}\right).$$
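Both transformation rules are easy to verify symbolically. The following sketch (SymPy, with a Normal$(\mu,\sigma^2)$ characteristic function as the test case) applies each rule and simplifies the result:

```python
import sympy as sp

t, mu = sp.symbols('t mu', real=True)
sigma = sp.symbols('sigma', positive=True)

# cf of a Normal(mu, sigma^2) variable, as a concrete test case.
phi = sp.exp(sp.I * mu * t - sigma**2 * t**2 / 2)

# Recentering: e^{-it mu} phi(t) is the cf of Normal(0, sigma^2).
print(sp.simplify(sp.exp(-sp.I * t * mu) * phi))
# Rescaling: phi(t/sigma) is the cf of Normal(mu/sigma, 1).
print(sp.simplify(phi.subs(t, t / sigma)))
```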
Simplification with logarithms
The power relation between $\phi_{\bar X}$ and $\phi$ suggests working with the logarithms of these functions, because taking logs converts the $n$th power into multiplication by $n,$
$$\psi_{\bar X}(t) = \log \phi_{\bar X}(t) = \log\left[\phi\left(\frac{t}{n}\right)^n\right] = n \log \phi\left(\frac{t}{n}\right) = n\,\psi\!\left(\frac{t}{n}\right).$$
Since the Taylor series $\log(1+s) = s -s^2/2 + s^3/3 - \cdots$ converges absolutely for $|s|\lt 1,$ we easily obtain the series
$$\psi_{X-\mu}(t) = \log\left(1 + \left[ - \frac{\mu_X(2)}{2}t^2 - \frac{i\mu_X(3)}{6}t^3 + \frac{\mu_X(4)}{24}t^4 + o(t^4)\right]\right)$$
(the linear term vanishes because $E[X-\mu]=0$)
by setting $s = -\frac{\mu_X(2)}{2}t^2 - \frac{i\mu_X(3)}{6}t^3 + \frac{\mu_X(4)}{24}t^4 + o(t^4)$ and computing
$$\begin{aligned}s^2/2&= \frac{1}{2}\left(\frac{\mu_X(2)}{2}\right)^2 t^4 + o(t^4);\\
s^k/k &= o(t^4)\end{aligned}$$
for all $k \gt 2.$
This gives
$$\psi_{X-\mu}(t) = s - s^2/2 + o(t^4) = - \frac{\mu_X(2)}{2}t^2 - i\frac{\mu_X(3)}{6}t^3 + \frac{\mu_X(4) - 3\mu_X(2)^2}{24}t^4 + o(t^4).$$
Upon rescaling $X-\mu$ by $1/\sigma = \mu_X(2)^{-1/2}$ (which, by the change-of-scale rule, replaces $t$ by $t/\sigma$), this simplifies to
$$\psi_{(X-\mu)/\sigma}(t) = \log \phi_{(X-\mu)/\sigma}(t) =-\frac{1}{2}t^2 - \frac{i\gamma_X}{6}t^3 + \frac{\kappa_X - 3}{24}t^4 + o(t^4)$$
where $\gamma_X$ is, by definition, the skewness of $X$ and $\kappa_X$ is its kurtosis.
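Putting the pieces together, the whole expansion can be checked symbolically for a concrete case. The sketch below (SymPy; Exponential(1) again, for which $\mu = \sigma = 1,$ $\gamma = 2,$ and $\kappa - 3 = 6$) expands $\psi_{(X-\mu)/\sigma}$ and reads the skewness and excess kurtosis off the $t^3$ and $t^4$ coefficients:

```python
import sympy as sp

t = sp.symbols('t', real=True)

# For Exponential(1), mu = sigma = 1, so (X - mu)/sigma has cf e^{-it}/(1 - it),
# whose logarithm is -it - log(1 - it).
psi = -sp.I * t - sp.log(1 - sp.I * t)
poly = sp.series(psi, t, 0, 5).removeO()

gamma = sp.simplify(poly.coeff(t, 3) / (-sp.I / 6))   # t^3 coefficient is -i*gamma/6
excess = sp.simplify(poly.coeff(t, 4) * 24)           # t^4 coefficient is (kappa-3)/24
print(gamma, excess)                                  # 2 and 6, as expected
```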