I know that the log-likelihood ratio statistic $X^2 = -2(LL_1 - LL_2)$, where $LL_1$ and $LL_2$ are the maximized log likelihoods of two models with the first nested in the second, is asymptotically distributed as $n \rightarrow \infty$ as a chi-squared variate with degrees of freedom equal to the difference in the number of free parameters between the two models ($n$ being the sample size of the data).
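For concreteness, here is a minimal sketch of what I mean by $X^2$ and its degrees of freedom, using a toy nested pair (exponential data with the rate fixed at 1 versus a free rate) purely for illustration, not the model from my simulations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=50)  # data generated under the nested model

# Nested model: rate fixed at 1
ll_1 = stats.expon.logpdf(x, scale=1.0).sum()
# Full model: rate free; the MLE of the scale (1/rate) is the sample mean
ll_2 = stats.expon.logpdf(x, scale=x.mean()).sum()

x2 = -2.0 * (ll_1 - ll_2)
p_value = stats.chi2.sf(x2, df=1)  # df = difference in number of free parameters = 1
print(x2, p_value)
```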
My question: does the difference between the sampling distribution of $X^2$ and the theoretical chi-squared distribution always decrease as $n$ increases? That is, if we compare two different sample sizes, is the chi-squared approximation always better at the larger sample size?
I ask because, in simulations of a rather complicated model, the sampling distribution of $X^2$ does come closer to the theoretical distribution as $n$ increases up to a point, but as $n$ increases further the fit between the two gets worse. I did not expect this behaviour and cannot find an explanation.
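To illustrate the kind of check I have in mind (again with a simple toy model rather than my actual one), here is a sketch that measures the Kolmogorov–Smirnov distance between the simulated distribution of $X^2$ and the theoretical $\chi^2_1$ at several sample sizes; in a well-behaved case like this one would expect the distance to shrink as $n$ grows, which is what I did not observe with my model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def lr_statistic(x):
    # X^2 = -2(LL_1 - LL_2) for H0: exponential rate = 1 (nested) vs. free rate (full)
    ll_1 = stats.expon.logpdf(x, scale=1.0).sum()
    ll_2 = stats.expon.logpdf(x, scale=x.mean()).sum()
    return -2.0 * (ll_1 - ll_2)

def chi2_discrepancy(n, n_sims=5000):
    # Kolmogorov-Smirnov distance between simulated X^2 values and the chi^2_1 cdf
    x2 = [lr_statistic(rng.exponential(scale=1.0, size=n)) for _ in range(n_sims)]
    return stats.kstest(x2, stats.chi2(df=1).cdf).statistic

for n in (10, 50, 250, 1250):
    print(n, round(chi2_discrepancy(n), 4))
```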