
Let $Y_n$ be a sequence of random variables such that $$ \sqrt{n}(Y_n-\mu) \stackrel{d}{\to} \mathcal{N}(0, \sigma^2), $$ so we can say $Y_n$ is asymptotically normally distributed, $$ Y_n \stackrel{a}{\sim} \mathcal{N}\bigg(\mu, \frac{\sigma^2}{n}\bigg). $$ Now suppose we want to approximate $E[f(Y_n)]$. By a second-order Taylor expansion we have $$ E[f(Y_n)] \approx f(E[Y_n]) + \frac{f''(E[Y_n])}{2}\text{Var}(Y_n). $$

It seems that this means that, as $n\to \infty$, we can make use of the asymptotic distribution of $Y_n$, i.e., we are allowed to say $$ E[f(Y_n)] \approx f(\mu) + \frac{f''(\mu)}{2}\frac{\sigma^2}{n}, \quad \quad \text{as} \ n \to \infty. $$

Does this expression hold? Do we need some assumptions before we can say it? One reason I am not certain is that I read here that convergence in distribution only means the CDFs of the random variables approach the limit CDF; the actual values of the random variables need not get closer to the values of the limiting random variable. We need convergence in probability for the values to get closer.
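As a concrete sanity check of the approximation in a well-behaved case, here is a minimal Monte Carlo sketch (my own choice of example, not part of the general claim): $Y_n$ is the mean of $n$ iid $\text{Exp}(1)$ draws, so $\mu=\sigma^2=1$, and $f(x)=e^x$, so $f''(\mu)=e$.

```python
# Monte Carlo sanity check of the second-order approximation (a sketch, not a
# proof). Y_n is the mean of n iid Exp(1) draws, so mu = sigma^2 = 1, and we
# take f(x) = exp(x), giving f(mu) = f''(mu) = e.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 200_000

# reps independent realisations of Y_n
y_n = rng.exponential(1.0, size=(reps, n)).mean(axis=1)

mc_estimate = np.exp(y_n).mean()                    # E[f(Y_n)] by simulation
taylor = np.exp(1.0) + (np.exp(1.0) / 2) * (1 / n)  # f(mu) + f''(mu)/2 * sigma^2/n

print(mc_estimate, taylor)
```

In this case the exact value is available, $E[e^{Y_n}] = (n/(n-1))^n \approx 2.732$ for $n=100$, and both numbers land close to it; but of course one friendly example does not settle whether the approximation holds in general.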

kjetil b halvorsen
Bertus101
    The symbol $\approx$ being vague, what do you mean by it [in a mathematical sense]? – Xi'an Nov 06 '20 at 14:26
  • 2
    Even when a sequence of random variables converges in distribution, the corresponding sequence of expectations needn't converge at all. A standard example is the sequence $X_n=nY_n,$ $n=1,2,3,\ldots,$ where $Y_n$ has a Bernoulli$(p(n))$ distribution and $p(n)$ is chosen to converge to $0$ (so that $X_n$ converges to $0$ in distribution) in such a way that $E[X_n]=np(n)$ does not converge; *e.g.,* $p(n)=1/\sqrt{n}.$ – whuber Nov 06 '20 at 14:35
  • @Xi'an I am using the approximation stated in the 'First moment' section of this page https://en.wikipedia.org/wiki/Taylor_expansions_for_the_moments_of_functions_of_random_variables – Bertus101 Nov 06 '20 at 14:54
  • 1
    @whuber In general they need not converge, but my situation is more specific. I have a sequence of random variables that are asymptotically normal. Maybe in my case the convergence in distribution can pass over to the convergence in expectation? – Bertus101 Nov 06 '20 at 15:09
  • 2
    It doesn't work that way: for instance, we could start with a sequence of random variables like yours whose expectations do converge and add my sequence to them. The new sequence is still asymptotically Normal but its expectation diverges. – whuber Nov 06 '20 at 15:47
  • Using Theorem 5.12 of the book Introduction to Statistical Limit Theory by Polansky, $\lim_{n\to \infty} E(|f(Y_n)|) = E(|f(Y)|)$ if and only if the sequence of random variables $\{f(Y_n)\}_{n=1}^\infty$ converges in probability to $f(Y)$ and $\{|f(Y_n)|\}_{n=1}^\infty$ is uniformly integrable (Definition 5.2). – Bertus101 Nov 10 '20 at 12:39
    It seems I can use this theorem to say that the final expression in my post does hold as long as I require that $\{f(Y_n)\}_{n=1}^\infty$ is uniformly integrable. And possibly I can decompose the condition of $\{f(Y_n)\}_{n=1}^\infty$ being uniformly integrable into a separate condition for the function $f$, and the sequence $\{Y_n\}_{n=1}^\infty$. – Bertus101 Nov 10 '20 at 12:40
  • 1
    That would work provided you use the absolute value of $f$, as required by that theorem. The Taylor series approach does not necessarily work: you need to adduce additional conditions to justify omitting the remainder term in the Taylor expansion and you need to assume that sufficiently high moments of $Y_n$ are bounded. – whuber Nov 17 '20 at 17:10
  • Convergence in distribution implies convergence in quadratic mean (and hence also in probability), under two regularity conditions, see here, https://stats.stackexchange.com/a/379971/28746 – Alecos Papadopoulos Nov 25 '20 at 18:06
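whuber's counterexample from the comments can be checked numerically; a minimal sketch (taking $p(n)=1/\sqrt{n}$ as in the comment):

```python
# whuber's counterexample: X_n = n * B_n with B_n ~ Bernoulli(p(n)),
# p(n) = 1/sqrt(n). Then P(X_n != 0) = p(n) -> 0, so X_n -> 0 in
# distribution, yet E[X_n] = n * p(n) = sqrt(n) -> infinity.
import math

for n in [100, 10_000, 1_000_000]:
    p = 1 / math.sqrt(n)
    print(f"n={n:>9}: P(X_n != 0) = {p:.4f},  E[X_n] = {n * p:.1f}")
```

The point survives the computation: the distribution of $X_n$ piles up at $0$ while the expectation runs off to infinity, so convergence in distribution alone cannot license the limit of expectations.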

0 Answers