I am currently working on a convergence proof for a new method for non-parametric importance sampling, and I need some help...
My method uses an MCMC algorithm to generate a set of $M$ dependent samples $X_1, \dots, X_M$ from a distribution $g$, and then reconstructs an approximation $\hat{g}$ of this distribution using a KDE algorithm. What I want to show is that $\hat{g} \rightarrow g$ as $M \rightarrow \infty$.
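To make the setup concrete, here is a toy 1-D sketch of the pipeline in Python. Everything here is a placeholder of my own (the random-walk `metropolis_hastings` sampler, the standard-normal target standing in for $g$, and scipy's `gaussian_kde` standing in for the KDE step); it just shows the integrated squared error shrinking as $M$ grows:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)

def metropolis_hastings(log_g, M, x0=0.0, step=1.0):
    """Random-walk Metropolis-Hastings targeting g (known up to a constant)."""
    x, lp = x0, log_g(x0)
    out = np.empty(M)
    for i in range(M):
        prop = x + step * rng.normal()          # symmetric Gaussian proposal
        lp_prop = log_g(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # MH accept/reject
            x, lp = prop, lp_prop
        out[i] = x                               # note: consecutive draws are correlated
    return out

# Placeholder target: standard normal (substitute your own g here).
log_g = norm.logpdf

grid = np.linspace(-4, 4, 400)
for M in (10**3, 10**4, 10**5):
    X = metropolis_hastings(log_g, M)
    g_hat = gaussian_kde(X)                      # KDE built from dependent samples
    ise = np.trapz((g_hat(grid) - norm.pdf(grid))**2, grid)
    print(f"M={M:>6d}  ISE={ise:.2e}")           # should decrease with M
```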
Fortunately, I was able to find a proof similar to the one I need in a paper on non-parametric importance sampling by Ping Zhang (see the appendix of the paper here for more details). However, the author appears to assume that the samples $X_1, \dots, X_M$ used in the KDE algorithm are i.i.d., which does not hold in my case, since I generated them with an MCMC algorithm.
I am wondering whether there is a way to tweak this existing convergence proof to account for the fact that the samples fed into the KDE algorithm are not i.i.d. Intuitively, using correlated samples in a KDE should reduce its effectiveness, so maybe there is some kind of bounding factor? Alternatively, perhaps I could exploit the fact that the samples come from an MCMC algorithm whose stationary distribution is $g$? If it helps, you can assume the MCMC algorithm is just generic Metropolis-Hastings.
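Here is how I am currently framing it, assuming the chain is stationary (so each $X_i$ has marginal $g$): the bias term in the usual KDE analysis is unchanged, and only the variance picks up covariance terms. Writing $Y_i = h^{-1} K\!\big((x - X_i)/h\big)$ for kernel $K$ and bandwidth $h$, stationarity gives

$$\operatorname{Var}\big(\hat{g}(x)\big) = \frac{1}{M}\operatorname{Var}(Y_1) + \frac{2}{M^2}\sum_{j=1}^{M-1}(M-j)\,\operatorname{Cov}(Y_1, Y_{1+j}),$$

so my guess is that the i.i.d. proof goes through as long as the covariance sum can be controlled, e.g. under an $\alpha$- or $\beta$-mixing condition (and, if I recall correctly, a geometrically ergodic Metropolis-Hastings chain started in stationarity is $\beta$-mixing with exponentially decaying coefficients). Does that sound like the right direction?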
I'm open to any ideas!