Suppose I have a probability distribution $p_\theta$ on $\Bbb R^n$ dependent on some parameters $\theta$. A natural problem is to evaluate the derivative of an expectation with respect to the parameters: $$d_\theta\Bbb E_{p_\theta}[f]=d_\theta\int d^nx\, f(x)\,p_\theta(x)=\int d^nx\, f(x)\,\frac{d_\theta p_\theta(x)}{p_\theta(x)}\,p_\theta(x) =\Bbb E_{p_\theta}\!\left[f(x)\,d_\theta\ln p_\theta(x)\right]$$ It is to be expected that these integrals are intractable, simply because integrals are generally hard; for example, if $f$ is a complicated function, this is no easier than integrating $f$ itself.
However, if we are in a situation where we can sample from $p_\theta$ and evaluate the derivative $d_\theta \ln p_\theta$, are there any barriers to obtaining an estimator for $d_\theta \Bbb E_{p_\theta}[f]$ by sampling the right-hand side? What would be some standard literature on this ansatz?
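For concreteness, here is a minimal sketch (in Python/NumPy, a toy example of my own, not taken from any reference) of what I mean by "sampling the right-hand side", assuming $p_\theta$ is a Gaussian $N(\theta,\sigma^2)$ on $\Bbb R$ and $f(x)=x^2$, so that the exact gradient $d_\theta\Bbb E_{p_\theta}[f]=2\theta$ is available for comparison:

```python
import numpy as np

rng = np.random.default_rng(0)

theta, sigma = 1.5, 1.0
n_samples = 100_000

x = rng.normal(theta, sigma, size=n_samples)   # draw samples from p_theta
f = x**2                                       # evaluate the integrand f
score = (x - theta) / sigma**2                 # d_theta ln p_theta(x) for a Gaussian mean parameter

grad_estimate = np.mean(f * score)             # Monte Carlo estimate of E[f(x) d_theta ln p_theta(x)]
print(grad_estimate, 2 * theta)                # compare against the exact gradient 2*theta
```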
I would expect the answer to be yes, in part because this would be too simple a solution to a problem that seems hard, and in part because the logarithm and its derivative blow up for small arguments, so the regions where $p_\theta$ is small and hard to reach via sampling are given large weight, requiring more samples to approximate the derivative than is practical.
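To make the variance concern concrete, here is a rough numerical check in the same toy Gaussian setup as above (again my own illustration, not a claim about the general case): repeat the estimator with a modest sample budget and look at its spread across runs.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n_per_run, n_runs = 1.5, 1_000, 500

estimates = []
for _ in range(n_runs):
    x = rng.normal(theta, 1.0, size=n_per_run)
    # score of N(theta, 1) with respect to theta is (x - theta)
    estimates.append(np.mean(x**2 * (x - theta)))
estimates = np.asarray(estimates)

print("exact gradient:    ", 2 * theta)
print("mean of estimates: ", estimates.mean())
print("std across runs:   ", estimates.std())  # the spread I am worried about
```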