
Suppose an estimator $\hat\theta_T$ is defined as the value of $\theta$ maximizing $$\sum_{t=1}^T l(y_t \mid \theta) + \mu_T g(\theta),$$ where $l(y_t \mid \theta)$ is the log-likelihood of observation $t$, $\mu_T$ determines the strength of penalization (possibly a function of the sample size $T$), and $g(\theta)$ is a smooth penalty function.

The Maximum a Posteriori (MAP) case corresponds to $\mu_T=1$ with $g$ the log-prior.

In general this estimator is biased in finite samples (with bias potentially $O(1)$), and it will be inconsistent if, for example, $\mu_T = O(T)$.

Are there techniques for bias-correcting such an estimator, e.g. using information about the derivatives of $g$?
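
To see where this bias comes from, here is a heuristic first-order expansion (a sketch assuming scalar $\theta$ and standard regularity conditions). The penalized first-order condition is $\sum_{t=1}^T l'(y_t \mid \hat\theta_T) + \mu_T g'(\hat\theta_T) = 0$; expanding it around the unpenalized MLE $\tilde\theta_T$, which satisfies $\sum_{t=1}^T l'(y_t \mid \tilde\theta_T) = 0$, gives $$\hat\theta_T - \tilde\theta_T \approx \frac{\mu_T\, g'(\tilde\theta_T)}{-\sum_{t=1}^T l''(y_t \mid \tilde\theta_T)} \approx \frac{\mu_T}{T} \cdot \frac{g'(\theta_0)}{\mathcal{I}(\theta_0)},$$ where $\mathcal{I}(\theta_0)$ is the Fisher information at the true value $\theta_0$. The penalty-induced shift is therefore $O(\mu_T/T)$: $O(1/T)$ in the MAP case and $O(1)$ when $\mu_T = O(T)$, consistent with the claims above. It also suggests a plug-in correction that subtracts an estimate of this term.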

  • Of course. But MLE bias is $O(1/T)$. This bias is potentially $O(1)$. – cfp Jan 22 '20 at 14:17
  • If $\mu_T=O(T)$ this is no longer Bayesian. – Xi'an Jan 22 '20 at 14:17
  • I said that the MAP case was when $\mu_T=1$. – cfp Jan 22 '20 at 14:22
  • Needs to be general to be useful I'm afraid. – cfp Jan 25 '20 at 21:01
  • Bias-correcting ML is standard. It just needs third derivatives of the likelihood. See Wikipedia. But I'm happy if your answer results in an estimator that is only as biased as ML, i.e. the resulting bias is $O(1/T)$. – cfp Jan 25 '20 at 21:30
  • The bootstrap would do it, removing the first order term in the expansion of bias in terms of $T$ - but I assume you are looking for something analytic? – jbowman Jan 26 '20 at 01:37
  • Yes, I should have stated that, sorry. The bootstrap is not really feasible in context. – cfp Jan 26 '20 at 07:15
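
For concreteness, here is a minimal numerical sketch of the plug-in correction implied by the expansion above. The normal-mean setup, the penalty $g(\theta) = -\theta^2/2$, and all function names are illustrative assumptions rather than anything from the thread.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative setup (hypothetical, not from the thread): scalar normal mean
# with known unit variance, quadratic penalty g(theta) = -theta**2 / 2
# (the log of a normal prior, up to a constant).
rng = np.random.default_rng(0)
T = 50
theta0 = 2.0
y = rng.normal(theta0, 1.0, size=T)
mu_T = 0.5 * T  # strong penalization: mu_T = O(T)

def loglik(theta):   # sum_t l(y_t | theta), up to an additive constant
    return -0.5 * np.sum((y - theta) ** 2)

def g(theta):        # smooth penalty
    return -0.5 * theta ** 2

def g_prime(theta):  # g'(theta), used by the plug-in correction
    return -theta

def neg_objective(theta):
    return -(loglik(theta) + mu_T * g(theta))

theta_hat = minimize_scalar(neg_objective).x

# Plug-in first-order correction: subtract the estimated penalty-induced
# shift mu_T * g'(theta_hat) / (-sum_t l''(y_t | theta_hat)).
obs_info = T  # -sum_t l'' = T for this normal log-likelihood
theta_corrected = theta_hat - mu_T * g_prime(theta_hat) / obs_info

print(f"penalized estimate: {theta_hat:.4f}")
print(f"bias-corrected:     {theta_corrected:.4f}")
print(f"unpenalized MLE:    {y.mean():.4f}")
```

Because this log-likelihood is exactly quadratic, the correction recovers the unpenalized MLE exactly; for non-quadratic likelihoods it only removes the leading $O(\mu_T/T)$ penalty-induced term, leaving the usual $O(1/T)$ ML bias that the third-derivative (Cox–Snell-type) corrections mentioned in the comments target.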

0 Answers