Suppose an estimator $\hat\theta_T$ is defined as the value of $\theta$ maximizing: $$\sum_{t=1}^T{l(y_t|\theta)}+\mu_T g(\theta),$$ where $l(y_t|\theta)$ is the log-likelihood of observation $t$, $\mu_T$ determines the strength of penalization (possibly a function of the sample size $T$), and $g(\theta)$ is a smooth penalty.
The Maximum a Posteriori (MAP) case corresponds to $\mu_T=1$ with $g$ the log-prior.
In general this estimator is biased in finite samples (with bias potentially $O(1)$), and it will be inconsistent if, e.g., $\mu_T = O(T)$.
Are there techniques for bias-correcting such an estimator, e.g. using information about the derivatives of $g$?
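For concreteness, here is the kind of first-order correction I have in mind, sketched in Python for a toy Gaussian-mean model with a ridge-type penalty (the model, penalty, and all names are illustrative, not part of the question). The idea is that the penalized estimator solves $S(\theta) + \mu_T g'(\theta) = 0$, so expanding the score $S$ around the unpenalized MLE suggests the one-step correction $\hat\theta + \mu_T H^{-1} g'(\hat\theta)$, where $H$ is the Hessian of the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0
T = 50
mu = 10.0  # penalty strength mu_T (illustrative value)
y = rng.normal(theta_true, 1.0, size=T)

# Toy model: y_t ~ N(theta, 1), penalty g(theta) = -theta^2 / 2.
# The penalized objective sum_t log N(y_t; theta, 1) + mu * g(theta)
# is maximized in closed form at theta_hat = sum(y) / (T + mu).
theta_hat = y.sum() / (T + mu)

# First-order correction: the penalized score satisfies
#   S(theta_hat) + mu * g'(theta_hat) = 0,
# and expanding S around the MLE gives
#   theta_hat - theta_mle ~= -mu * H^{-1} g'(theta_hat),
# with H the Hessian of the log-likelihood. Undoing that shift:
H = -T               # d^2/dtheta^2 of the Gaussian log-likelihood
g_prime = -theta_hat # g'(theta) for g(theta) = -theta^2 / 2
theta_corr = theta_hat + mu * g_prime / H

print(theta_hat, theta_corr, y.mean())
```

In this linear-score toy case the correction recovers the unpenalized MLE $\bar y$ exactly; in general it would only remove the leading-order penalty-induced bias, which is why I am asking whether there are more systematic (e.g. higher-order) versions of this.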