I have been trying to understand a daunting problem in Bayesian modeling: how is real-world domain knowledge converted into Bayesian priors?
Logically speaking, it seems that Bayesian priors can address the following kinds of problems:
Real-world knowledge encoded as Bayesian priors can "nudge" parameter estimates toward more "realistic" values when the observed data are of insufficient quality, too few, or otherwise limited. Even when the data are deemed sufficient, correctly specified priors can still improve the quality of the parameter estimates.
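To make this concrete, here is a minimal sketch of the "nudging" idea using a conjugate Beta-Binomial model. All numbers are hypothetical: with 0 successes in 5 trials, the MLE of a rate is an implausible 0.0, while a prior encoding domain knowledge that rates near 5% are typical pulls the estimate toward a realistic value.

```python
# Hypothetical example: estimating a conversion rate from only 5 trials.
successes, trials = 0, 5

# MLE (no prior): with 0/5 observed, the estimate collapses to 0.0.
mle = successes / trials

# Domain knowledge encoded as a Beta(2, 38) prior (prior mean 0.05).
# Under a Binomial likelihood, the posterior is Beta(a + s, b + n - s),
# so the posterior mean has a simple closed form.
a, b = 2.0, 38.0
post_mean = (a + successes) / (a + b + trials)

print(mle)        # 0.0
print(post_mean)  # ~0.044, pulled toward the prior mean of 0.05
```

The prior's pseudo-counts (2 prior "successes", 38 prior "failures") are exactly where the domain knowledge enters; as the number of real trials grows, the data dominate and the prior's influence fades.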
I have also read examples of Bayesian priors acting as "latent variables": real-world knowledge encoded as priors can serve as a "probabilistic correction factor" that reduces the bias and noise of estimates.
However, when it is unclear how to convert real-world knowledge into priors, I have heard that Bayesian priors are instead justified by analogy to regularization: a Laplace prior corresponds to the L1 penalty and a Gaussian prior to the L2 penalty. More often than not, this appears to be an unintended use of Bayesian methods: they end up serving as general regularization techniques rather than enriching the quality of the estimates through carefully procured real-world knowledge.
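The Gaussian-prior/L2 correspondence can be verified numerically. The sketch below (with made-up data) checks that, for linear regression with noise variance sigma2 and a zero-mean Gaussian prior of variance tau2 on the coefficients, the MAP estimate coincides with ridge regression at lambda = sigma2 / tau2:

```python
import numpy as np

# Made-up data for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
sigma2, tau2 = 1.0, 0.1
y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=50)

# Ridge closed form: (X'X + lam I)^{-1} X'y, with lam = sigma2 / tau2.
lam = sigma2 / tau2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# MAP estimate: mode of the Gaussian posterior, whose precision is
# X'X / sigma2 + I / tau2 and whose mean solves A w = X'y / sigma2.
A = X.T @ X / sigma2 + np.eye(3) / tau2
w_map = np.linalg.solve(A, X.T @ y / sigma2)

print(np.allclose(w_ridge, w_map))  # True: the two estimates coincide
```

So in this reading the prior variance tau2 is just a re-parameterized penalty strength, which is exactly the "generic regularizer" usage I am asking about, as opposed to a prior whose location and scale come from substantive domain knowledge.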
My question: Can anyone recommend sources (e.g., research papers) that demonstrate, on applied statistical models, how real-world knowledge was transformed into Bayesian priors for the purpose of "enriching" parameter estimates? I have spent some time searching for such references, but the ones I find do not tend to describe exactly how the real-world knowledge is converted into priors.
Thanks!