I am writing a Gibbs sampler for data that is Log-Normal (LN) distributed, with unknown mean and variance. There is a wealth of information on inference for LN models when either the mean or the variance (precision) is known, but I'm not finding much on inferring both parameters jointly. I have an idea of "reasonable" bounds on both parameters, but otherwise want to remain mostly uninformative.
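To be explicit about the parametrization I mean ($\mu$ and $\sigma^2$ are the mean and variance of $\log y$, not of $y$ itself):
$$
y_i \sim \mathrm{LogNormal}(\mu, \sigma^2) \quad\Longleftrightarrow\quad \log y_i \sim \mathcal{N}(\mu, \sigma^2),
$$
with both $\mu$ and $\sigma^2$ unknown.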
This question on priors for LNs seems related, but the answers simply mention that one should use the Jeffreys prior for the variance. I'm looking for more specific information: which priors to use, and how one would sample from the resulting conditionals. Also, would we have to resort to a Metropolis step, or can we choose the priors so that the full conditional posteriors are available in closed form? A sketch of the kind of sampler I'm hoping for follows below.
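For concreteness, here is a minimal sketch of the closed-form Gibbs sampler I'm hoping is possible, assuming (purely for illustration) a Normal prior on the log-scale mean $\mu$ and an Inverse-Gamma prior on $\sigma^2$. The hyperparameters `m0, s0sq, a0, b0` and the simulated data are placeholders, not my actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: y_i ~ LogNormal(mu, sigma^2), so z_i = log(y_i) ~ Normal(mu, sigma^2)
y = rng.lognormal(mean=1.0, sigma=0.5, size=200)
z = np.log(y)
n, zbar = len(z), z.mean()

# Illustrative weakly informative priors:
# mu ~ Normal(m0, s0sq), sigma^2 ~ Inverse-Gamma(a0, b0)
m0, s0sq = 0.0, 100.0
a0, b0 = 0.01, 0.01

n_iter = 5000
mu, sigsq = zbar, z.var()            # initial values
draws = np.empty((n_iter, 2))

for t in range(n_iter):
    # mu | sigma^2, z : conjugate Normal update on the log scale
    prec = 1.0 / s0sq + n / sigsq
    mean = (m0 / s0sq + n * zbar / sigsq) / prec
    mu = rng.normal(mean, np.sqrt(1.0 / prec))

    # sigma^2 | mu, z : Inverse-Gamma(a0 + n/2, b0 + 0.5 * sum((z - mu)^2))
    a_n = a0 + 0.5 * n
    b_n = b0 + 0.5 * np.sum((z - mu) ** 2)
    sigsq = 1.0 / rng.gamma(a_n, 1.0 / b_n)   # draw Gamma, invert for Inverse-Gamma

    draws[t] = mu, sigsq
```

If this conditionally conjugate setup is actually valid for the LN model, I could encode my rough bounds through the hyperparameters; if not, I'd like to understand which choice of priors forces a Metropolis step.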