In my Bayesian statistics class, we are always required to specify vague (or noninformative) priors for our models, and I am quite confused about that.
If I understand correctly, the main advantage of the Bayesian approach over the frequentist one is that it provides a formal, principled way to incorporate useful prior information into models.
My questions are:
(1) Generally, when and why are vague or noninformative priors used in Bayesian modeling?
(2) If prior knowledge is scarce, what is the point of using Bayesian modeling at all?
For example, consider a linear mixed-effects model, $$\mathbf{y} = \mathbf{X} \boldsymbol \beta + \mathbf{Z} \boldsymbol \gamma + \boldsymbol \epsilon$$ where $\boldsymbol \beta$ is the fixed-effect vector, $\boldsymbol \gamma \ \sim \ N(\mathbf{0}, \sigma_{\gamma}^2 \mathbf{I})$, and $\boldsymbol \epsilon \ \sim \ N(\mathbf{0}, \sigma_{\epsilon}^2 \mathbf{I})$.
Then with the Bayesian approach, we would specify some vague or noninformative priors on the parameters $\boldsymbol \beta, \ \sigma_{\gamma}^2, \ \sigma_{\epsilon}^2$.
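For concreteness, here is a minimal sketch of what such a prior specification might look like in PyMC; the particular choices of $N(0, 100^2)$ for $\boldsymbol \beta$ and half-Cauchy(5) for the scale parameters are just illustrative assumptions, not the only options:

```python
import numpy as np
import pymc as pm

# Simulate data matching y = X beta + Z gamma + epsilon
rng = np.random.default_rng(0)
n, p, q = 200, 3, 8                      # observations, fixed effects, groups
X = rng.normal(size=(n, p))
groups = rng.integers(0, q, size=n)
Z = np.eye(q)[groups]                    # random-effect design (group indicators)
y = (X @ np.array([1.0, -2.0, 0.5])
     + rng.normal(0, 1.0, size=q)[groups]
     + rng.normal(0, 0.5, size=n))

with pm.Model():
    # "Vague" priors: a very flat normal on the fixed effects and
    # heavy-tailed half-Cauchy priors on the standard deviations
    beta = pm.Normal("beta", mu=0.0, sigma=100.0, shape=p)
    sigma_gamma = pm.HalfCauchy("sigma_gamma", beta=5.0)
    sigma_eps = pm.HalfCauchy("sigma_eps", beta=5.0)
    gamma = pm.Normal("gamma", mu=0.0, sigma=sigma_gamma, shape=q)

    mu = pm.math.dot(X, beta) + pm.math.dot(Z, gamma)
    pm.Normal("y_obs", mu=mu, sigma=sigma_eps, observed=y)
    idata = pm.sample()                  # posterior for beta, sigma_gamma, sigma_eps
```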
I just don't understand: if the priors are noninformative anyway, why not simply fit the linear mixed-effects model in the frequentist way?