I've recently been studying Bayesian inference with PyMC3. I understand the flexibility that comes with the many possible choices of prior distribution, yet I can't seem to understand why one would need the sampling step. I realize this is a very naive question, but why does one not simply stop at the MAP step, where the model parameter values are found?
Why NUTS, Gibbs, or any other sampler? What is this useful for? I see that whole distributions are obtained for the individual parameters and can be visualized. I assume this has to do with some sort of parameter validation, where one inspects the quality of the obtained parameters?
My current understanding is that MAP estimates are used as starting points for the sampling. Once the sampling is done, how does one obtain "more correct" parameter values from the MCMC draws?
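To make the question concrete, here is a toy sketch of what confuses me (not my actual model): a Beta-Binomial coin-flip problem where the posterior is known in closed form, so I can draw exact samples with the standard library instead of MCMC. The MAP is a single number, while the draws give a whole distribution — I'm asking what the latter buys me.

```python
import random

# Hypothetical toy model: Beta(2, 2) prior on a coin's bias,
# data = 7 heads in 10 flips, so the posterior is Beta(9, 5).
alpha, beta = 2 + 7, 2 + 3  # posterior parameters

# MAP: the single mode of the posterior, (alpha - 1) / (alpha + beta - 2).
map_estimate = (alpha - 1) / (alpha + beta - 2)

# Exact posterior draws stand in for MCMC samples here.
random.seed(0)
draws = sorted(random.betavariate(alpha, beta) for _ in range(20000))

posterior_mean = sum(draws) / len(draws)
ci_low = draws[int(0.025 * len(draws))]   # 2.5% quantile
ci_high = draws[int(0.975 * len(draws))]  # 97.5% quantile

print(f"MAP point estimate:    {map_estimate:.3f}")
print(f"Posterior mean:        {posterior_mean:.3f}")
print(f"95% credible interval: [{ci_low:.3f}, {ci_high:.3f}]")
```

The MAP here is about 0.667, but the draws also give a posterior mean and a credible interval, i.e. a measure of how uncertain that estimate is — which, if I understand correctly, is what the sampling step is for.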
I would really like to use this methodology for the modeling tasks I am dealing with, yet I just don't see the upside of having to choose one of n possible priors, each of which could potentially give different results (?)
Thank you very much.