Brute-force Maximum A Posteriori (MAP) estimation involves computing the posterior probability for every candidate value of $\theta$ and then choosing the value of $\theta$ that maximizes $P(\theta | D)$. Right?
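To make sure I understand, here is a minimal sketch of what I mean by the brute-force approach (the Gaussian likelihood, the prior, the grid, and the toy data are all illustrative choices of mine, not taken from the article):

```python
# Brute-force MAP: evaluate the (unnormalized) log posterior on a grid of
# theta values and keep the argmax.
import numpy as np
from scipy.stats import norm

data = np.array([1.2, 0.8, 1.5, 1.1])        # toy observations
theta_grid = np.linspace(-5, 5, 10001)       # candidate values of theta

def log_posterior(theta, data):
    log_prior = norm.logpdf(theta, loc=0.0, scale=10.0)              # some prior on theta
    log_likelihood = norm.logpdf(data, loc=theta, scale=1.0).sum()   # log P(D | theta)
    return log_prior + log_likelihood        # equal to log P(theta | D) up to a constant

log_post = np.array([log_posterior(t, data) for t in theta_grid])
theta_map = theta_grid[np.argmax(log_post)]  # value of theta maximizing P(theta | D)
print(theta_map)
```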
If that is the case, why is it necessary to assume uniform priors and noiseless data (as stated in one of the articles I read)?
I do not see the point of these assumptions. For example, even if the prior is non-uniform, the posterior $p(\theta|D)$ can still be obtained. Likewise, even if the data are noisy, we can model the noise with some distribution (depending on its source) and append the parameters of that noise distribution to $\theta$, forming an extended parameter vector. Then we can compute the posterior by Bayes' rule, as in the sketch below.
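For concreteness, here is the kind of thing I have in mind for the noisy case: an unknown noise scale $\sigma$ is appended to $\theta$, a non-uniform prior is placed on both, and the same brute-force grid search is run over the joint posterior. Again, the specific distributions and grids are arbitrary choices of mine:

```python
# Extended-theta version: treat the unknown noise scale sigma as an extra
# parameter and grid-search the joint posterior P(theta, sigma | D).
import numpy as np
from scipy.stats import norm, halfnorm

data = np.array([1.2, 0.8, 1.5, 1.1])
theta_grid = np.linspace(-5, 5, 401)
sigma_grid = np.linspace(0.05, 5, 200)

def log_posterior(theta, sigma, data):
    # Non-uniform priors: Gaussian on theta, half-normal on sigma
    log_prior = norm.logpdf(theta, loc=0.0, scale=10.0) + halfnorm.logpdf(sigma, scale=2.0)
    log_likelihood = norm.logpdf(data, loc=theta, scale=sigma).sum()
    return log_prior + log_likelihood

log_post = np.array([[log_posterior(t, s, data) for s in sigma_grid]
                     for t in theta_grid])
i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
print(theta_grid[i], sigma_grid[j])  # joint MAP estimate of (theta, sigma)
```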
What is your take on this?