A very similar question was posed on stats.SE: Bayesian and frequentist reasoning in plain English, which provoked some interesting debate. However, that debate is focussed on how a statistician would behave, and doesn't really answer the question from a Machine Learning perspective.
The question is in three parts:
In simple terms, what is the difference between the Probabilistic (Bayesian) and the Optimisation (Frequentist) approach to machine learning?
What are the key advantages/disadvantages of each method?
As a practitioner, are there any guidelines to help me decide which method I should choose for a particular problem?
Some examples of competing methods in different areas of Machine Learning include:
- Classification. SVM vs Gaussian Process Classification
- Regression. Kernel Ridge Regression vs Gaussian Process Regression
- Dimensionality reduction. PCA vs Probabilistic PCA
- Topic modelling. Non-negative Matrix Factorisation vs Latent Dirichlet Allocation
- Multiple Kernel Learning. SimpleMKL vs VBpMKL (Variational Bayes probabilistic MKL)
- Multi-Task Learning. Regularised Multi-Task Learning vs Sparse Bayesian Multi-task Learning
- Compressed sensing. $\ell_1$ minimisation vs Bayesian Compressed Sensing using Laplace Priors
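To make the regression pairing above concrete: a minimal numpy sketch (my own illustration, not from any of the papers above) of the well-known fact that kernel ridge regression and Gaussian Process regression produce the same point prediction when the ridge regulariser equals the GP noise variance — the Bayesian view then adds a predictive variance on top. The kernel choice (squared-exponential) and all data here are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))            # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
Xs = np.linspace(-3, 3, 50)[:, None]            # test inputs

def rbf(A, B, ls=1.0):
    # squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 ls^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

K = rbf(X, X)
Ks = rbf(Xs, X)
noise = 0.01  # GP noise variance sigma^2 == ridge regulariser lambda

# Frequentist/optimisation view -- kernel ridge:
#   argmin_a ||y - K a||^2 + lambda a^T K a  =>  a = (K + lambda I)^{-1} y
a = np.linalg.solve(K + noise * np.eye(len(X)), y)
f_krr = Ks @ a

# Bayesian view -- GP posterior mean: k(X*, X)(K + sigma^2 I)^{-1} y
# (algebraically the same expression, derived as a posterior expectation)
f_gp = Ks @ np.linalg.solve(K + noise * np.eye(len(X)), y)

print(np.allclose(f_krr, f_gp))  # the point estimates coincide

# What the Bayesian treatment adds: a predictive variance at each test point
v_gp = rbf(Xs, Xs).diagonal() - np.einsum(
    'ij,ji->i', Ks, np.linalg.solve(K + noise * np.eye(len(X)), Ks.T))
```

The variance `v_gp` is exactly the kind of output the optimisation view has no direct analogue for, which is one concrete way the advantages/disadvantages question plays out in practice.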