I understand why Gaussian Processes are considered "non-parametric", but why do most authors use non-parametric models for Bayesian Optimization?

What's the benefit of using such models as opposed to parametric approaches? (GLM, etc.)

Josh

2 Answers


Parametric models assume the samples come from a specific distribution, e.g. a mixture of Gaussians where the number of components is known a priori. This is restrictive, since for most real-world problems we cannot know beforehand how complex the data is. A nonparametric method, by contrast, should find the number of Gaussian components itself. As you can see in this example, the nonparametric method still assumes something, namely that the data come from a mixture of Gaussians, but it does not assume the number of components. Gaussian processes are nonparametric too: they use every single training point to build a basis. GP methods are therefore flexible and powerful, and they can learn complicated distributions (or decision boundaries, in the case of Gaussian process classification).
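As a concrete illustration, here is a minimal sketch (assuming scikit-learn and NumPy are available; the synthetic data and the 0.01 weight threshold are illustrative choices, not part of the original answer) of a Dirichlet-process mixture that is given only an upper bound on the number of components and infers the effective number from the data, in contrast to a parametric mixture that must be told the number up front:

```python
# Minimal sketch: a nonparametric (Dirichlet-process) mixture inferring
# how many Gaussian components the data actually needs.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic 1-D data drawn from 3 Gaussian components (unknown to the model).
X = np.concatenate([
    rng.normal(-5.0, 1.0, size=(200, 1)),
    rng.normal(0.0, 0.5, size=(200, 1)),
    rng.normal(4.0, 1.5, size=(200, 1)),
])

# n_components is only an upper bound; the Dirichlet-process prior pushes
# the weights of unneeded components toward zero.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(X)

# Count components that retain non-negligible posterior weight.
print((dpgmm.weights_ > 0.01).sum())  # should typically recover 3
```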

Seeda

There's no evidence that the objective function should be linear, so nonparametric models such as Gaussian processes, random forests, or neural networks are used as surrogates that can approximate it as a nonlinear function.
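As a rough illustration, here is a minimal sketch (assuming scikit-learn; the objective `f`, the RBF kernel, and the UCB-style selection rule are illustrative choices, not something this answer prescribes) of a GP surrogate fit to a few expensive evaluations of a nonlinear objective. The posterior mean and standard deviation it produces are what an acquisition function would consume to pick the next point:

```python
# Minimal sketch: a GP surrogate over a nonlinear black-box objective.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):
    # Stand-in for the expensive black-box objective -- nonlinear here.
    return np.sin(3 * x) + 0.5 * x

X_train = np.array([[0.2], [1.0], [2.5], [4.0]])  # a few costly evaluations
y_train = f(X_train).ravel()

# Fit the GP surrogate; no linearity assumption is made about f.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              normalize_y=True).fit(X_train, y_train)

# Posterior mean and std over candidate points, as used by an
# acquisition function.
X_query = np.linspace(0.0, 5.0, 50).reshape(-1, 1)
mean, std = gp.predict(X_query, return_std=True)
print(X_query[np.argmax(mean + 1.96 * std)])  # a UCB-style next evaluation
```

Nothing in this sketch assumes the objective is linear; the surrogate's flexibility comes from conditioning on every observed evaluation.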

Sengiley
  • You may say, let's use parametric models like n-degree polynomials or splines, but Gaussian Processes can be considered a generalization of them – Sengiley Jan 06 '17 at 20:24
  • Yes, but couldn't one argue that there is no evidence, either, that an objective function would fit any *other* model? I.e., all models are equally likely to be a good fit in the absence of data, unless you know _something_ about the objective function? – Josh Jan 09 '17 at 19:28