
Is it possible to perform an approximate fully Bayesian (1) selection of hyperparameters (e.g. the covariance scale) with the GPML code, instead of maximizing the marginal likelihood (2)? I think using MCMC methods to solve the integrals over the hyperparameter priors should lead to better results when dealing with overfitting. To my knowledge, the GPML framework doesn't include these computations, but perhaps there are good third-party codes.


(1) Sec. 5.2, Ch. 5 in Gaussian Processes for Machine Learning, Rasmussen & Williams, 2006

(2) Section "Regression" in the GPML documentation
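To make the question concrete, here is a minimal sketch in plain NumPy (this is *not* the GPML API) of what I mean by marginalizing the hyperparameters with MCMC instead of optimizing them: a random-walk Metropolis sampler over the log hyperparameters of a squared-exponential covariance. The priors, proposal scale, and toy data are illustrative assumptions.

```python
# Minimal sketch: marginalize GP hyperparameters by Metropolis-Hastings
# instead of maximizing the marginal likelihood. Plain NumPy, not GPML.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (hypothetical)
X = np.linspace(0, 5, 30)[:, None]
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)

def sq_exp_kernel(A, B, log_ell, log_sf):
    d2 = (A - B.T) ** 2
    return np.exp(2 * log_sf) * np.exp(-0.5 * d2 / np.exp(2 * log_ell))

def log_marginal_likelihood(theta):
    """log p(y | X, theta) for theta = (log ell, log sf, log sn)."""
    log_ell, log_sf, log_sn = theta
    K = sq_exp_kernel(X, X, log_ell, log_sf) + np.exp(2 * log_sn) * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(X) * np.log(2 * np.pi))

def log_prior(theta):
    # Assumed prior: independent N(0, 1) on each log hyperparameter.
    return -0.5 * np.sum(theta ** 2)

def log_posterior(theta):
    return log_marginal_likelihood(theta) + log_prior(theta)

# Random-walk Metropolis over the log hyperparameters
theta = np.zeros(3)
lp = log_posterior(theta)
samples = []
for i in range(5000):
    prop = theta + 0.1 * rng.standard_normal(3)   # proposal scale is a guess
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 1000:            # discard burn-in
        samples.append(theta.copy())
samples = np.array(samples)

# Predictive mean averaged over the hyperparameter posterior (thinned chain)
Xs = np.linspace(0, 5, 100)[:, None]
mu = np.zeros(len(Xs))
for t in samples[::50]:
    K = sq_exp_kernel(X, X, t[0], t[1]) + np.exp(2 * t[2]) * np.eye(len(X))
    Ks = sq_exp_kernel(Xs, X, t[0], t[1])
    mu += Ks @ np.linalg.solve(K, y)
mu /= len(samples[::50])
```

The predictive mean `mu` then averages over the hyperparameter posterior rather than committing to a single optimized value, which is the behaviour I am hoping an existing toolbox already provides.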

Emile
  • Have you heard of INLA? It may be what you're after. – probabilityislogic Sep 11 '13 at 10:12
  • This is not adding to your question, but have you managed to find useful work in this area of putting priors on length scales? I absolutely hate the idea of just numerically optimising the length scales of a GP. – sachinruk Nov 15 '13 at 04:20
  • 1
    (+1) Great question. This is no MCMC, but there are third-party packages that allow partial marginalization of hyperparameters with GPML via Laplace approximation, if you're interested. See [this question](https://stats.stackexchange.com/questions/173216/marginalization-of-gp-regression-hyperparameters-with-laplace-approximation) and related answers. – lacerbi Apr 01 '16 at 20:38

1 Answer


There is another package for machine learning with Gaussian processes, called GPstuff, which in my opinion has it all. You can use MCMC, integration on a grid, etc. to marginalise out your hyperparameters.

NB: In the GPstuff documentation, hyperparameters are referred to simply as parameters.
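To illustrate what "integration on a grid" refers to, here is a compact sketch in plain NumPy (this is *not* GPstuff's MATLAB API, just the underlying idea): evaluate the hyperparameter posterior on a grid of log hyperparameters and weight each node by its normalized posterior mass. The grid ranges, kernel, and toy data are illustrative assumptions.

```python
# Sketch of grid-based marginalization of GP hyperparameters (plain NumPy).
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
X = np.linspace(0, 5, 25)[:, None]
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(25)

def lml(log_ell, log_sf, log_sn):
    """Log marginal likelihood for a squared-exponential GP."""
    d2 = (X - X.T) ** 2
    K = (np.exp(2 * log_sf) * np.exp(-0.5 * d2 / np.exp(2 * log_ell))
         + np.exp(2 * log_sn) * np.eye(len(X)))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ a - np.sum(np.log(np.diag(L))) - 0.5 * len(X) * np.log(2 * np.pi)

# Grid over log hyperparameters (assumed flat prior on these ranges)
grid = list(product(np.linspace(-1, 1, 7),     # log length-scale
                    np.linspace(-1, 1, 7),     # log signal std
                    np.linspace(-3, 0, 7)))    # log noise std
logp = np.array([lml(*g) for g in grid])
w = np.exp(logp - logp.max())
w /= w.sum()                                   # posterior weights on the grid

# Posterior mean of each log hyperparameter under the grid approximation
post_mean = (w[:, None] * np.array(grid)).sum(axis=0)
```

Predictions can then be computed at each grid node and combined with the weights `w`, which is, as I understand it, the spirit of GPstuff's grid-based integration over hyperparameters.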

Mehrdad