
I have read a little bit about GP for classification and there is something I do not understand.

The articles I have read put the output of the GP through a logistic (sigmoid) link to squash it into a probability range, but this breaks the nice Gaussian properties that allow explicit posterior computation.

What I am wondering is: why is the logistic link needed at all?

Why can't you just do regression with target values of +1 and -1 and choose the class closest to the output value?
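For concreteness, here is a minimal sketch of the scheme I mean, assuming a squared-exponential kernel and made-up 1-D data (the kernel choice and data are my own illustration, not from any of the articles):

```python
# Sketch of "GP regression on +/-1 labels, classify by sign".
# Toy 1-D data and a squared-exponential kernel are assumptions for illustration.
import numpy as np

def rbf(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between 1-D input arrays A and B.
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 20)
y = np.where(X > 0, 1.0, -1.0)           # regression targets are +/-1

noise = 0.1
K = rbf(X, X) + noise * np.eye(len(X))   # standard GP regression
alpha = np.linalg.solve(K, y)            # closed-form posterior weights

X_test = np.array([-2.0, 2.0])
mean = rbf(X_test, X) @ alpha            # posterior mean: unbounded real values
pred = np.sign(mean)                     # pick the class closest to the output
print(pred)
```

The posterior mean here is computed in closed form, exactly the property the question is about, but its values live on the whole real line rather than in [0, 1], so they cannot be read directly as class probabilities.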

FourierFlux
    Could you please cite the articles for better context? – msuzen Jan 18 '22 at 00:15
  • The reasons are exactly the same as for using logistic vs linear regression for classification. The predicted probabilities are more meaningful and easier to interpret, while predictions on the whole real line for the -1 and +1 labels are on an arbitrary, meaningless scale. I marked this question as a duplicate of another one that discusses this for logistic regression. – Tim Jan 18 '22 at 07:38

0 Answers