I have read a little bit about Gaussian processes (GPs) for classification, and there is something I do not understand.
The articles I have read pass the latent output of the GP through a sigmoid (logistic) link to squash it into a probability, but this breaks the nice property of the Gaussian likelihood that lets the posterior be computed in closed form.
What I am wondering is: why do you need the sigmoid link at all?
Why can't you just do regression with target values of 1 and -1, and then pick the class whose label is closest to the predicted value?
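To make concrete what I mean by the second option, here is a rough sketch using scikit-learn's GaussianProcessRegressor on a made-up toy dataset (the RBF kernel and the data are just placeholders for illustration, not from any particular article):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1-D binary problem with labels encoded as -1 / +1
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.where(X[:, 0] > 0, 1.0, -1.0)

# Ordinary GP regression on the +/-1 targets; the Gaussian likelihood
# keeps the posterior available in closed form.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gpr.fit(X, y)

# Classify by taking the sign of the posterior mean,
# i.e. the class whose label the prediction is closest to.
X_test = np.linspace(-3, 3, 7).reshape(-1, 1)
pred_class = np.sign(gpr.predict(X_test))
print(pred_class)
```

Is there something fundamentally wrong with this approach compared to squashing the latent function through a sigmoid?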