
I want to fit a support vector regression (SVR) model using a linear kernel function. Does it make sense to perform cross-validation to optimise the hyperparameters when the kernel is linear? If so, what range of values should the search cover for each hyperparameter?

Thank you

Lior

2 Answers


Yes, it makes full sense to use cross-validation to find optimal hyper-parameter values in the case of an SVM with a linear kernel.

If anything, choosing the regularisation parameter $C$ in the Lagrangian formulation is analogous to choosing the ridge penalty in ridge regression. It is therefore necessary that the training data be scaled appropriately (usually to mean $0$ and standard deviation $1$). Regarding the actual choice of $C$, it is common to search over exponentially growing sequences, e.g. $C = 2^{-6},2^{-5},\ldots,2^{5},2^{6}$. CV.SE has a nice thread on this matter if you want to explore it further: "Which search range for determining SVM optimal C and gamma parameters?".
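
For concreteness, here is a minimal sketch of such a search in R using the e1071 package (the data below are made up purely for illustration). `tune()` runs 10-fold cross-validation by default, and `svm()` standardises the inputs internally by default (`scale = TRUE`):

```r
library(e1071)

# made-up regression data: 5 predictors, linear signal plus noise
set.seed(1)
n <- 200
X <- matrix(rnorm(n * 5), n, 5)
y <- X %*% c(1, -2, 0.5, 0, 0) + rnorm(n)
dat <- data.frame(X, y = as.numeric(y))

# 10-fold CV (tune() default) over an exponentially spaced grid of C
cv <- tune(svm, y ~ ., data = dat,
           type = "eps-regression", kernel = "linear",
           ranges = list(cost = 2^(-6:6)))

cv$best.parameters   # selected value of C
cv$best.performance  # CV mean squared error at that C
```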

usεr11852

As usεr11852 says, cross-validation does make sense for optimising linear SVMs.

According to Hastie et al., "The Entire Regularization Path for the Support Vector Machine", Journal of Machine Learning Research 5 (2004) 1391–1415, inside the cross-validation you do not need to fit one SVM per value of $C$: the complete path of solutions over all values of $C$ can be computed at essentially the cost of a single fit.
If you work in R, the svmpath package provides this.
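
For illustration, a minimal sketch of how this might look (the data are made up; note that svmpath, as published, computes the path for the two-class SVM classifier of that paper, with the response coded as $-1/+1$, and the path is indexed by $\lambda$, which plays the role of $1/C$):

```r
library(svmpath)

# made-up two-class data; svmpath() expects the response coded as -1/+1
set.seed(1)
x <- matrix(rnorm(100 * 2), 100, 2)
y <- ifelse(x[, 1] + x[, 2] + rnorm(100) > 0, 1, -1)

# one call computes the solution at every breakpoint of the regularisation path
fit <- svmpath(x, y)
length(fit$lambda)   # number of breakpoints along the path

# decision values on the training data at one point on the path
f <- predict(fit, x, lambda = fit$lambda[10])
```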

cbeleites unhappy with SX