I'm looking at the formulation for SVC as stated on sklearn's website (http://scikit-learn.org/stable/modules/svm.html#svc). The loss function there minimizes a "flatness" term plus a C-weighted "error" term (I'm not sure that's the usual terminology, but it's how I think about it).
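For reference, the formulation on that page is (as I read it):

$$\min_{w, b, \zeta} \quad \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{n} \zeta_i$$
$$\text{subject to} \quad y_i \left( w^T \phi(x_i) + b \right) \geq 1 - \zeta_i, \quad \zeta_i \geq 0, \quad i = 1, \ldots, n$$

where $\frac{1}{2}\|w\|^2$ is what I'm calling the "flatness" term and $C \sum_i \zeta_i$ the "error" term.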
However, LinearSVC (http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html) seems to have the ability to do both l1 and l2 regularization. As I'm used to it (e.g. in the context of logistic regression), this is usually a penalty term on the coefficients; in this case, I'd assume it's on the vector w. That being my assumption, I have two questions:
(1) why isn't there a term for this in the loss function as described (maybe it's just a documentation issue)?
(2) why isn't there a second regularization coefficient to specify?
I'm assuming they didn't screw up, so my real question is: how does regularization work in the SVM/SVC context, i.e. what's the loss function?
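For concreteness, here's a minimal sketch of the two calls I'm comparing (the toy data is just for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=100, n_features=5, random_state=0)

# SVC: only the single C parameter controls regularization strength.
clf_svc = SVC(kernel="linear", C=1.0).fit(X, y)

# LinearSVC: a penalty argument ("l1" or "l2") on top of C, which is what
# confuses me -- where does that penalty show up in the stated loss function?
clf_l2 = LinearSVC(penalty="l2", C=1.0).fit(X, y)
clf_l1 = LinearSVC(penalty="l1", dual=False, C=1.0).fit(X, y)  # l1 requires dual=False
```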