
According to here:

> Now knowing the $a_i$ we can find the weights $w$ for the maximal margin separating hyperplane: $$w = \sum_{i=1}^{l} a_i y_i x_i$$

I cannot understand what this says: I have trouble seeing how to choose the $a_i$. I think we must use something like Newton's method, Pegasos, or SMO.

In addition, I want to implement a hard-margin (linearly separable) SVM in Python, so I am looking for the simplest (easiest) optimization method.

I would be very grateful for any answers that take this into account.

yoyo

1 Answer


You're correct that finding the $a_i$ requires some form of iterative optimization. The thing that doesn't require optimization is converting from the dual variables $a_i$ to the primal variable $w$, which just follows the formula that you wrote.
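To make that conversion concrete, here is a minimal NumPy sketch, assuming you already have the dual coefficients `a` from some solver; the function name `dual_to_primal` and the `1e-8` support-vector tolerance are illustrative choices, not part of any standard API:

```python
import numpy as np

def dual_to_primal(a, y, X):
    """Recover w = sum_i a_i y_i x_i and the bias b from dual
    coefficients a (shape (l,)), labels y in {-1, +1} (shape (l,)),
    and data X (shape (l, d))."""
    w = (a * y) @ X                 # the formula from the question
    sv = a > 1e-8                   # support vectors have a_i > 0
    # For hard-margin support vectors, y_i (w . x_i + b) = 1,
    # so b = y_i - w . x_i; averaging over all of them is more
    # stable numerically than picking a single one.
    b = np.mean(y[sv] - X[sv] @ w)
    return w, b
```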

Danica
  • That does make sense, thank you. And as I wrote, I want the simplest optimization method, so I was wondering if you could give me some ideas? – yoyo Aug 23 '18 at 00:18
  • If you actually want to use it for something, use an existing library, e.g. scikit-learn's bindings to libsvm. If you want to implement it yourself for learning purposes, stochastic gradient descent is probably simplest. – Danica Aug 23 '18 at 00:58
  • SGD is for unconstrained problems, isn't it? – yoyo Aug 23 '18 at 22:26
  • Ah, yeah, you'd have to implement some kind of workaround for the constraints (not sure if you can easily project?). Or you could implement SMO; I don't know how complicated it is (never done it). – Danica Aug 24 '18 at 02:42
  • OK, implementing SMO is difficult for me, but thank you. – yoyo Aug 24 '18 at 03:53
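Following up on the constraint issue raised in the comments: if you absorb the bias $b$ by appending a constant feature to each $x_i$, the equality constraint $\sum_i a_i y_i = 0$ drops out of the dual, and the only remaining constraint $a_i \ge 0$ can be enforced by simple clipping. That makes plain projected gradient ascent about the simplest workable method. Below is a minimal sketch under those assumptions, for $y_i \in \{-1, +1\}$ and linearly separable data; the function name, learning rate, and iteration count are illustrative, and note that absorbing the bias regularizes $b$ as well, so the result can differ slightly from the textbook formulation:

```python
import numpy as np

def fit_hard_margin_dual(X, y, lr=1e-3, n_iter=20000):
    """Projected gradient ascent on the hard-margin SVM dual
    W(a) = sum_i a_i - 1/2 sum_ij a_i a_j y_i y_j <x_i, x_j>.
    The bias is absorbed via a constant feature, so the only
    constraint is a_i >= 0, enforced by clipping after each step."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # absorb the bias b
    K = Xb @ Xb.T                               # Gram matrix of augmented data
    a = np.zeros(len(X))
    for _ in range(n_iter):
        grad = 1.0 - y * (K @ (a * y))          # dW/da_i = 1 - y_i sum_j a_j y_j K_ij
        a = np.maximum(a + lr * grad, 0.0)      # ascent step, then project onto a >= 0
    w_aug = (a * y) @ Xb                        # w = sum_i a_i y_i x_i
    return w_aug[:-1], w_aug[-1]                # split back into (w, b)

# Toy usage with two separable points:
X = np.array([[2.0, 2.0], [-2.0, -2.0]])
y = np.array([1.0, -1.0])
w, b = fit_hard_margin_dual(X, y)
print(np.sign(X @ w + b))                       # expect [ 1. -1.]
```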