I am currently studying SVMs and came across the following problem.
The dual optimization problem is as follows:
\begin{align*} &\max_\alpha~~~~~ W(\alpha) = \sum_{i=1}^{n} \alpha_i -\frac{1}{2}\sum_{i,j=1}^{n} y_i y_j \alpha_i \alpha_j \langle x_i, x_j \rangle \\ &\textrm{s.t.}~~~~~\alpha_i \geq 0 ~~~~\textrm{for } i=1,\cdots,n \\ & ~~~~~\sum_{i=1}^{n} \alpha_i y_i = 0 \end{align*}
This problem has constraints, so I cannot apply gradient descent directly.
There are other optimization methods such as Newton's method or SMO, but I could not understand them at all, so I want to use gradient descent.
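To be concrete, the gradient of $W$ (ignoring the constraints) that I would use is, if I computed it correctly, \begin{align*} \frac{\partial W}{\partial \alpha_k} = 1 - y_k \sum_{j=1}^{n} y_j \alpha_j \langle x_k, x_j \rangle, \end{align*} so computing an ascent direction is not the problem; the constraints are.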
Then I came up with an idea: move the equality constraint into the objective as a penalty. The penalized objective is a lower bound of the original one (they coincide whenever the constraint $\sum_{i=1}^{n} \alpha_i y_i = 0$ holds): \begin{align*} &\max_\alpha~~~~~ W(\alpha) = \sum_{i=1}^{n} \alpha_i -\frac{1}{2}\sum_{i,j=1}^{n} y_i y_j \alpha_i \alpha_j \langle x_i, x_j \rangle - \lambda\left\|\sum_{i=1}^{n} \alpha_i y_i \right\|_1 \\ &\textrm{s.t.}~~~~~\alpha_i \geq 0 ~~~~\textrm{for } i=1,\cdots,n \end{align*}
Can I apply gradient descent (ascent) to this problem? I could not make any more progress than this.
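Here is a rough NumPy sketch of what I have in mind (this is just my own attempt; the values of `lam`, `lr` and `n_iters` are placeholders I made up, and the `np.sign` term is the subgradient I would use for the absolute-value penalty, with projection onto $\alpha_i \geq 0$ after each step):

```python
import numpy as np

def penalized_dual_ascent(X, y, lam=10.0, lr=1e-3, n_iters=5000):
    """Projected (sub)gradient ascent on the penalized SVM dual.
    lam, lr and n_iters are arbitrary placeholder values, not tuned."""
    n = X.shape[0]
    K = X @ X.T                          # linear kernel <x_i, x_j>
    Q = (y[:, None] * y[None, :]) * K    # Q_ij = y_i y_j <x_i, x_j>
    alpha = np.zeros(n)

    for _ in range(n_iters):
        # gradient of the smooth part: 1 - Q @ alpha
        # subgradient of -lam * |sum_i alpha_i y_i|: -lam * sign(y @ alpha) * y
        grad = 1.0 - Q @ alpha - lam * np.sign(y @ alpha) * y
        alpha += lr * grad               # ascent step, since we maximize
        alpha = np.maximum(alpha, 0.0)   # project back onto alpha_i >= 0

    return alpha
```

From $\alpha$ I would then recover $w = \sum_{i} \alpha_i y_i x_i$, but I am not sure whether the penalty actually drives $\sum_{i} \alpha_i y_i$ to exactly zero unless $\lambda$ is chosen large enough.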
Could anyone help me?