Relevant question: Ridge regression formulation as constrained versus penalized: How are they equivalent?
I've got an assignment to show that in ridge regression the $L_2$ norm of the coefficient vector, $||w||_2$, is bounded by $O(\lambda ^ {-1})$. We haven't seen Lagrange multipliers or KKT conditions, so I'm assuming the intended argument is something simpler (basic calculus and linear algebra). We can treat $X, Y$ as constants.
I've shown this bound, which isn't enough:
Setting $w = 0$, the ridge regression objective $RSS+\lambda ||w||^2$ evaluates to $\sum_{i=1}^n y(x_i)^2 =: c$.
So the minimizing $w$ satisfies $\sum_{i=1}^{n}(f_{w}(x_{i})-y(x_{i}))^{2}+\lambda||w||^{2}\le c$, hence $\lambda||w||^2 \le c$, and therefore $||w|| = O\!\left(\frac{1}{\sqrt{\lambda}}\right)$.
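In case it helps, here's a quick numerical sanity check I ran (synthetic $X$, $y$ and the $\lambda$ grid are my own choices, purely illustrative): the closed-form ridge minimizer indeed satisfies $||w|| \le \sqrt{c/\lambda}$ as derived above, and $\lambda||w||$ appears to stay bounded as $\lambda$ grows, which is at least consistent with the $O(\lambda^{-1})$ bound I'm asked to prove.

```python
import numpy as np

# Synthetic data, purely for illustration of the bound above.
rng = np.random.default_rng(0)
n, d = 50, 5
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

c = np.sum(y**2)  # objective value at w = 0

for lam in [0.1, 1.0, 10.0, 100.0, 1000.0]:
    # Closed-form ridge minimizer: w = (X^T X + lam*I)^{-1} X^T y
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    norm_w = np.linalg.norm(w)
    print(f"lambda={lam:7.1f}   ||w||={norm_w:.5f}   "
          f"sqrt(c/lambda)={np.sqrt(c / lam):.5f}   "
          f"lambda*||w||={lam * norm_w:.5f}")
```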