I have seen many similar questions and I understand that $\lambda$ is some kind of tuning parameter that decides how much we want to penalize the flexibility of our model. In other words, $\lambda$ helps us decide how badly we want a perfect fit and how much bias we are willing to accept to get a nice-looking function, right?
But I'd like to understand the behavior of our model as we increase the tuning parameter. For $\lambda = 0$ all we care about is the fit, and we get the least squares fit. As $\lambda$ increases, the model becomes less and less "spiky": it no longer shoots up to high values only to come back down again soon. It becomes more and more smooth.
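To see this numerically, here is a minimal sketch (assuming scikit-learn is available; the noisy-sine data and the degree-15 polynomial basis are made up for illustration, and scikit-learn's `alpha` plays the role of $\lambda$). It measures the "wiggliness" of each fitted curve as the sum of squared second differences, which should fall as $\lambda$ grows:

```python
# Sketch: ridge fits on a flexible polynomial basis grow smoother as lambda increases.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50).reshape(-1, 1)
y = np.sin(4 * np.pi * x).ravel() + rng.normal(0, 0.3, 50)  # made-up noisy data

grid = np.linspace(0, 1, 200).reshape(-1, 1)
for lam in [1e-10, 0.1, 10.0, 1e6]:  # 1e-10 stands in for lambda = 0 (numerical safety)
    model = make_pipeline(PolynomialFeatures(degree=15), Ridge(alpha=lam))
    model.fit(x, y)
    pred = model.predict(grid)
    # "wiggliness": sum of squared second differences of the fitted curve
    wiggle = np.sum(np.diff(pred, 2) ** 2)
    print(f"lambda={lam:>9}: wiggliness={wiggle:.5f}, range of fit={np.ptp(pred):.3f}")
```

At the near-zero end the fit chases the noise; at large $\lambda$ both the wiggliness and the overall range of the fitted curve collapse.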
And now finally, when $\lambda$ gets arbitrarily large, $\lambda \rightarrow +\infty$, the penalty is very large and the coefficients will approach zero. Does that mean (from the graphical point of view) that as $\lambda$ grows the fit becomes smoother and smoother until it is "almost" flat, and finally the horizontal line $y=0$? Or am I missing something?
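As a sanity check on that limit, here is a small NumPy sketch (synthetic data with made-up true coefficients, and no intercept term, so everything is penalized) that evaluates the standard closed-form ridge solution $\hat\beta = (X^\top X + \lambda I)^{-1} X^\top y$ for growing $\lambda$:

```python
# Sketch: the ridge coefficient vector shrinks toward zero as lambda -> infinity,
# so with no (unpenalized) intercept the fitted function flattens toward y = 0.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 100)  # made-up true betas

for lam in [0, 1, 100, 1e4, 1e8]:
    # closed-form ridge solution: (X^T X + lambda I)^{-1} X^T y
    beta = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
    print(f"lambda={lam:>10}: ||beta|| = {np.linalg.norm(beta):.6f}")
```

The coefficient norm drops monotonically toward zero, consistent with the flat-line limit described above (note this setup penalizes every coefficient; if an intercept were left unpenalized, it would survive the limit).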