Ridge regression penalizes "big" values of the coefficients $\beta$, and the degree of this penalization is proportional to $\lambda$.
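For concreteness, I'll assume the equation in your question is the standard ridge objective:

$$\hat \beta^{ridge} = \underset{\beta}{\operatorname{arg\,min}} \;\; \|y - X\beta\|_2^2 + \lambda \|\beta\|_2^2.$$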
On the one hand, you want the residuals to be small, i.e., to minimize $\|y - X\beta\|_2^2$ (the first term of the equation). Taken alone, this criterion is minimized by the least-squares estimator $\hat \beta^{OLS}$.
On the other hand, you want the squared $\ell_2$-norm of the coefficients, $\lambda \|\beta\|_2^2$, to be small. Taken alone, this term is minimized by $\beta = 0$, as you might guess.
$\lambda$ comes as a compromise between the two. If $\lambda$ is zero, you recover $\hat \beta^{OLS}$, so you might be overfitting (or not be able to compute a unique solution at all: in a high-dimensional setting with more predictors than observations, $X^\top X$ is singular). However, the bigger $\lambda$ is, the more importance you place on the $\beta$'s being close to 0, as opposed to the $\beta$'s providing a good fit. This means that the bigger the $\lambda$, the worse your fit on the data. Somewhere in between there should therefore be a $\lambda$ that avoids overfitting without leading to a bad fit. You'll learn techniques (cross-validation being the standard one) to choose such a $\lambda$, but there is no definitive answer.
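To make this trade-off concrete, here is a minimal sketch with scikit-learn (which calls $\lambda$ `alpha`) on synthetic data; the data and the grid of values are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(scale=0.5, size=n)

# As lambda grows, the coefficients shrink toward 0
# and the training fit (R^2) deteriorates.
for alpha in [0.01, 1.0, 100.0, 10000.0]:
    model = Ridge(alpha=alpha).fit(X, y)
    print(f"lambda={alpha:8.2f}  ||beta||_2={np.linalg.norm(model.coef_):.3f}  "
          f"train R^2={model.score(X, y):.3f}")
```

(Incidentally, scikit-learn's `RidgeCV` automates exactly this kind of $\lambda$ selection by cross-validation.)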
It is worth noting that Ridge regression will never "eliminate" parameters as your question suggests: it shrinks coefficients toward zero, but never sets them exactly to zero. If you want to do model selection, I would suggest looking into the Lasso instead, for a start; its $\ell_1$ penalty does set some coefficients exactly to zero.
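If it helps, here is a small illustration of the difference (again scikit-learn on made-up data; the `alpha` values are arbitrary):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
# Only the first three predictors actually matter.
y = X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=100)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)

# Ridge shrinks all 10 coefficients but leaves them nonzero;
# the Lasso typically zeroes out the 7 irrelevant ones exactly.
print("ridge nonzero coefficients:", np.count_nonzero(ridge.coef_))
print("lasso nonzero coefficients:", np.count_nonzero(lasso.coef_))
```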