Your confusion stems from using the symbol $\lambda$ in two different ways.
In the image that you shared, the symbol $\lambda$ is the bound in a constraint on the sum of squares of the coefficients. The optimization program is
$$
\min_\beta \text{[Some loss function of $\beta$]} \\
\text{s.t.}\quad \sum_i \beta_i^2 \le \lambda
$$
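If it helps to see the constrained form concretely, here is a minimal sketch that solves it directly with `scipy.optimize.minimize` (the data, the squared-error loss, and the $\lambda$ values are made up for illustration): as $\lambda$ grows, the constraint loosens and the fitted $\|\beta\|_2$ grows.

```python
# Minimal sketch of the constrained program: minimize squared error
# subject to sum(beta**2) <= lam. Illustrative data only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, -2.0, 1.5, 0.0, 4.0]) + rng.normal(scale=0.5, size=100)

def loss(beta):
    return np.sum((y - X @ beta) ** 2)

for lam in [0.1, 1.0, 100.0]:
    constraint = {"type": "ineq", "fun": lambda b, lam=lam: lam - np.sum(b ** 2)}
    res = minimize(loss, x0=np.zeros(5), method="SLSQP", constraints=[constraint])
    print(f"lambda = {lam:6.1f}   ||beta||_2 = {np.linalg.norm(res.x):.3f}")
```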
Equivalently, you can represent a constraint on the sum of squares of the coefficients as an unconstrained optimization problem with a penalty.
$$
\min_\beta \text{[Some loss function of $\beta$]} + \gamma\|\beta\|_2^2
$$
The important detail is that the unconstrained problem doesn't use $\lambda$; it uses another symbol which I've chosen to be $\gamma$. (The loss could be mean-square error, or binomial cross-entropy, or any other expression you seek to minimize in $\beta$.)
When $\gamma$ is small, $\|\beta\|_2$ will be larger than when $\gamma$ is large: the penalty shrinks $\|\beta\|_2$ as $\gamma$ grows. This is the opposite of the relation in the constrained problem with $\lambda$: when $\lambda$ is large the constraint is loose and $\|\beta\|_2$ can be large, and when $\lambda$ is small the constraint is tight and $\|\beta\|_2$ must be small.
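You can see the penalized-form behaviour numerically. Here is a minimal sketch using scikit-learn's `Ridge`, whose `alpha` argument plays the role of $\gamma$ here (the data are again made up for illustration); the printed coefficient norm shrinks as the penalty grows.

```python
# Minimal sketch: the L2 norm of the fitted coefficients shrinks as the
# penalty weight grows (Ridge's `alpha` plays the role of gamma here).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, -2.0, 1.5, 0.0, 4.0]) + rng.normal(scale=0.5, size=100)

for gamma in [0.01, 1.0, 100.0]:
    beta = Ridge(alpha=gamma, fit_intercept=False).fit(X, y).coef_
    print(f"gamma = {gamma:7.2f}   ||beta||_2 = {np.linalg.norm(beta):.3f}")
```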
The equivalence between these two expressions is established in Showing the Equivalence Between the $L_2$ Norm Regularized Regression and $L_2$ Norm Constrained Regression Using KKT.
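The idea, sketched briefly (the linked post gives the full argument): introduce a KKT multiplier $\gamma \ge 0$ for the constraint and form the Lagrangian
$$
\mathcal{L}(\beta,\gamma)=\text{[Some loss function of $\beta$]}+\gamma\left(\|\beta\|_2^2-\lambda\right).
$$
For the multiplier value $\gamma^*$ satisfying the KKT conditions, the term $-\gamma^*\lambda$ is constant in $\beta$, so minimizing the Lagrangian over $\beta$ is exactly the penalized problem above with penalty $\gamma^*$. Which $\gamma^*$ corresponds to a given $\lambda$ depends on the data, which is also why the two hyperparameters move in opposite directions.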