Consider the linear equation
$$
\mathbf X \beta = \mathbf y\,,
$$
and the SVD of $\mathbf X$,
$$
\mathbf X = \mathbf U \,\mathbf S \,\mathbf V^T,
$$
where $\mathbf S = \text{diag}(s_i)$ is the diagonal matrix of singular values.
Ordinary least squares determines the parameter vector $\beta$ as
$$
\beta_{OLS} = \mathbf V \,\mathbf S^{-1} \,\mathbf U^T \mathbf y\,.
$$
However, this approach fails as soon as one singular value is zero (the inverse then does not exist). Moreover, even if no $s_i$ is exactly zero, numerically small singular values render the matrix ill-conditioned and lead to a solution that is highly susceptible to errors.
Ridge regression and PCA are two methods to avoid these problems. Ridge regression replaces $\mathbf S^{-1}$ in the above equation for $\beta$ by
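As a concrete illustration, here is a minimal NumPy sketch of the SVD-based least-squares solution; the design matrix `X`, targets `y`, and all variable names are made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))   # hypothetical design matrix
y = rng.normal(size=50)        # hypothetical targets

# Thin SVD: X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# beta_OLS = V S^{-1} U^T y; this breaks down when some s_i is (near) zero
beta_ols = Vt.T @ ((U.T @ y) / s)
```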
\begin{align}
\mathbf S^{-1}_{\text{ridge}} &= \text{diag}\bigg(\frac{s_i}{s^2_i+\alpha}\bigg),\\
\beta_{\text{ridge}} &= \mathbf V \,\mathbf S_{\text{ridge}}^{-1} \,\mathbf U^T \mathbf y\,.
\end{align}
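A short sketch of the ridge-filtered solution, reusing `U`, `s`, `Vt`, and `y` from the snippet above; the value of `alpha` is an arbitrary assumption for illustration.

```python
alpha = 1.0  # assumed regularization strength

# Ridge filter: 1/s_i is replaced by s_i / (s_i^2 + alpha)
beta_ridge = Vt.T @ ((s / (s**2 + alpha)) * (U.T @ y))
```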
PCA replaces $\mathbf S^{-1}$ by
\begin{align}
\mathbf S^{-1}_{\text{PCA}} &= \text{diag}\bigg(\frac{1}{s_i} \, \theta(s_i-\gamma)\bigg)\,,\\
\beta_{\text{PCA}} &= \mathbf V \,\mathbf S_{\text{PCA}}^{-1} \,\mathbf U^T \mathbf y\,,
\end{align}
where $\theta$ is the step function and $\gamma$ is a threshold parameter.
Both methods thus weaken the impact of the subspaces corresponding to small singular values. PCA does so in a hard way, cutting those directions off entirely, while ridge regression suppresses them smoothly.
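The PCA-style hard truncation can be sketched the same way, again reusing the variables from above; the threshold `gamma` is an assumed value.

```python
gamma = 1e-2  # assumed singular-value threshold

# Hard truncation: keep 1/s_i only where s_i exceeds gamma
keep = s > gamma
filt_pca = np.zeros_like(s)
filt_pca[keep] = 1.0 / s[keep]
beta_pca = Vt.T @ (filt_pca * (U.T @ y))
```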
More abstractly, feel free to come up with your own regularization scheme
$$
\mathbf S^{-1}_{\text{myReg}} = \text{diag}\big(R(s_i)\big)\,,
$$
where $R(x)$ is a function that approaches zero as $x\rightarrow 0$ and approaches $x^{-1}$ for large $x$. But remember, there's no free lunch.
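A generic sketch of such a scheme, with the filter passed in as a callable; the helper name `solve_regularized` and the example filter are invented for illustration only.

```python
def solve_regularized(X, y, R):
    """Solve X beta ~ y with a user-supplied spectral filter R(s_i)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt.T @ (R(s) * (U.T @ y))

# Example filter: vanishes at s = 0 and behaves like 1/s for large s
# (this particular choice simply reproduces the ridge filter)
beta_my = solve_regularized(X, y, lambda s: s / (s**2 + 0.1))
```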