Here are some sections from my book, The Bayesian Choice:
Proposition 2.4.22 If the prior distribution $\pi$ has a density that is strictly positive on $\Theta$, if the Bayes risk is finite, and if the risk function, $R(\theta,\delta)$, is continuous in $\theta$ for every $\delta$, then the Bayes estimator $\delta^\pi$ is admissible.
Proposition 2.4.23 If the Bayes estimator associated with a prior $\pi$ is unique, then it is admissible.
Notice that Proposition 2.4.22 includes the assumption that the Bayes risk is finite; otherwise, every estimator is, in a way, a Bayes estimator. On the other hand, some admissibility results can also be established for improper priors. This is why we prefer to reserve the name generalized Bayes estimators for the estimators associated with an infinite Bayes risk, rather than for those corresponding to an improper prior. This choice implies that the Bayes estimators of different quantities associated with the same prior distribution can be simultaneously regular Bayes estimators and generalized Bayes estimators, depending on what they estimate. It also guarantees that regular Bayes estimators are always admissible, as shown by the following result.
Proposition 2.4.25 If a Bayes estimator, $\delta^\pi$,
associated with a (proper or improper) prior $\pi$ and a strictly
convex loss function, is such that the Bayes
risk, $$ r(\pi) = \int_{\Theta}
R(\theta,\delta^\pi) \pi(\theta) \,d\theta, $$ is finite, then $\delta^\pi$
is admissible.
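The proof of Proposition 2.4.25 follows a standard argument, sketched here under the assumptions of the proposition. Suppose an estimator $\delta'$ dominates $\delta^\pi$, i.e., $R(\theta,\delta') \le R(\theta,\delta^\pi)$ for every $\theta$, with strict inequality for some $\theta$. Integrating against $\pi$ gives $$ r(\pi) \le \int_{\Theta} R(\theta,\delta')\,\pi(\theta)\,d\theta \le \int_{\Theta} R(\theta,\delta^\pi)\,\pi(\theta)\,d\theta = r(\pi) < \infty, $$ so $\delta'$ attains the Bayes risk as well. Since the loss is strictly convex, the minimizer of the posterior expected loss is essentially unique, forcing $\delta' = \delta^\pi$ almost everywhere; hence $\delta'$ cannot improve on $\delta^\pi$, a contradiction.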
Example 2.4.28 Consider $x\sim \mathcal{N}_p(\theta,I_p)$.
Suppose the parameter of interest is $\|\theta\|^2$ and the prior distribution
is the Lebesgue measure on $\mathbb{R}^p$. The posterior distribution of $\theta$ given $x$ is then $\mathcal{N}_p(x,I_p)$, so
$\mathbb{E}^\pi[\|\theta\|^2|x] = \mathbb{E}[\|y\|^2]$, with $y\sim \mathcal{N}_p(x,I_p)$, and the Bayes estimator under quadratic loss is
$$
\delta^\pi(x) = \|x\|^2 +p.
$$
This generalized Bayes estimator is not admissible: it is dominated by $\delta_0(x) = \|x\|^2-p$. Indeed, since $\|x\|^2+p-\|\theta\|^2$ has expectation $2p$ while $\|x\|^2-p-\|\theta\|^2$ has expectation $0$, the quadratic risks are $$R(\theta,\delta^{\pi}) = {\mathrm{var}}(\|x\|^2)+4p^2, \qquad R(\theta,\delta_0) = {\mathrm{var}}(\|x\|^2),$$ so $\delta_0$ improves on $\delta^\pi$ by $4p^2$ uniformly in $\theta$. Moreover, since ${\mathrm{var}}(\|x\|^2) = 2p+4\|\theta\|^2$, the Bayes risk of $\delta^\pi$ under the Lebesgue prior is infinite.
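The domination in this example is easy to check numerically. The sketch below (illustrative, not from the book) estimates the risks of $\delta^\pi(x)=\|x\|^2+p$ and $\delta_0(x)=\|x\|^2-p$ by Monte Carlo at one arbitrary parameter value, and recovers the constant gap of $4p^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 200_000
theta = np.ones(p)                  # an arbitrary fixed parameter value
lam = float(theta @ theta)          # ||theta||^2, the quantity being estimated

x = theta + rng.standard_normal((n, p))   # n replicates of x ~ N_p(theta, I_p)
s = np.sum(x**2, axis=1)                  # ||x||^2 for each replicate

risk_pi = np.mean((s + p - lam) ** 2)   # estimated R(theta, delta_pi)
risk_0 = np.mean((s - p - lam) ** 2)    # estimated R(theta, delta_0)

# theory: R(theta, delta_pi) - R(theta, delta_0) = 4 p^2, uniformly in theta
print(risk_pi - risk_0)
```

Since the gap $4p^2$ does not depend on $\theta$, running the check at other parameter values gives the same difference, up to Monte Carlo error.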
However, if one modifies the quadratic loss into a weighted version, $$L(\theta,\delta)=\dfrac{(\delta-\|\theta\|^2)^2}{2\|\theta\|^2+1},$$ the Bayes risk becomes finite and the resulting Bayes estimator is admissible.
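For completeness, a standard fact not spelled out in the excerpt: under a weighted quadratic loss $L(\theta,\delta)=w(\theta)\,(\delta-h(\theta))^2$, differentiating the posterior expected loss in $\delta$ shows that the Bayes estimator is the $w$-reweighted posterior mean, $$ \delta^\pi(x) = \frac{\mathbb{E}^\pi[w(\theta)\,h(\theta)\,|\,x]}{\mathbb{E}^\pi[w(\theta)\,|\,x]}. $$ With $h(\theta)=\|\theta\|^2$ and $w(\theta)=1/(2\|\theta\|^2+1)$ as above, this reads $$ \delta^\pi(x) = \frac{\mathbb{E}^\pi\!\left[\|\theta\|^2/(2\|\theta\|^2+1)\,|\,x\right]}{\mathbb{E}^\pi\!\left[1/(2\|\theta\|^2+1)\,|\,x\right]}. $$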