This is a homework question, so I would appreciate hints. I believe I have the first part correct, but I fail to see how the second part is different.
Assume squared error loss, $L(\theta ,a)=(\theta - a)^2$.
- When $X|\theta \sim N(\theta , \sigma^2)$, show $\delta (X) = c$, where $c$ is a constant, is an admissible estimator.
Since $\delta$ does not depend on $X$, the risk is
$$ R(\theta,\delta)=E_{\theta}^{X}(\theta-\delta)^2=(\theta-c)^2 $$
which is zero for $\theta=c$. Suppose there exists an estimator $\eta$ such that $R(\theta,\eta)\leq R(\theta,\delta)$ for all $\theta$, and $R(\theta,\eta)<R(\theta,\delta)$ for some $\theta$. Then
$$ R(c,\eta)=E_{c}^{X}(c-\eta)^2=0 $$
so $\eta=c$ with $P_c$-probability 1. Since the $N(\theta,\sigma^2)$ distributions are mutually absolutely continuous, $\eta=c$ a.s. under every $P_\theta$ as well. Then any estimator that dominates $\delta$ must equal $c$ a.s. and is therefore R-equivalent to $\delta$.
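As a sanity check on the risk computation, here is a small Monte Carlo sketch (the values $c=1$, $\sigma=2$, and the grid of $\theta$ values are made up purely for illustration). It also estimates the risk of $\eta(X)=X$, which is $\sigma^2$ for every $\theta$, so you can see the two risk curves cross and neither estimator dominates the other:

```python
import numpy as np

rng = np.random.default_rng(0)
c, sigma, n_sim = 1.0, 2.0, 200_000  # made-up values for illustration

for theta in [0.0, 1.0, 2.5, 5.0]:
    x = rng.normal(theta, sigma, n_sim)        # X | theta ~ N(theta, sigma^2)
    delta = np.full(n_sim, c)                  # delta(X) = c ignores the data
    risk_const = np.mean((theta - delta) ** 2) # equals (theta - c)^2 exactly
    risk_x = np.mean((theta - x) ** 2)         # should be close to sigma^2
    print(f"theta={theta:4.1f}  R(theta,c)={risk_const:7.3f}  "
          f"R(theta,X)={risk_x:7.3f}")
```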
- When $X|\theta \sim U(0,\theta)$, show $\delta (X) = c$, where $c$ is a constant, is an inadmissible estimator.
I feel like the same proof as before holds. The only difference between the two parts is that the support of the $U(0,\theta)$ distribution depends on $\theta$, but I don't see how that is important here.
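To make that concrete, running the same kind of simulation with $U(0,\theta)$ data (again, $c=1$ and the $\theta$ grid are arbitrary choices) gives exactly the same risk for $\delta$, since a constant estimator never looks at the data:

```python
import numpy as np

rng = np.random.default_rng(0)
c, n_sim = 1.0, 200_000  # made-up values for illustration

for theta in [0.5, 1.0, 2.5, 5.0]:
    x = rng.uniform(0, theta, n_sim)   # X | theta ~ U(0, theta)
    delta = np.full(n_sim, c)          # delta(X) = c still ignores the data
    risk = np.mean((theta - delta) ** 2)
    print(f"theta={theta:3.1f}  R(theta,c)={risk:7.3f}  "
          f"exact={(theta - c) ** 2:7.3f}")
```

So whatever changes between the two parts, it can't be the risk function of $\delta$ itself, which makes me suspect the difference has to show up somewhere in the dominance argument.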