I'm attending a course in mathematical statistics, and the lecturer seems to tacitly assume the following: given estimators $T_1,T_2 : \Omega \to \Lambda$ of a parameter $g : \Theta \to \Lambda$, a loss function $L : \Lambda^2 \to [0,\infty)$ and its associated risk $R$ (i.e. $R(T,\theta) = \mathbb E_{P_\theta}\!\left[L(T, g(\theta))\right]$), if $T_1$ and $T_2$ are equivalent in terms of risk, then $T_1 = T_2$ $P_\theta$-a.s. for all $\theta \in \Theta$.
By "equivalent" I mean $T_1 \cong T_2$ where:
$$T_1 \cong T_2 \;\Leftrightarrow\; R(T_1,\cdot) = R(T_2,\cdot) \;\Leftrightarrow\; \forall \theta \in \Theta : R(T_1,\theta) = R(T_2,\theta).$$
This does not seem to be true in general, though. For example, consider the squared-error loss $L(x,y) = (x-y)^2$ and let $\Theta = \{0\}$, $\Lambda = \mathbb R$, $P:=P_0$ and $g(x) = x$. Then surely we can find $\Omega$ with an appropriate $\sigma$-algebra and estimators $T_1,T_2$ that are not equal $P$-a.s. but have the same distribution under $P$. In that case $$R(T_1,\theta)= \int f_\theta\,d(P\circ T^{-1}_1) = \int f_\theta\,d(P\circ T^{-1}_2) = R(T_2,\theta)$$ for all $\theta \in \Theta$, where $f_\theta(x) = (x-\theta)^2$.
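To make this concrete, here is one minimal instance of the construction I have in mind (the particular choice of $\Omega$, $P$, $T_1$, $T_2$ is just for illustration):
$$\Omega = \{-1,1\}, \qquad P(\{-1\}) = P(\{1\}) = \tfrac12, \qquad T_1(\omega) = \omega, \qquad T_2(\omega) = -\omega.$$
Here $T_1 \neq T_2$ everywhere on $\Omega$, yet $P\circ T_1^{-1} = P\circ T_2^{-1}$ (both uniform on $\{-1,1\}$), so $R(T_1,0) = R(T_2,0) = 1$ and hence $T_1 \cong T_2$.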
My question is:
How contrived is this example? Are there usually good reasons why $T_1 \cong T_2$ holds iff $T_1$ and $T_2$ are equal $P_\theta$-a.s. for all $\theta\in \Theta$, or does the phenomenon of risk-equivalent estimators that differ on a set of positive measure typically persist?
If the phenomenon does persist, I also wonder which properties are necessarily shared by estimators $T_1,T_2$ such that $T_1\cong T_2$.