For the classical simple linear regression model I have derived a hypothesis test for $H_0\colon \left\{\frac{y^*(x^*)-x^*}{\sigma}>1 \right\}$, where $x^*$ is a given value of the covariate $x$ and $y^*(x^*)=a + b x^*$ is the theoretical mean of the response $y$ at $x^*$.
I use the test statistic $t=\frac{\hat{y}(x^*)-x^*}{\hat\sigma}$. Something very nice happens: at the boundary $\left\{\frac{y^*(x^*)-x^*}{\sigma}=1 \right\}$ of $H_0$, the random variable $t$ has only one possible distribution. Hence, for a given $\alpha \in ]0,1[$, I can find the critical value $C$ such that the rejection rule $t<C$ has type 1 error exactly $\alpha$.
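For concreteness, here is a minimal sketch of how such a critical value can be computed (assuming Gaussian errors, a fixed design, and the usual unbiased $\hat\sigma^2$ with $n-2$ degrees of freedom; the data below are hypothetical). At the boundary, $t/\sqrt{h}$ follows a noncentral $t$ distribution with $n-2$ degrees of freedom and noncentrality $1/\sqrt{h}$, where $h = 1/n + (x^*-\bar x)^2/\sum_i (x_i-\bar x)^2$:

```python
import numpy as np
from scipy import stats

# Hypothetical design and inputs
x = np.linspace(1.0, 8.0, 8)   # covariate values
x_star = 4.5                   # value at which the mean response is tested
alpha = 0.05
n = len(x)

# Constant h such that Var(y_hat(x_star)) = sigma^2 * h
h = 1.0 / n + (x_star - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)

# At the boundary (y*(x_star) - x_star) / sigma = 1, we have
# t / sqrt(h) ~ noncentral t with n - 2 df and noncentrality 1 / sqrt(h),
# so the exact critical value for the rejection rule t < C is:
C = np.sqrt(h) * stats.nct.ppf(alpha, df=n - 2, nc=1.0 / np.sqrt(h))
print(C)
```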
Very nice, but now I want to perform the analogous hypothesis test for Deming regression, and things are not so nice: the test statistic $t$ has several possible asymptotic distributions at the boundary of $H_0$. Hence I have derived an estimated critical value $\hat C$ and I use the rejection rule $t<\hat{C}$.
Simulations show that the type 1 error is approximately well controlled: for a desired type 1 error $\alpha$, the actual type 1 error is close to $\alpha$. But I wonder whether there are pitfalls with my procedure. Do you know other examples where one similarly uses an estimated critical value? And are there known pitfalls in those examples? I think it is not rigorously correct to use a test statistic whose distribution is not uniquely determined at the boundary of $H_0$, but I don't know how to proceed otherwise.
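For reference, my simulation check has roughly the following structure. The sketch below is written for the classical case, where the critical value is exact and everything is explicit (the parameter values are hypothetical, chosen so that $(y^*(x^*)-x^*)/\sigma = 1$ exactly); for the Deming case I substitute the Deming fit and the estimated critical value $\hat C$, which I have not reproduced here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Boundary configuration (hypothetical values): a + b*x_star - x_star = sigma,
# i.e. (y*(x_star) - x_star) / sigma = 1 exactly.
a, b, sigma, x_star = 0.5, 1.0, 0.5, 4.0
alpha, n, n_sim = 0.05, 30, 20_000
x = np.linspace(1.0, 8.0, n)   # fixed design

# Exact critical value for the classical test (same formula as above)
h = 1.0 / n + (x_star - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)
C = np.sqrt(h) * stats.nct.ppf(alpha, df=n - 2, nc=1.0 / np.sqrt(h))

rejections = 0
for _ in range(n_sim):
    y = a + b * x + rng.normal(0.0, sigma, size=n)
    # OLS fit; in the Deming version this is replaced by the Deming fit,
    # and C by the estimated critical value C_hat.
    b_hat, a_hat = np.polyfit(x, y, 1)
    resid = y - (a_hat + b_hat * x)
    sigma_hat = np.sqrt(np.sum(resid ** 2) / (n - 2))
    t = (a_hat + b_hat * x_star - x_star) / sigma_hat
    rejections += (t < C)

# Empirical type 1 error at the boundary; should be close to alpha
print(rejections / n_sim)
```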