A standard hypothesis-testing problem in statistics looks something like the following.
$$ H_0: \theta = \theta_0 $$
$$ H_a: \theta \ne \theta_0 $$
This is a two-sided test.
What if we did the following two tests?
$$ _1H_0: \theta = \theta_0 $$
$$ _1H_a: \theta > \theta_0 $$
$$ _2H_0: \theta = \theta_0 $$
$$ _2H_a: \theta < \theta_0 $$
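To make this concrete, here is a minimal sketch (my own illustration, not part of the original problem) that computes all three p-values for a one-sample t-test of $\theta_0 = 0$ on hypothetical data, using scipy's `alternative` argument:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta_0 = 0.0
x = rng.normal(loc=0.3, scale=1.0, size=30)  # hypothetical data

# Two-sided test of H0: theta = theta_0 vs Ha: theta != theta_0
p_two_sided = stats.ttest_1samp(x, popmean=theta_0, alternative="two-sided").pvalue

# The two one-sided tests described above
p_greater = stats.ttest_1samp(x, popmean=theta_0, alternative="greater").pvalue  # test 1
p_less = stats.ttest_1samp(x, popmean=theta_0, alternative="less").pvalue        # test 2

print(p_two_sided, p_greater, p_less)
```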
The complaint I usually see is that running both tests inflates the overall $\alpha$, since we now face a multiple-testing problem, and that once we correct for the multiple comparisons, the pair of one-sided tests becomes equivalent to the two-sided test.
However, there is more than one way to adjust a p-value, and that equivalence argument seems to me to assume a simple Bonferroni correction.
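As a quick numerical check of that claimed equivalence, here is a sketch assuming a symmetric, continuous test statistic (a hypothetical z-statistic): the Bonferroni-adjusted smaller one-sided p-value, $\min(1, 2\min(p_1, p_2))$, coincides with the usual two-sided p-value.

```python
from scipy import stats

z = 1.7                            # hypothetical observed z-statistic
p_greater = stats.norm.sf(z)       # one-sided test 1: Ha: theta > theta_0
p_less = stats.norm.cdf(z)         # one-sided test 2: Ha: theta < theta_0
p_two_sided = 2 * stats.norm.sf(abs(z))

# Bonferroni correction over the two one-sided tests
p_bonferroni = min(1.0, 2 * min(p_greater, p_less))
print(p_two_sided, p_bonferroni)   # identical, up to floating point
```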
There are better adjustments than Bonferroni.
Could we keep $\alpha$ at the level we want (say $0.05$ or $0.01$) but increase power by using a better correction than Bonferroni?
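To illustrate what such corrections look like in practice, here is a sketch (with hypothetical one-sided p-values) that runs a pair of p-values through a few standard adjustments via `statsmodels.stats.multitest.multipletests`:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical one-sided p-values; for a continuous test statistic the two
# opposite one-sided p-values sum to one.
p_greater, p_less = 0.0446, 0.9554

for method in ("bonferroni", "holm", "hommel"):
    reject, p_adj, _, _ = multipletests([p_greater, p_less], alpha=0.05, method=method)
    print(method, p_adj.round(4), reject)
```

One detail worth noting: because the two opposite one-sided p-values sum to one for a continuous test statistic, the larger of the pair is never below $0.5$.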