For concreteness, imagine a one-sample test of means (large sample, on a population whose mean and variance exist, to keep the argument a little simpler).
Let the difference between the true mean and the hypothesized mean be some nonzero $\delta$. Then the sampling distribution of the sample mean minus the hypothesized mean will itself have mean $\delta$ and a variance that shrinks proportionally to $1/n$.
So as $n$ becomes sufficiently large, the probability that the test statistic falls outside the rejection region (i.e., that you fail to reject) shrinks toward zero, and the power climbs toward 1.
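You can see this directly from the power function of a two-sided z-test. A minimal sketch, assuming a true shift of $\delta = 0.5$, $\sigma = 1$, and a 5% level (all illustrative choices, not from the argument above):

```python
from math import erf, sqrt

def phi(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def z_test_power(delta, sigma, n, z_crit=1.959963984540054):
    # P(reject H0) for a two-sided z-test at the 5% level
    # when the true mean differs from the hypothesized mean by delta.
    shift = delta * sqrt(n) / sigma  # mean of the test statistic under the truth
    return phi(-z_crit + shift) + phi(-z_crit - shift)

for n in (10, 50, 200, 1000):
    print(n, round(z_test_power(0.5, 1.0, n), 4))
```

Running this shows the rejection probability increasing monotonically with $n$ and approaching 1; with $\delta = 0$ the same formula returns the nominal 5% level, as it should.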
It might help, in fact, to think of it in terms of a test based on a confidence interval. The width of the confidence interval for the population mean shrinks like $\frac{1}{\sqrt n}$. As $n$ becomes sufficiently large, the typical CI pulls closer and closer to the population mean (it's still a random interval, of course), but $\delta$ remains constant.
Eventually, the half-width of the confidence interval (the "margin of error") is typically much smaller than $\delta$, making the hypothesized mean 'far' from the actual population mean (more and more half-widths of a typical CI away), so the rejection probability approaches 1.
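A quick simulation of the CI version of the argument, again with illustrative values ($\delta = 0.5$, $\sigma = 1$, 95% intervals): draw a sample, form the interval, and check how often it excludes the hypothesized mean.

```python
import random
from math import sqrt

random.seed(1)
TRUE_MEAN, HYP_MEAN, SIGMA = 0.5, 0.0, 1.0  # so delta = 0.5 (assumed values)
Z = 1.959963984540054  # two-sided 5% critical value

def ci_excludes_null(n):
    # One experiment: sample of size n, 95% z-interval for the mean,
    # reject iff the interval misses the hypothesized mean.
    xbar = sum(random.gauss(TRUE_MEAN, SIGMA) for _ in range(n)) / n
    half_width = Z * SIGMA / sqrt(n)
    return abs(xbar - HYP_MEAN) > half_width

for n in (10, 100, 1000):
    reject_rate = sum(ci_excludes_null(n) for _ in range(2000)) / 2000
    print(n, reject_rate)
```

The half-width here is $1.96/\sqrt n$, which drops below $\delta = 0.5$ around $n \approx 16$ and keeps shrinking, so the rejection rate rises toward 1.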
You can construct similar arguments for almost any hypothesis test of a point null, as long as a few basic conditions are satisfied (if the test isn't consistent, for example, the argument fails).