From an intuitive perspective, the more we repeat an experiment (i.e., as the sample size grows), the more the law of large numbers kicks in and the sample mean converges to the true mean. So a given deviation from this "expected" mean is stronger evidence against the null in a large sample than in a small one. An excellent coin-toss example is given in this answer: essentially, 7/10 heads is far more plausible than 700/1000 heads for a fair coin.
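The coin example can be made concrete with an exact binomial test. A minimal sketch in plain Python (the helper name `two_sided_p_fair_coin` is mine, not from the linked answer; doubling the tail probability is exact here because the null $p = 0.5$ is symmetric):

```python
from math import comb

def two_sided_p_fair_coin(heads, n):
    # Exact two-sided binomial test of H0: P(heads) = 0.5.
    # Under this symmetric null, the two-sided p-value is twice
    # the probability of the more extreme tail.
    k = max(heads, n - heads)
    tail = sum(comb(n, i) for i in range(k, n + 1))
    return min(1.0, 2 * tail / 2 ** n)

print(two_sided_p_fair_coin(7, 10))      # ≈ 0.344: plausible for a fair coin
print(two_sided_p_fair_coin(700, 1000))  # astronomically small: not plausible
```

The same 70% heads rate is unremarkable at $n = 10$ but essentially impossible at $n = 1000$ if the coin is fair.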
From a mathematical standpoint, as you already note, holding the numerator constant, the t-statistic increases with sample size, making the p-value smaller. This is certainly the behavior we generally expect. However, that reasoning only works when the alternative hypothesis is true.
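A quick simulation shows this shrinking p-value under a true alternative. This is a sketch only: the effect size of 0.2 standard deviations and the normal approximation to the t distribution are my simplifying assumptions.

```python
import math
import random
import statistics

random.seed(1)

def p_value(sample, mu0=0.0):
    # One-sample test of H0: mean = mu0, using the normal
    # approximation to the t distribution for simplicity.
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    z = (statistics.fmean(sample) - mu0) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# True mean is 0.2 but the null says 0, so the alternative is true.
for n in (25, 100, 400, 1600):
    ps = [p_value([random.gauss(0.2, 1) for _ in range(n)])
          for _ in range(500)]
    print(n, statistics.median(ps))  # typical p-value shrinks as n grows
```

The numerator settles near the fixed effect 0.2 while the denominator keeps shrinking, so the t-statistic grows and the typical p-value collapses toward 0.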
When the null hypothesis is true, the distribution of the p-value does not change with sample size: it is uniform on $[0,1]$ (for a continuous test statistic). So the p-value should be relatively stable as $n$ grows. The reason for this is simply that
$$\bar X \to \mu$$
in probability as $n\to\infty$. Then, since the null hypothesis says the true mean is $\mu$, i.e.
$$H_0:\operatorname{E}[\bar X] = \mu,$$
the numerator $\bar X - \mu$ shrinks toward $0$ at the same $1/\sqrt{n}$ rate at which the denominator $s/\sqrt{n}$ shrinks, so the two effects cancel: instead of growing with $n$, the t-statistic stays stable (it converges in distribution to a standard normal). Also, without any reference to the LLN, if the null is true we expect $\bar X \approx \mu$, which has the same effect regardless of sample size.
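The stability under the null can be checked by simulation. In this sketch (plain Python; the normal approximation to the t distribution is again a simplifying assumption of mine), the null is true and the typical p-value barely moves as $n$ grows 100-fold:

```python
import math
import random
import statistics

random.seed(0)

def p_value(sample, mu0=0.0):
    # One-sample test of H0: mean = mu0 (normal approximation to t).
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    z = (statistics.fmean(sample) - mu0) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Data generated with true mean 0, matching the null H0: mean = 0.
for n in (20, 2000):
    ps = [p_value([random.gauss(0.0, 1) for _ in range(n)])
          for _ in range(1000)]
    # Under the null the p-value is roughly uniform on [0, 1],
    # so its mean stays near 0.5 at every sample size.
    print(n, round(statistics.fmean(ps), 2))
```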
As a final note, I should emphasize that in practice many outcomes are possible; perhaps the p-value even stays the same when you increase the sample size. It all depends on whether the null is true, on the effect size, and on how much you increase the sample size. There is also a tendency for the p-value to be small when the observed effect is tiny but the sample size is very large. Similarly, a large observed effect with a small sample can still produce a large p-value. These "empirical phenomena" are part of why p-values can be very misleading.
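Both caveats show up in a small simulation. This is a sketch under assumptions of mine: the effect sizes (0.05 and 0.6 standard deviations), the sample sizes, and the normal approximation to the t distribution are all illustrative choices.

```python
import math
import random
import statistics

random.seed(2)

def p_value(sample, mu0=0.0):
    # One-sample test of H0: mean = mu0 (normal approximation to t).
    n = len(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    z = (statistics.fmean(sample) - mu0) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Tiny effect (0.05 sd) with a huge sample: typically "significant".
tiny = [p_value([random.gauss(0.05, 1) for _ in range(10_000)])
        for _ in range(200)]

# Large effect (0.6 sd) with a tiny sample: often not significant.
big = [p_value([random.gauss(0.6, 1) for _ in range(5)])
       for _ in range(200)]

print(statistics.median(tiny))  # far below 0.05
print(statistics.median(big))   # typically above 0.05
```

A "significant" p-value with $n = 10{,}000$ here corresponds to a negligible effect, while a practically large effect at $n = 5$ routinely fails to reach significance, which is exactly why a p-value should not be read without the effect size and sample size alongside it.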