It's not fully clear what the problem is. I read the post you linked, and indeed, if the difference in coefficients stays the same, a larger sample size (and thus smaller standard errors) means a larger Hausman test statistic, and thus a greater chance of rejecting the null that both estimators are consistent.
Why is this problematic at all? Similarly, if you run a linear regression and the estimate takes the exact same value for $n=10,100,1000$, you're also more likely to reject the null that the estimate is $0$ at the larger sample sizes. More data means you're better able to detect differences, and thus typically more likely to reject a null.
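To make that concrete, here's a small simulation sketch (the effect size and numbers are invented, and I use a one-sample $t$-test as a stand-in for any test statistic): a fixed, small true effect goes from rarely detected to almost always detected as $n$ grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def rejection_rate(n, reps=2000, true_mean=0.2):
    """Share of repeated experiments in which a t-test of
    H0: mean = 0 rejects at the 5% level."""
    rejections = 0
    for _ in range(reps):
        x = rng.normal(loc=true_mean, scale=1.0, size=n)
        _, p = stats.ttest_1samp(x, popmean=0.0)
        rejections += p < 0.05
    return rejections / reps

for n in (10, 100, 1000):
    print(n, rejection_rate(n))
```

The true mean is $0.2$ at every $n$; only the sample size changes, and the rejection rate climbs toward $1$.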
It's tempting to think of significance as being driven by the number of observations, but that assumes the values (in my example, the estimate; in yours, the difference in coefficient estimates) stay exactly the same as you increase the number of observations. The whole point of these tests is that with a small number of observations, it's far more likely you observe quite different estimates each time you repeat the process, whereas with a large number of observations, this is less likely. This underlies much of statistical testing. Furthermore, insignificance in small samples doesn't mean there is no difference, but rather that you fail to reject the null: you lack the precision to observe a difference, but that doesn't mean it's not there.
If the difference between OLS and IV were truly $0$ under the data generating process, then no matter how much data you observe, you would still fail to reject the null hypothesis (you will reject occasionally, but only at a rate equal to the significance level; the probability of rejecting when the null is in fact false is called power). If there is a difference in the data generating process, no matter how small, then with enough data you will eventually be able to pick up this effect. In your case, assuming the assumptions for a Hausman test hold, you are able to detect very small differences between IV and OLS, and you reject the null that they are the same. The relative difference might be small, but you still reject the null. This suggests there truly is a difference, and if you were to reduce the sample size (by the way, I can't think of a good reason to ever do this unless you're holding out training data or something), you would simply fail to detect this difference. It doesn't mean it's not there; you would just be fudging your data until you failed to detect the difference, basically hiding it.
The effect is not coming from your number of observations; it's a property of the data generating process. But the ability to observe that difference does depend on the number of observations, and that's exactly what you are seeing.
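Here's a hedged simulation sketch of that point (the data generating process, instrument strength, and degree of endogeneity are all made up): the OLS-minus-IV gap settles at a nonzero value determined by the DGP, regardless of $n$; only the sampling noise around it shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)

def ols_iv_gap(n):
    # DGP with mild endogeneity: x is correlated with the error u,
    # while z is a valid instrument (drives x, independent of u).
    z = rng.normal(size=n)
    u = rng.normal(size=n)
    x = z + 0.2 * u + rng.normal(size=n)  # the 0.2*u term makes x endogenous
    y = 1.0 * x + u                       # true coefficient is 1
    b_ols = (x @ y) / (x @ x)             # OLS slope (no intercept, all zero-mean)
    b_iv = (z @ y) / (z @ x)              # simple just-identified IV estimator
    return b_ols - b_iv

for n in (100, 10_000, 1_000_000):
    print(n, ols_iv_gap(n))
```

At small $n$ the gap bounces around; at large $n$ it stabilizes near its population value ($\operatorname{cov}(x,u)/\operatorname{var}(x) \approx 0.098$ here), which is exactly why the test starts rejecting: the gap doesn't shrink, the standard errors do.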
Finally, failing to reject or rejecting a null is one interesting statement you can make given data, but it's not the only one. The magnitude of the effect also matters, which is why confidence intervals are important. Consider a regression coefficient: rejecting the null at $.05$ with a confidence interval of, say, $[1, 50000]$ could be interesting, but even though you reject the null, you know little about the effect. In contrast, for that same coefficient, failing to reject the null at $.05$ with a confidence interval of, say, $[-.0001, .0001]$ could be far more interesting, as you can basically pinpoint the effect.
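A quick numeric sketch of the contrast (the estimates and standard errors are invented): the first coefficient is "significant" but its interval is uselessly wide; the second is "insignificant" but pinned down almost exactly.

```python
from scipy import stats

def ci_95(estimate, se):
    # Normal-approximation 95% confidence interval.
    z = stats.norm.ppf(0.975)
    return (estimate - z * se, estimate + z * se)

# Rejects the null (interval excludes 0) yet tells you little:
print(ci_95(25000.0, 12000.0))
# Fails to reject (interval contains 0) yet pinpoints the effect:
print(ci_95(0.0, 0.00005))
```

In the first case you know only that the effect is positive; in the second you know it is essentially zero, which is often the more informative statement.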