There's a formula called "Bayes' Theorem" that says that if you start out assigning probability P1 to a hypothesis H, and you see evidence E, then you should adjust your probability to:
P2 = P1*(probability of seeing E given that H is true)/(probability of seeing E).
So if something would normally be very unlikely to be seen, but is probable if the hypothesis is true, then seeing it should significantly increase your confidence in the hypothesis. However, if something is about equally likely to be seen whether or not the hypothesis is true, then seeing it should not change your confidence.
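As a concrete sketch of that update (all the numbers below are made up purely for illustration), here it is computed directly in Python:

```python
# A minimal Bayes update with illustrative, made-up numbers.
p1 = 0.1                  # P(H): prior probability of the hypothesis
p_e_given_h = 0.8         # P(E | H): probability of seeing E if H is true
p_e_given_not_h = 0.05    # P(E | not H): probability of seeing E if H is false

# P(E) = P(E | H) * P(H) + P(E | not H) * P(not H)
p_e = p_e_given_h * p1 + p_e_given_not_h * (1 - p1)

# Bayes' theorem: P2 = P1 * P(E | H) / P(E)
p2 = p1 * p_e_given_h / p_e
print(f"posterior P2 = {p2:.3f}")  # about 0.64: rare-but-expected evidence raises confidence a lot
```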
Statistical analysis can tell you the probability that a particular study would arrive at a given result if the null hypothesis were true, but that is not the same as the probability that you would end up seeing that result. Unfortunately, the first number is routinely treated as if it were the second.
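For reference, the first of those two numbers is just the ordinary tail probability of a test statistic under the null; a toy version, assuming a one-sided z-test and an invented observed value of 2.0, looks like this:

```python
import math

def p_value_one_sided(z_observed):
    # P(statistic >= z_observed | null hypothesis), for a standard normal statistic
    return 1 - 0.5 * (1 + math.erf(z_observed / math.sqrt(2)))

print(p_value_one_sided(2.0))  # ~0.023: the chance of a result this extreme under the null
```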
This difference is the basis of the Monty Hall paradox: if you pick Door A and learn that Door B has a goat, the bare fact "Door B has a goat" is equally likely whether the car is behind Door A or behind Door C, so by itself it gives you no reason to switch. However, the event "Monty showed me that Door B has a goat" is less likely if Door A has the car than if Door C does, because in the first case Monty has only a 50% chance of choosing to open Door B. Therefore, while merely knowing that Door B has a goat shouldn't cause you to switch, knowing that you were shown that Door B has a goat should.
That is, if Monty Hall always shows you Door B, regardless of what is behind it, then seeing that it's a goat shouldn't make you switch. But if Monty Hall never shows you Door B when it has the car, always shows you Door B when Door C has the car, and chooses randomly between Door B and Door C when they both have goats, then seeing that Door B has a goat should make you switch.
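Here is a small Monte Carlo sketch of those two host behaviors (the door labels and policies are just the ones described above, and the trial count is arbitrary); it estimates how likely the car is to be behind your door, Door A, given that you were shown a goat behind Door B:

```python
import random

def trial(host_policy):
    """Place the car uniformly at random and return (car_door, shown_door)."""
    car = random.choice(["A", "B", "C"])
    if host_policy == "always_B":
        # Host opens Door B no matter what is behind it.
        shown = "B"
    else:  # "standard": host never opens your door (A) and never reveals the car
        if car == "A":
            shown = random.choice(["B", "C"])
        elif car == "B":
            shown = "C"
        else:  # car == "C"
            shown = "B"
    return car, shown

def p_car_at_A_given_goat_at_B(host_policy, n=200_000):
    # Keep only trials where Door B was opened and revealed a goat.
    kept = [car for car, shown in (trial(host_policy) for _ in range(n))
            if shown == "B" and car != "B"]
    return sum(c == "A" for c in kept) / len(kept)

print("host always opens B :", p_car_at_A_given_goat_at_B("always_B"))  # ~0.50
print("standard Monty Hall :", p_car_at_A_given_goat_at_B("standard"))  # ~0.33
```

Under the "always opens Door B" policy the estimate comes out near 1/2, so switching gains nothing; under the standard policy it comes out near 1/3, so switching to Door C wins about two times out of three.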
Similarly, if someone shows you the result of a study, and you can confidently say "I would have seen this statistic regardless of what it had been", then you can take the naive probability calculations at face value. But if you see a statistic and realize that it probably wouldn't have been mentioned if it weren't so impressive, then you have to adjust for that bias.
So if you have a rigorous, predetermined procedure by which you become aware of results, and the probability of your seeing a result does not depend on what the result turns out to be, then you don't have to worry about the distinction between "Probability of E" and "Probability of knowing E". But once those probabilities diverge, you have an extra parameter to estimate, you will likely have only a vague idea of what it should be, and it will be very tempting to simply ignore the issue.
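As a sketch of what that extra parameter does to the math (the distributions, the prior, and the "you were shown the most impressive of N studies" reporting rule are all invented for illustration), compare the naive update with one that accounts for the filtering:

```python
import math

def normal_pdf(x, mean=0.0):
    return math.exp(-(x - mean) ** 2 / 2) / math.sqrt(2 * math.pi)

def normal_cdf(x, mean=0.0):
    return 0.5 * (1 + math.erf((x - mean) / math.sqrt(2)))

# Hypothetical setup: under H0 the study statistic ~ Normal(0, 1),
# under H1 it ~ Normal(1, 1). Prior P(H1) = 0.5. We are shown x = 2.0.
x, prior = 2.0, 0.5

# Naive update: treat x as a randomly chosen study.
naive_lr = normal_pdf(x, 1) / normal_pdf(x, 0)
naive_posterior = prior * naive_lr / (prior * naive_lr + (1 - prior))

# Filtered update: assume we were shown the most impressive of N studies.
# N is the extra parameter you have to estimate, and usually only vaguely know.
N = 10
def max_likelihood(mean):
    # density of the maximum of N i.i.d. Normal(mean, 1) draws, evaluated at x
    return N * normal_pdf(x, mean) * normal_cdf(x, mean) ** (N - 1)

filtered_lr = max_likelihood(1) / max_likelihood(0)
filtered_posterior = prior * filtered_lr / (prior * filtered_lr + (1 - prior))

print(f"naive posterior:    {naive_posterior:.2f}")    # ~0.82
print(f"filtered posterior: {filtered_posterior:.2f}")  # ~0.54
```

In this made-up setup, the same "impressive" statistic that naively pushes you to about 82% confidence is worth only about 54% once you account for how it was selected, and the answer depends heavily on the value of N that you had to guess.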