Yes, potentially...if they tested a LOT of diseases, the result was only barely significant, and they ONLY reported the significant effect. But (although I'm not a drug researcher) I don't think this is likely to be a serious concern in practice.
Here's the worst-case scenario: Let's say a shady drug company comes up with some random chemical compound that doesn't actually do anything and runs 100 different randomized trials, each looking at the effect of the "drug" on a different disease (bone cancer, throat cancer, COVID, measles, norovirus, etc.). Let's say FIVE of those 100 trials show results that are barely significant at the 95% confidence level. Well, that's hardly surprising, since what "significant at the 95% level" MEANS is that IF the true effect were zero (which it is) then we would only expect to see an effect this large 5 times out of 100. So we got exactly the number of "false positives" that we would expect by chance. Obviously that can't be used as evidence that the drug does anything! Any statistician looking at this study would say "these 'significant' results don't mean anything." But if the company only reported the five significant results and suppressed the rest, then we wouldn't know about the 95 null results, and might think the results were legit.
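If you want to see this play out, here's a minimal sketch in Python (made-up arm sizes and a plain t-test, just for illustration) simulating 100 trials of a drug that truly does nothing:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 100   # one trial per disease
n_per_arm = 50   # hypothetical patients per arm
alpha = 0.05

false_positives = 0
for _ in range(n_trials):
    # The "drug" does nothing: both arms are drawn from the same distribution
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    _, p = stats.ttest_ind(treatment, control)
    if p < alpha:
        false_positives += 1

print(f"'Significant' results out of {n_trials}: {false_positives}")
# Usually prints a number near 5 -- roughly alpha * n_trials, by chance alone.
```

Run it a few times with different seeds and you'll get a handful of "significant" results every time, even though the true effect is zero in every single trial.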
This is a toy example, however, and not likely to be an issue in the real world of drug development (among other things, RCTs are expensive!). If you are only testing the drug on 2 or 3 diseases, then multiple comparisons issues are less problematic, and if the result is significant at, say, p < .001, then you can probably ignore them, since that means you'd only expect a result that large due to chance in one out of a THOUSAND trials. Of course, replicating the result is another way to be sure, but that's also expensive.
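You can make that intuition concrete with the standard family-wise error rate formula: for n independent tests each run at level alpha, the chance of at least one false positive is 1 - (1 - alpha)^n. A quick sketch (the test counts are just examples):

```python
def familywise_error_rate(n_tests, alpha):
    # Probability of at least one false positive among n independent
    # tests, each run at significance level alpha.
    return 1 - (1 - alpha) ** n_tests

for n in (1, 3, 100):
    print(f"{n:>3} tests at p<0.05:  {familywise_error_rate(n, 0.05):.3f}")
    print(f"{n:>3} tests at p<0.001: {familywise_error_rate(n, 0.001):.4f}")
```

With 3 tests at p < .05 the chance of a fluke is about 14%, which is manageable; with 100 tests it's about 99%. Tightening the threshold to p < .001 keeps the fluke probability small even across many tests, which is why a very strong result is much harder to explain away as a multiple comparisons artifact.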
The biggest issue with multiple comparisons/p-hacking is transparency. If everyone knows that you ran N different tests, then we can adjust our standard of significance to account for that. The big problem is when you run a ton of tests, find a bunch of null results, and don't tell anyone! That's really what p-hacking is: running a ton of different tests and only reporting the significant ones.
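The simplest such adjustment is the Bonferroni correction: if N tests were run, require p < alpha/N instead of p < alpha. A rough sketch of how that shuts down the shady-company example above (the p-values here are made up):

```python
def bonferroni_significant(p_values, alpha=0.05):
    # Which results survive once we account for ALL the tests run?
    # Bonferroni: require p < alpha / n instead of p < alpha.
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values], threshold

# Hypothetical: 100 tests, five "barely significant" at ~0.03, the rest null
p_vals = [0.03] * 5 + [0.5] * 95
flags, cutoff = bonferroni_significant(p_vals)
print(f"Adjusted cutoff: {cutoff}")                 # 0.0005
print(f"Results still significant: {sum(flags)}")   # 0
```

But of course this only works if we actually know all 100 tests happened, which is exactly the transparency point.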
And just because it's the best intuitive explanation of the multiple comparisons problem I've ever seen: https://xkcd.com/882/