The multiple comparisons problem always exists. The questions to consider are: (a) what you want to protect yourself from, and (b) what a reasonable reader or reviewer would expect you to demonstrate to support your argument.
(a) Each hypothesis test performed at α = 0.05 has a 1 in 20 chance of rejecting the null hypothesis even when it is true. The more tests you run, the more likely it becomes that at least one of them falsely rejects a true null hypothesis. In that sense, you always need to be on guard against the multiple comparisons problem, to keep from being led astray.
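To see how fast the risk grows, the family-wise error rate for m independent tests at level α is 1 − (1 − α)^m. A minimal sketch:

```python
# Family-wise error rate (FWER) for m independent tests at alpha = 0.05:
# P(at least one false rejection among m true nulls) = 1 - (1 - alpha)^m
alpha = 0.05
for m in (1, 5, 20):
    fwer = 1 - (1 - alpha) ** m
    print(f"{m:2d} tests: FWER = {fwer:.2f}")
# 20 tests at alpha = 0.05 give a FWER of about 0.64 -- roughly a 2-in-3
# chance of at least one false rejection.
```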
(b) In most fields there are generally accepted (if unconscious) expectations about when correction for multiple comparisons must be performed. In fields like cell biology, the problem is alleviated by the distinction between an overall scientific hypothesis and the individual statistical hypotheses evaluated in particular experiments. The argument behind a scientific hypothesis in cell biology* is expected to be based on multiple lines of inquiry and types of experiments. So you might perform twenty different types of experiments with separate statistical tests in a study, but you would not be expected to correct across those types of experiments for multiple comparisons. It's the combination of lines of evidence that supports the scientific hypothesis.
My sense of the external expectations in your specific examples is as follows:
This is a classic multiple comparisons situation, on which there is essentially universal agreement. If you compare each treatment against a control, you should correct for that number of comparisons. If you compare all treatments against each other, you need to correct for all pairwise comparisons.
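One common way to apply such a correction is the Holm step-down adjustment, which controls the family-wise error rate and is never less powerful than a plain Bonferroni correction. A minimal sketch, using made-up p-values for four hypothetical treatment-vs-control comparisons:

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values (controls the family-wise error rate)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, adj)  # enforce monotone adjusted p-values
        adjusted[i] = running_max
    return adjusted

# Hypothetical raw p-values for four treatment-vs-control comparisons:
pvals = [0.01, 0.04, 0.03, 0.20]
print(holm_adjust(pvals))  # approximately [0.04, 0.09, 0.09, 0.20]
```

With the adjusted p-values in hand, you reject exactly those hypotheses whose adjusted value falls below your chosen α.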
By the argument under (b) above, there would be no expectation of multiple comparison correction. (At the least, over 50 years I have never seen such a correction required by a reviewer or reported in a cell/molecular biology or biochemistry study.) The two types of biochemical tests represent independent lines of evidence with respect to the underlying scientific hypothesis you are evaluating.
The close relationship between t-tests and equivalence tests makes it unlikely that a reader or reviewer would expect a correction for multiple comparisons. It is best to design and perform the study for non-inferiority, then do the superiority test afterward. The FDA guidance on non-inferiority (NI) trials says (page 31):
"In general, when there is only one primary endpoint and one dose of the test treatment, a trial that is planned to demonstrate non-inferiority may also be used to test for superiority without concern about inflating the Type I error rate. This sequential testing procedure has the Type I error rates for tests of both non-inferiority and superiority controlled at the 2.5% level. A study designed primarily to show superiority, however, would yield credible evidence of non-inferiority only if the study had the key features of a NI study..."
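The fixed-sequence logic the guidance describes can be sketched with a one-sided 97.5% confidence bound on the treatment-minus-control difference; the margin and confidence bounds below are hypothetical numbers for illustration:

```python
# Sequential non-inferiority (NI) -> superiority testing, based on the lower
# bound of a one-sided 97.5% CI for (test - control). Hypothetical numbers.
def sequential_test(ci_lower, ni_margin):
    """Return (non_inferior, superior) claims supported by the lower bound.

    ci_lower:  lower bound of the one-sided 97.5% CI for (test - control),
               where larger differences favor the test treatment
    ni_margin: pre-specified non-inferiority margin (a positive number)
    """
    non_inferior = ci_lower > -ni_margin      # rule out being worse by > margin
    superior = non_inferior and ci_lower > 0  # tested only if NI succeeds
    return non_inferior, superior

print(sequential_test(ci_lower=-0.5, ni_margin=2.0))  # NI yes, superiority no
print(sequential_test(ci_lower=0.8, ni_margin=2.0))   # NI yes, superiority yes
```

Because the superiority test is attempted only after non-inferiority succeeds, and both use the same 2.5% one-sided level, the overall Type I error is not inflated.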
Multiple-comparison corrections may still be needed if an NI trial has multiple endpoints or multiple drug doses; see the FDA guidance for details.
*This might be different in some other areas of natural science or social sciences.