In the case of one-sided tests, if you choose which direction to test based on the direction the sample means happen to fall in, then under the null hypothesis you're only counting half the type-I errors, since the other half occur in the opposite direction.
In more detail: when the null hypothesis is true, the population means are identical, and the mean of the second sample is equally likely to fall on either side of the mean of the first. But by restricting yourself to a one-sided test in the direction the sample happens to point, you only count rejections on that side, even though equally extreme values on the other side can happen just as easily.
This makes your results look more significant than they really are.
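A quick simulation makes the inflation concrete. Here's a minimal sketch (assuming NumPy and SciPy are available): both samples are drawn from the same distribution, the test direction is picked by peeking at the sample means, and the rejection rate at a nominal 5% level comes out near 10%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 10_000, 30
rejections = 0

for _ in range(n_sims):
    # Null hypothesis is true: both samples come from the same distribution
    a = rng.normal(size=n)
    b = rng.normal(size=n)
    # "Peek" at the data to pick the direction of the one-sided test
    direction = "greater" if b.mean() > a.mean() else "less"
    p = stats.ttest_ind(b, a, alternative=direction).pvalue
    rejections += p < 0.05

rate = rejections / n_sims
print(rate)  # close to 0.10 -- roughly double the nominal 0.05
```

The observed type-I error rate is about twice the level you claimed, exactly as the counting argument predicts.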
As a result, people may suspect you of significance-hunting unless you have an obvious reason to use a one-sided test. The typical advice is: in general, avoid one-sided tests unless there's a very good a priori reason for one.
This doesn't just apply to one-sided hypothesis tests -- generally speaking, no hypothesis test has its advertised properties if it's formulated after you see the data.
So the last part of the advice is misleading: where it says "if they are not ..." --- if there is any freedom in what or how you test that you can exercise on the basis of seeing the data, the tests don't work as they're supposed to. In particular, your actual type-I error rate is higher than your stated significance level (or equivalently, the honest p-values are larger than the ones you report).
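For the specific case of a symmetric test like the t-test, the relationship is exact: the one-sided p-value in the direction favoured by the data is always half the two-sided p-value, so the honest p-value is double what a post-hoc one-sided test reports. A small check (assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(size=20)
b = rng.normal(size=20)

# Pick the one-sided alternative after looking at the sample means
direction = "greater" if b.mean() > a.mean() else "less"
p_one = stats.ttest_ind(b, a, alternative=direction).pvalue
p_two = stats.ttest_ind(b, a).pvalue

print(p_one, p_two)  # the one-sided p is exactly half the two-sided p
```

So reporting the post-hoc one-sided p-value is equivalent to halving a legitimate two-sided p-value.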