The consideration in question is whether your model, which is necessarily an approximation to reality, can still be considered a good approximation at the specified values of the input variables. It's more of a warning about potentially poor model quality in a region of the data space where you have seen little or no data: the model may perform badly there, but you wouldn't know it from the (nonexistent) data. It isn't a hard-and-fast rule, though; domain knowledge is important in making this assessment.
For example, models of wage growth vs. productivity growth and % employment developed using data from a period of more-or-less full employment may be very poor predictors of wage growth at a given level of productivity growth and % employment during a period of high unemployment. More simply, a linear approximation to $y = \sqrt{x}$ isn't bad when $x$ ranges from 1000 to 1001, but it will produce very poor estimates when the input value of $x$ is, say, 500.
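To make that concrete, here's a minimal numeric sketch of the $\sqrt{x}$ example (the noise level, sample size, and seed are my own illustrative choices, not anything canonical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit a straight line to y = sqrt(x) on x in [1000, 1001]
# (plus a little noise), then extrapolate to x = 500.
x_train = np.linspace(1000, 1001, 50)
y_train = np.sqrt(x_train) + rng.normal(scale=1e-3, size=x_train.size)

slope, intercept = np.polyfit(x_train, y_train, deg=1)

x_new = 500.0
y_hat = intercept + slope * x_new
print(f"predicted: {y_hat:.3f}, true: {np.sqrt(x_new):.3f}")
# predicted: ~23.72, true: ~22.36 -- a modest-looking absolute error,
# but enormous compared with the ~0.016 spread of y over the training range.
```

The fit is essentially perfect inside $[1000, 1001]$, which is exactly why nothing in the training data warns you about the extrapolation.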
To the point of your question - if your new data is in a region where it's not clear that your model is as good as its overall fit would indicate, the prediction intervals are likely narrower than they ought to be and should be interpreted with caution. If your new data is in a region where you have grave doubts about the model, it's best not to make any predictions at all - or, if for some reason you must, to load them up with caveats about the model's potential for error. (These statements are, of course, rules of thumb, and my own opinions.)
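A rough sketch of the interval problem, continuing the same toy example (statsmodels here is just my assumed tooling; the specific numbers depend on the illustrative noise scale above):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x_train = np.linspace(1000, 1001, 50)
y_train = np.sqrt(x_train) + rng.normal(scale=1e-3, size=x_train.size)

# Ordinary least squares on the narrow training range.
X = sm.add_constant(x_train)
fit = sm.OLS(y_train, X).fit()

# 95% prediction interval at x = 500, far outside the data.
# Design row built by hand: [constant, x].
X_new = np.array([[1.0, 500.0]])
frame = fit.get_prediction(X_new).summary_frame(alpha=0.05)
lo, hi = frame["obs_ci_lower"][0], frame["obs_ci_upper"][0]
print(f"95% PI: [{lo:.3f}, {hi:.3f}], true value: {np.sqrt(500):.3f}")
# The interval sits around ~23.7 and misses sqrt(500) ~ 22.36 entirely:
# it accounts for noise and parameter uncertainty, but not for the
# linearity assumption breaking down outside the training region.
```

The interval widens somewhat with distance from the data (the leverage term), but it still assumes the linear form holds everywhere, so it can't capture the kind of model failure being discussed here.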