Suppose you have a binary random variable $Y$ and several other random variables $X_1,...,X_p$. Your goal is to "predict $Y$ using $X_1,...,X_p$." So you go ahead and fit a logistic regression, which estimates $P(Y=1 | X_1,...,X_p)$. However, when you evaluate the results of this logistic regression, you find that the AUC is terrible.
One possibility is that your model is "bad," i.e., your estimates of the coefficients for $X_1,...,X_p$ are incorrect. Perhaps you did not have enough samples, or the model was misspecified, etc. However, it also occurs to me that your model could be "perfect," and simply that $X_1,...,X_p$ are not "very predictive."
By this, I mean the following: suppose $P(Y=1)=0.1$, but conditioning on the random variables $X_1,...,X_p$ changes the (true) probability very little. For example, $$0.05\leq P(Y=1|X_1=x_1,...,X_p=x_p) \leq 0.2$$
for all $x_1,...,x_p$. In other words, $Y$ is almost independent of $X_1,...,X_p$. Since AUC measures only how well the predicted scores rank positives above negatives, and here the true probabilities barely differ across the population, even a logistic regression that perfectly learns the conditional expectation could still have a very bad AUC!
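To make this concrete, here is a minimal simulation sketch (the coefficient values, sample size, and the use of scikit-learn are illustrative choices on my part): the data are generated from a logistic model whose true probabilities stay roughly inside the band above, so the fitted model is correctly specified, yet its AUC remains poor.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 100_000, 3

X = rng.normal(size=(n, p))
# Small coefficients keep P(Y=1 | X) roughly within [0.05, 0.2]:
beta = np.array([0.2, -0.15, 0.1])
intercept = -2.2  # baseline P(Y=1) close to 0.1
true_prob = 1 / (1 + np.exp(-(intercept + X @ beta)))
y = rng.binomial(1, true_prob)

# The fitted model is correctly specified: the truth is itself logistic.
fit = LogisticRegression().fit(X, y)
pred_prob = fit.predict_proba(X)[:, 1]

print("true probability range:", true_prob.min(), true_prob.max())
print("AUC:", roc_auc_score(y, pred_prob))
# The estimated coefficients land near the truth, but the AUC is only
# modestly above 0.5, because the conditional probabilities barely vary.
```

Running something like this, the AUC stays far from 1 even though the model recovers the data-generating coefficients almost exactly.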
Is there a name for this situation? And is there a way to detect it? How can I tell that my conditional expectations are "correct," even though my predictive ability is so poor? Or am I not even phrasing the problem correctly in statistical terms?