The key distinction is between modelling the true law and actually knowing it.
Assume your data obeys an unknown true law $P(y=1 \mid x) = f(x)$. Then the Bayes-optimal classifier is "classify $y=1$ when $f(x) > 0.5$". This holds for any law; it has nothing to do with any particular algebraic form. In practice you don't know $f$ and never will, so the Bayes-optimal classifier is a purely theoretical object.
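A minimal sketch of that rule, with a made-up $f$ chosen deliberately not to look like any standard model, just to stress that nothing algebraic is involved:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # An arbitrary "true law" P(y = 1 | x); any function into [0, 1] works.
    return 0.9 * np.sin(x) ** 2

x = rng.uniform(-3, 3, size=200_000)
y = (rng.uniform(size=x.size) < f(x)).astype(int)  # labels drawn from the true law

# Bayes-optimal classifier: predict 1 exactly when f(x) > 0.5.
bayes_pred = (f(x) > 0.5).astype(int)
print("Bayes accuracy:", (bayes_pred == y).mean())
```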
Now, imagine you don't know $f$, but you do know that $f(x) = \operatorname{logit}^{-1}(\beta x)$ and merely don't know $\beta$. This happens only in simulations, where you control the underlying true law and hide $\beta$. You estimate it as $\hat\beta$ and say "classify $y=1$ when $\operatorname{logit}^{-1}(\hat\beta x) > 0.5$". This is not Bayes optimal, since you don't have the exact $\beta$. But it is asymptotically Bayes optimal, since $\hat\beta \to \beta$ as the training set grows to infinity.
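You can watch that convergence in a simulation. A sketch, assuming scikit-learn for the fit and a hidden $\beta = 2$ (both choices arbitrary):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
beta = 2.0                                   # hidden true parameter

def f(x):
    return 1.0 / (1.0 + np.exp(-beta * x))   # true law: logit^{-1}(beta * x)

for n in (100, 10_000, 1_000_000):
    x = rng.normal(size=n)
    y = (rng.uniform(size=n) < f(x)).astype(int)
    # No intercept, to match f(x) = logit^{-1}(beta * x); huge C ~ no regularization.
    model = LogisticRegression(fit_intercept=False, C=1e6).fit(x.reshape(-1, 1), y)
    print(n, "beta_hat =", model.coef_[0, 0])
```

As $n$ grows, `beta_hat` approaches the hidden `beta`, so the fitted classifier approaches the Bayes-optimal one.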
But in a real situation, logistic regression is only a guess for the unknown law, and that guess is essentially always false. You not only don't know the parameter; you also don't know how good an approximation logistic regression is to the true unknown law. So the logistic regression predictor is not Bayes optimal, not even asymptotically. Worse: you can't know how far it is from optimality.
There is one setting where you can measure this: simulate data with an $f$ that is not logistic and see how good the logistic approximation is. But that is not a real situation.
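A sketch of that experiment, again with arbitrary choices (a step-like, decidedly non-logistic $f$, and scikit-learn for the fit):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def f(x):
    # Non-logistic true law: no beta makes logit^{-1}(beta * x) match this.
    return np.where(np.abs(x) < 1, 0.9, 0.1)

n = 500_000
x = rng.normal(size=n)
y = (rng.uniform(size=n) < f(x)).astype(int)

# Bayes accuracy: the unbeatable upper bound (about 0.9 here).
bayes_acc = ((f(x) > 0.5).astype(int) == y).mean()

# Logistic accuracy: stays strictly below the bound, no matter how large n gets.
model = LogisticRegression().fit(x.reshape(-1, 1), y)
logit_acc = (model.predict(x.reshape(-1, 1)) == y).mean()

print("Bayes accuracy:   ", bayes_acc)
print("logistic accuracy:", logit_acc)
```

Here the gap is large: the true law is symmetric in $x$, so the best logistic fit is nearly flat and the classifier degenerates to predicting the majority class, while the Bayes classifier exploits the region $|x| < 1$. The only reason the gap is measurable at all is that the simulation knows $f$.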