In linear regression, multicollinearity can render predictors non-significant, as discussed in this question: How can a regression be significant yet all predictors be non-significant?
If this is the case, the amount of multicollinearity can be assessed through, for example, the variance inflation factor (VIF).
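For concreteness, here is a minimal sketch of what I mean, using statsmodels on simulated data (the column names and the near-collinearity between x1 and x2 are made up for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)
X = pd.DataFrame({"x1": x1, "x2": x2, "x3": x3})

Xc = sm.add_constant(X)  # include the intercept when computing VIFs
vifs = pd.Series(
    [variance_inflation_factor(Xc.values, i) for i in range(1, Xc.shape[1])],
    index=X.columns,
)
print(vifs)  # x1 and x2 show very large VIFs, x3 stays near 1
```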
As far as I understand, this approach is not available in logistic regression. Nevertheless, it is very common to do stepwise reduction of the variable space based on significance, or to use L1 regularization to reduce the number of predictors and avoid overfitting.
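To illustrate the kind of L1-penalized fit I have in mind, here is a minimal sketch with scikit-learn; the data are simulated and the penalty strength C is arbitrary:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated data with some redundant (correlated) predictors
X, y = make_classification(
    n_samples=1000, n_features=10, n_informative=4, n_redundant=3, random_state=0
)

model = make_pipeline(
    StandardScaler(),  # the L1 penalty is sensitive to predictor scale
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X, y)

# Some coefficients are shrunk to exactly zero; which member of a group of
# correlated predictors survives can be somewhat arbitrary.
print(model.named_steps["logisticregression"].coef_)
```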
In that case, isn't it possible to fool yourself and remove variables that might have been significant, or that would have had larger beta values, simply because the variable set contains several collinear or highly correlated predictors? Even when this is done properly with cross-validation or bootstrap validation, it still intuitively feels like this could happen, especially in settings where you don't have all the variables beforehand but instead have to construct them yourself, as is common in much of data science today where large amounts of data are available.
Is there any way to avoid this effect, or at least to evaluate the collinearity of the predictors?