You definitely need the intercept. Say you have two binary $X$'s and a binary $Y$. Without the intercept, your model is
$\log(p/(1-p)) = \beta_1 X_1 + \beta_2 X_2$.
When both $X$'s are at their reference levels, you get
$\log(p/(1-p)) = \beta_1 \cdot 0 + \beta_2 \cdot 0 = 0$.
Since $\log(p/(1-p)) = 0$ implies $p = 0.5$, this forces the predicted probability to be exactly 0.5 whenever both independent variables are at their reference levels. So unless you have a strong substantive rationale for fixing the "success" probability at 0.5 in that case, you should include the intercept in your model.
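A quick numerical sketch of this point: in a no-intercept model, the linear predictor at the reference levels is $0$ regardless of the fitted coefficients, so the predicted probability is always $1/(1+e^{0}) = 0.5$. The coefficient values below are arbitrary, purely for illustration.

```python
import math

def predict_prob(b1, b2, x1, x2):
    """Predicted probability from a no-intercept logistic model."""
    eta = b1 * x1 + b2 * x2          # linear predictor (no intercept term)
    return 1.0 / (1.0 + math.exp(-eta))

# At the reference levels (x1 = x2 = 0), any coefficients give p = 0.5.
for b1, b2 in [(-3.0, 0.7), (0.01, 5.2), (100.0, -42.0)]:
    print(predict_prob(b1, b2, 0, 0))   # 0.5 every time
```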
(On the other hand, if you do not "leave one level out" when coding your dummy and categorical $X$ variables, then you must exclude the intercept to avoid perfect multicollinearity. In that case the problem noted above vanishes.)
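To see the collinearity concretely: if you keep dummies for every level of a binary predictor, the dummy columns sum to the intercept column exactly, so the design matrix is rank-deficient. A small illustration with made-up data:

```python
import numpy as np

# Hypothetical binary predictor, with dummies kept for BOTH levels
x = np.array([0, 1, 1, 0, 1])
d0 = (x == 0).astype(float)    # dummy for level 0
d1 = (x == 1).astype(float)    # dummy for level 1
ones = np.ones_like(d0)        # intercept column

# d0 + d1 reproduces the intercept column -> perfect multicollinearity
print(np.allclose(d0 + d1, ones))   # True

# The 3-column design matrix [1, d0, d1] has rank 2, not 3
X = np.column_stack([ones, d0, d1])
print(np.linalg.matrix_rank(X))     # 2
```

Dropping either the intercept or one of the dummy columns restores full rank, which is why the two parameterizations are interchangeable.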