A useful rule of thumb for logistic regression is to limit yourself to about 1 unpenalized predictor per 15 cases of the minority class. See Section 4.4 of Frank Harrell's course notes, for example. That rule applies to the typical problem in medicine, epidemiology, and the social sciences, in which the signal:noise ratio is small.
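To make the arithmetic concrete: with 60 cases in the minority class, that rule budgets about 4 unpenalized candidate predictors. A minimal sketch, with a simulated 0/1 outcome purely for illustration:

```r
set.seed(1)
y <- rbinom(200, 1, 0.3)      # hypothetical binary outcome
n_minority <- min(table(y))   # size of the minority class
floor(n_minority / 15)        # rough cap on unpenalized candidate predictors
```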
See, for example, this page linked in a comment from kjetil b halvorsen, and this page. If your signal:noise ratio is higher, you can get away with fewer cases per predictor.
I highlighted the word "unpenalized" above because you don't have to throw out all but 1 or 2 of your predictors. A penalized method (the "regularization" mentioned in one of the comments) allows you to use more predictors than that rule of thumb suggests.
The regression coefficients of the predictors are shrunk to lower magnitudes than they would have in a standard regression, to help avoid overfitting. The penalty strength that provides the best performance is typically chosen by cross-validation. Ridge regression ("L2 regularization") keeps shrunken coefficients for all predictors. LASSO ("L1 regularization") provides penalized coefficients for some predictors and sets the coefficients of the others exactly to 0.

My guess is that you would be better served by ridge regression here, perhaps after you apply your knowledge of the subject matter to reduce the effective number of predictors. See Harrell's notes for ideas on how to implement data reduction, cutting down the number of predictors without using the outcomes.
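One common form of outcome-blind data reduction is to replace correlated predictors with their leading principal components; this is just one of the approaches Harrell describes, not the only one. A minimal sketch, assuming a numeric predictor matrix `X` (simulated here for illustration):

```r
set.seed(1)
X <- matrix(rnorm(200 * 30), nrow = 200, ncol = 30)  # hypothetical predictors
pc <- prcomp(X, center = TRUE, scale. = TRUE)

## Keep enough components to explain, say, 80% of the predictor variance;
## note that the outcome is never consulted in this step.
var_explained <- cumsum(pc$sdev^2) / sum(pc$sdev^2)
k <- which(var_explained >= 0.80)[1]
X_reduced <- pc$x[, 1:k, drop = FALSE]
```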
For logistic regression, penalization is implemented in R's glmnet package.
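A minimal sketch of ridge-penalized logistic regression with glmnet, using simulated data purely for illustration; in practice `X` would be your numeric predictor matrix (or the reduced one from above) and `y` your 0/1 outcome:

```r
library(glmnet)

set.seed(1)
X <- matrix(rnorm(200 * 30), nrow = 200, ncol = 30)
y <- rbinom(200, 1, plogis(X[, 1] - X[, 2]))

## alpha = 0 is ridge (L2); alpha = 1 would be LASSO (L1).
## cv.glmnet chooses the penalty strength (lambda) by cross-validation.
cv_fit <- cv.glmnet(X, y, family = "binomial", alpha = 0)

## Coefficients at the cross-validated lambda: ridge retains every
## predictor, with coefficients shrunk toward zero.
coef(cv_fit, s = "lambda.min")

## Predicted probabilities for new observations would use
## predict(cv_fit, newx = ..., s = "lambda.min", type = "response").
```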