How much your model over-fits depends on, among other things, how many observations you have. As a rule of thumb, for observational studies, it gets bad when you have fewer than 10–20 observations per coefficient estimated (excluding the intercept). But you can cross-validate the full model & see, rather than reduce it just in case. There are two (not mutually exclusive) approaches that help when over-fitting is a problem:
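To make the "cross-validate the full model & see" point concrete, here's a minimal sketch in Python with scikit-learn on simulated data (the numbers are purely illustrative, not from anything above): a big gap between the apparent fit and the cross-validated fit is the symptom of over-fitting.

```python
# Sketch: check the full model by cross-validation rather than reducing it in advance.
# Data are simulated here purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 100, 20                                       # few observations per candidate predictor
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)     # only two predictors actually matter

full_model = LinearRegression()
apparent_r2 = full_model.fit(X, y).score(X, y)                        # in-sample (optimistic)
cv_r2 = cross_val_score(full_model, X, y, cv=10, scoring="r2").mean()  # out-of-sample estimate

# A large gap between the two suggests the full model is over-fitting.
print(f"apparent R^2:        {apparent_r2:.2f}")
print(f"cross-validated R^2: {cv_r2:.2f}")
```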
Data reduction. Here's where your expert comes in. It's not just a matter of looking at the names of the candidate predictors: consider the variability or prevalence of each in your sample, which are likely to be measuring much the same thing, which are measured most accurately, which have fewer missing values, &c. And selecting may need to be supplemented, or even supplanted, by combining: from simple averages or differences to principal components of variable clusters. At this stage you're also considering for which predictors it's worth allowing non-linear relationships with the response, or interactions. Section 4.7.7 here gives a way to use the deviance of the full model to guess how much data reduction will be helpful.
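For the combining step, here's one hypothetical sketch of replacing a cluster of variables thought to measure much the same thing with their first principal component, done without looking at the response. The column names and the choice of cluster are made up for illustration.

```python
# Sketch: data reduction by summarising a cluster of predictors with its first
# principal component. Variable names and the cluster itself are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
latent = rng.normal(size=200)
df = pd.DataFrame({
    "sbp_visit1": latent + rng.normal(scale=0.3, size=200),   # three noisy measurements
    "sbp_visit2": latent + rng.normal(scale=0.3, size=200),   # of much the same thing
    "sbp_visit3": latent + rng.normal(scale=0.3, size=200),
    "age":        rng.normal(60, 10, size=200),               # kept as-is
})

cluster = ["sbp_visit1", "sbp_visit2", "sbp_visit3"]
scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(df[cluster]))
df_reduced = df.drop(columns=cluster).assign(sbp_pc1=scores[:, 0])

# One column now stands in for the whole cluster, spending a single degree of freedom.
print(df_reduced.head())
```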
Regularization. The idea is to shrink the coefficient estimates to correct for the optimism introduced by over-fitting. Ridge regression shrinks estimates for all coefficients towards zero; LASSO shrinks some to zero, thereby performing variable selection; the elastic net combines both procedures. How much to shrink can be guided by cross-validation or a modified version of Akaike's Information Criterion.
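A minimal sketch of the three fits, with the amount of shrinkage chosen by cross-validation (scikit-learn again, on the same kind of simulated data; the penalty grid and l1_ratio are just illustrative choices):

```python
# Sketch: ridge, LASSO & elastic-net fits with cross-validated penalties.
import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV

rng = np.random.default_rng(2)
n, p = 100, 20
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

ridge = RidgeCV(alphas=np.logspace(-3, 3, 50)).fit(X, y)   # shrinks all coefficients towards zero
lasso = LassoCV(cv=10).fit(X, y)                            # sets some coefficients exactly to zero
enet  = ElasticNetCV(l1_ratio=0.5, cv=10).fit(X, y)         # mixes the two penalties

print("coefficients set to zero (ridge):      ", int(np.sum(ridge.coef_ == 0)))
print("coefficients set to zero (LASSO):      ", int(np.sum(lasso.coef_ == 0)))
print("coefficients set to zero (elastic net):", int(np.sum(enet.coef_ == 0)))
```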
Of course model selection is a big topic. The two books I've found most helpful are these:
Harrell (2001), Regression Modeling Strategies
Hastie et al. (2009), Elements of Statistical Learning