The issue you're alluding to is the 'approximate unidimensionality' topic in building psychological testing instruments, which was discussed in the literature quite a bit in the 1980s. The motivation at the time was that practitioners wanted to use traditional item response theory (IRT) models for their items, and those IRT models were exclusively limited to measuring unidimensional traits. Test multidimensionality was therefore treated as a nuisance that, it was hoped, could be avoided or ignored. This is also what led to the creation of the modified parallel analysis technique (Drasgow & Lissak, 1983) and the DETECT methods. These methods were --- and still are --- useful because linear factor analysis (what you are referring to) can be a decent limited-information proxy for full-information factor analysis of categorical data (which is what IRT is at its core), and in some cases (e.g., when polychoric correlations are used with a weighted least squares estimator, such as WLSMV or DWLS) can even be asymptotically equivalent to a small selection of ordinal IRT models.
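If you want to experiment with the parallel analysis idea directly, here is a minimal sketch of the classic Horn-style version using only NumPy: retain factors whose observed eigenvalues exceed those expected from noise-only data of the same dimensions. (The modified version of Drasgow & Lissak (1983) instead simulates the comparison data from a fitted unidimensional IRT model; the `responses` matrix and all settings below are hypothetical.)

```python
import numpy as np

def parallel_analysis(responses, n_sims=500, percentile=95, seed=0):
    """Horn-style parallel analysis: count factors whose observed
    eigenvalues exceed those expected from pure noise."""
    rng = np.random.default_rng(seed)
    n, p = responses.shape
    # Eigenvalues of the observed correlation matrix, descending.
    obs = np.linalg.eigvalsh(np.corrcoef(responses, rowvar=False))[::-1]
    # Reference eigenvalues from noise-only data of the same shape.
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        noise = rng.standard_normal((n, p))
        sims[i] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    thresholds = np.percentile(sims, percentile, axis=0)
    return int((obs > thresholds).sum())
```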
The consequence of ignoring additional traits/factors, other than obviously fitting the wrong model to the data (i.e., ignoring information about potential model misfit, though this may of course be trivial), is that trait estimates on the dominant factor will become biased and therefore less efficient. These conclusions depend, of course, on the properties of the additional traits (e.g., whether they are correlated with the primary dimension, whether they have strong loadings, how many cross-loadings there are, etc.), but the general theme is that ignoring secondary dimensions when estimating primary trait scores yields less effective estimates. See the technical report here for a comparison between a misfitted unidimensional model and a bi-factor model; it appears to be exactly what you are after.
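To see the effect concretely, here is a small simulation sketch (the loadings, sample size, and use of sklearn's `FactorAnalysis` are my own illustrative assumptions, not taken from the report): items load on a dominant trait plus a correlated secondary trait, but scores are estimated from a unidimensional model.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 2000

# Two correlated latent traits: theta1 (primary) and theta2 (secondary).
theta = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=n)

# 10 items all load on theta1; the last 4 also cross-load on theta2.
loadings = np.zeros((10, 2))
loadings[:, 0] = 0.7
loadings[6:, 1] = 0.5
data = theta @ loadings.T + 0.6 * rng.standard_normal((n, 10))

# Score the primary trait with a (misspecified) unidimensional model.
scores = FactorAnalysis(n_components=1).fit_transform(data)[:, 0]

# Absolute correlation (factor score signs are arbitrary); recovery of
# theta1 degrades as cross-loadings and the trait correlation grow.
print(abs(np.corrcoef(scores, theta[:, 0])[0, 1]))
```

Raising the cross-loadings or the trait correlation in this toy setup, and re-checking the recovery correlation, is an easy way to get intuition for which secondary-trait properties matter most.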
From a practical perspective, information criteria can be helpful when selecting the best model, as can model-fit statistics in general (RMSEA, CFI, etc.), because ignoring multidimensional information will negatively affect the overall fit to the data. But of course, poor overall fit is only one indication that the model is inappropriate for the data at hand; it's entirely possible that an improper functional form is being used (e.g., the item responses are non-linear or non-monotonic in the trait), so the respective items/variables should always be inspected as well.
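As a rough illustration of the information-criteria comparison, continuing the simulated `data` above (sklearn's `FactorAnalysis` exposes a Gaussian log-likelihood via `.score()`, though it provides no RMSEA or CFI; those would require a dedicated SEM package):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def aic_bic(data, n_factors):
    n, p = data.shape
    fa = FactorAnalysis(n_components=n_factors).fit(data)
    loglik = fa.score(data) * n  # .score() returns the mean log-likelihood
    # Free parameters: loadings + uniquenesses, minus rotational constraints.
    k = p * n_factors + p - n_factors * (n_factors - 1) // 2
    return -2 * loglik + 2 * k, -2 * loglik + np.log(n) * k

# Compare candidate dimensionalities on the simulated `data` above.
for m in (1, 2, 3):
    aic, bic = aic_bic(data, m)
    print(f"{m} factor(s): AIC = {aic:.1f}, BIC = {bic:.1f}")
```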
See also:
Drasgow, F., & Parsons, C. K. (1983). Application of unidimensional item response theory models to multidimensional data. Applied Psychological Measurement, 7(2), 189-199.
Drasgow, F., & Lissak, R. I. (1983). Modified parallel analysis: A procedure for examining the latent dimensionality of dichotomously scored item responses. Journal of Applied Psychology, 68, 363-373.
Kirisci, L., Hsu, T.-C., & Yu, L. (2001). Robustness of item parameter estimation programs to assumptions of unidimensionality and normality. Applied Psychological Measurement, 25(2), 146-162.