As I see it, there are two basic problems with observational studies that "control for" a number of independent variables. 1) You have the problem of missing explanatory variables and thus model misspecification. 2) You have the problem of multiple correlated independent variables--a problem that does not exist in (well-)designed experiments--and the fact that regression coefficients and ANCOVA tests of covariates are based on partials, which makes them difficult to interpret. The first is intrinsic to the nature of observational research and is addressed through scientific context and the process of competitive elaboration. The second is a matter of education and relies on a clear understanding of regression and ANCOVA models and exactly what those coefficients represent.
With respect to the first issue, it is easy enough to demonstrate that if all of the influences on some dependent variable are known and included in a model, statistical methods of control are effective and produce good predictions and estimates of effects for individual variables. The problem in the "soft sciences" is that all of the relevant influences are rarely included or even known, and thus the models are poorly specified and difficult to interpret. Yet many worthwhile problems exist in these domains; the answers simply lack certainty. The beauty of the scientific process is that it is self-correcting: models are questioned, elaborated, and refined. The alternative is to suggest that we cannot investigate these issues scientifically whenever we cannot design experiments.
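Both halves of that point are easy to see in a quick simulation--a minimal sketch in Python (using numpy; the variable names and effect sizes are illustrative, not from any real study). When both influences are in the model, the coefficients are recovered; omit one correlated influence and the remaining coefficient absorbs its effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two correlated influences on y; x1 partly depends on x2.
x2 = rng.normal(size=n)
x1 = 0.7 * x2 + rng.normal(size=n)
y = 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

def ols(cols, y):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_full = ols([x1, x2], y)  # well specified: both influences included
b_miss = ols([x1], y)      # misspecified: x2 omitted

print(b_full[1:])  # near [2.0, 3.0] -- statistical control works
print(b_miss[1])   # well above 2.0 -- x1 absorbs x2's omitted effect
```

With x2 omitted, the slope on x1 is roughly 2 + 3·Cov(x1, x2)/Var(x1) ≈ 3.4 here rather than 2--a concrete picture of what poor specification does to individual estimates.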
The second issue is a technical one in the nature of ANCOVA and regression models. Analysts need to be clear about what these coefficients and tests represent. Correlations among the independent variables influence regression coefficients and ANCOVA tests: they are tests of partials. These models remove from a given independent variable, and from the dependent variable, the variance associated with all of the other variables in the model, and then examine the relationship between those residuals. As a result, the individual coefficients and tests are very difficult to interpret outside the context of a clear conceptual understanding of the entire set of variables included and their interrelationships. This, however, produces NO problems for prediction--just be cautious about interpreting specific tests and coefficients.
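That "relationship between residuals" description is exact, not a metaphor (it is the Frisch-Waugh-Lovell result). A short numpy sketch, with made-up data, showing that the multiple-regression coefficient on x1 is identical to the slope you get by first residualizing both y and x1 on x2:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
x2 = rng.normal(size=n)
x1 = 0.6 * x2 + rng.normal(size=n)
y = 1.5 * x1 - 2.0 * x2 + rng.normal(size=n)

def fit(cols, y):
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Coefficient on x1 from the full model...
b_x1_full = fit([x1, x2], y)[1]

# ...equals the slope between the residuals after removing x2 from both.
resid_y  = y  - np.column_stack([np.ones(n), x2]) @ fit([x2], y)
resid_x1 = x1 - np.column_stack([np.ones(n), x2]) @ fit([x2], x1)
b_x1_resid = fit([resid_x1], resid_y)[1]

print(b_x1_full, b_x1_resid)  # numerically identical
```

So a "partial" coefficient literally describes leftover variation, which is why its meaning depends on every other variable in the model.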
A side note: The latter issue is related to a problem discussed previously in this forum on the reversing of regression signs--e.g., from negative to positive--when other predictors are introduced into a model. In the presence of correlated predictors, and without a clear understanding of the multiple and complex relationships among the entire set of predictors, there is no reason to EXPECT a (by nature partial) regression coefficient to have a particular sign. When there is strong theory and a clear understanding of those interrelationships, such sign "reversals" can be enlightening and theoretically useful. Though, given the complexity of many social science problems, I would not expect sufficient understanding to be common.
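Such a reversal is simple to manufacture--a sketch with invented numbers, where a confounder drives the predictor one way and the outcome the other, so the simple and partial slopes on x1 have opposite signs:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# A confounder x2 drives x1 strongly and drives y negatively.
x2 = rng.normal(size=n)
x1 = 0.9 * x2 + 0.3 * rng.normal(size=n)
y = 1.0 * x1 - 3.0 * x2 + rng.normal(size=n)

def fit(cols, y):
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_simple  = fit([x1], y)[1]      # negative: x1 carries x2's effect
b_partial = fit([x1, x2], y)[1]  # positive: x2's effect partialled out

print(b_simple, b_partial)  # opposite signs
```

Neither sign is "wrong"; they answer different questions, which is exactly why interpretation requires understanding the whole set of predictors.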
Disclaimer: I'm a sociologist and public policy analyst by training.