I agree with @Michelle. In general, experimental control allows for causal inferences, but statistical control does not. In principle, statistically controlling for all confounding variables would allow you to make valid causal inferences, but in practice you have two problems:
First, fishing through a lot of candidate predictors, and fitting lots of different models to find demographic variables that, once controlled for, 'improve' the picture, will lead to substantial errors. For one thing, p-values (should you care about them) will be inaccurate: the p-value returned by your software might be <.05, but the real p-value could be much higher. For another, your parameter estimates will be badly biased, precisely because you kept the models that made them look best. I discussed this issue in a related way here, which may help make clear what is going on.
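To see the scale of the first problem, here is a minimal simulation (everything below is made-up data with hypothetical names): 20 pure-noise candidate predictors, tried one at a time, with the analyst reporting whichever looks best.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, k, n_sims = 200, 20, 2000
hits = 0

for _ in range(n_sims):
    y = rng.normal(size=n)            # outcome: pure noise
    X = rng.normal(size=(n, k))       # 20 candidate predictors: pure noise
    # Fit one simple regression per candidate, keep the best-looking p-value.
    best_p = min(
        sm.OLS(y, sm.add_constant(X[:, j])).fit().pvalues[1] for j in range(k)
    )
    hits += best_p < 0.05

# Each test alone has a 5% false-positive rate, but the minimum of 20 roughly
# independent p-values falls below .05 about 1 - 0.95**20, i.e. ~64% of the time.
print(f"fished 'significance' rate: {hits / n_sims:.2f}")
```

The software dutifully prints p < .05 for the winning model in most runs, even though nothing whatsoever is going on.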
Second, even if you pick out only variables that really do need to be controlled for, and none that don't, you still have the problem of endogeneity, because you have no way of ensuring that you have controlled for all such variables. Any confounder you leave out sits in the error term, and since it is correlated with your predictor, your estimates stay biased no matter how large the sample. (In this, I am assuming, based on your question, that you are conducting an observational study with secondary data.)
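A minimal sketch of why an omitted confounder is fatal (again simulated data, all names hypothetical): an unmeasured u drives both x and y, and the naive regression finds an 'effect' of x that no sample size can wash out.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 40_000                        # a big sample does not help

u = rng.normal(size=n)            # unobserved confounder
x = u + rng.normal(size=n)        # x is partly driven by u
y = u + rng.normal(size=n)        # y is driven by u; true effect of x is zero

naive = sm.OLS(y, sm.add_constant(x)).fit()
full = sm.OLS(y, sm.add_constant(np.column_stack([x, u]))).fit()

print(f"omitting u:        {naive.params[1]:.3f}")  # ~0.50, pure bias
print(f"controlling for u: {full.params[1]:.3f}")   # ~0.00
```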
This situation is very unfortunate and very common. With respect to the first issue, in general, my advice would be to pick out a single model to fit, based on information other than your data. Another approach is to split your data into several groups at random (with N=40k, I should think you have plenty), explore one subset, and test candidate models on a different subset, as sketched below. With respect to the second issue, an instrumental variables approach may be your best bet; there is a minimal sketch of that below as well.
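Here is what the random split might look like, a minimal sketch with placeholder arrays standing in for your actual data:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(size=40_000)          # stand-ins for your outcome and predictors
X = rng.normal(size=(40_000, 10))

idx = rng.permutation(len(y))
explore, confirm = idx[:20_000], idx[20_000:]

# Fish as freely as you like on (X[explore], y[explore]); then fit the single
# chosen model ONCE on (X[confirm], y[confirm]). Only the confirmation-sample
# p-values and estimates escape the selection problem described above.
```

And a minimal two-stage least squares sketch of the instrumental-variables idea, on simulated data (in real work you would use a dedicated routine such as IV2SLS from the linearmodels package, which also computes correct second-stage standard errors). The instrument z must move x but affect y only through x:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 40_000
u = rng.normal(size=n)                  # unobserved confounder
z = rng.normal(size=n)                  # instrument: independent of u
x = z + u + rng.normal(size=n)          # x is endogenous (correlated with u)
y = 2 * x + u + rng.normal(size=n)      # true causal effect of x is 2

# Stage 1: replace x with its projection onto the instrument.
x_hat = sm.OLS(x, sm.add_constant(z)).fit().fittedvalues
# Stage 2: regress y on the projected x.
iv = sm.OLS(y, sm.add_constant(x_hat)).fit()
ols = sm.OLS(y, sm.add_constant(x)).fit()

print(f"naive OLS: {ols.params[1]:.3f}")  # ~2.33, biased by u
print(f"2SLS:      {iv.params[1]:.3f}")   # ~2.00
```

The hard part, of course, is not the two regressions but finding an instrument that plausibly satisfies the exclusion restriction in your setting.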