The answers you have already gotten are excellent, but I'm going to offer a (hopefully) complementary one from the perspective of an Epidemiologist. I have three thoughts on this:
First, they don't. See also: all models are wrong, but some are useful. The goal is not to produce a single, definitive number that is taken as the "truth" of an underlying function. The goal is to produce an estimate of that function, with a quantification of the uncertainty around it, that is a reasonable and useful approximation of it.
This is especially true for large effect measures. The take-away message from a study that finds a relative risk of 3.0 isn't really different if the "true" relationship is 2.5 or 3.2. As @onestop mentioned, this does get harder with small effect estimates, because the difference between a relative risk of 0.9, 1.0, and 1.1 can be huge from a health and policy standpoint.
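To make the "estimate plus uncertainty" point concrete, here's a minimal sketch of how a relative risk and its confidence interval fall out of a 2x2 cohort table. All the counts are invented for illustration, and the Wald-type interval shown is just one standard large-sample approach:

```python
import math

# Hypothetical 2x2 cohort table (all counts invented for illustration):
#                 disease   no disease
# exposed           a = 30     b = 70
# unexposed         c = 10     d = 90
a, b, c, d = 30, 70, 10, 90

risk_exposed = a / (a + b)            # 0.30
risk_unexposed = c / (c + d)          # 0.10
rr = risk_exposed / risk_unexposed    # relative risk = 3.0

# Standard large-sample 95% CI, computed on the log scale
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
# -> RR = 3.00, 95% CI (1.55, 5.80)
```

The interval, not the point estimate, is the real take-away: this hypothetical study is compatible with anything from a modest effect to a very large one, and the substantive conclusion is the same across most of that range.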
Second, there's a process hidden in most Epidemiology papers: the actual model selection process. We tend to report the model we ended up with, not all the models we considered (because that would be tiresome, if nothing else). There is a slew of model-building steps, conceptual diagrams, diagnostics, fit statistics, sensitivity analyses, swearing at computers, and scribbling on whiteboards involved in the analysis of even small observational studies.
That's because, while you are making assumptions, many of them are assumptions you can check.
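As a concrete (and entirely hypothetical) sketch of that hidden process, here is what comparing two candidate models and checking one assumption might look like in Python with statsmodels. The data are simulated and the variable names are invented; in a real analysis the candidate set would come from a causal diagram for the question at hand, not from fit statistics alone:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Simulate a small, purely illustrative cohort
rng = np.random.default_rng(0)
n = 500
data = pd.DataFrame({
    "age": rng.normal(50, 10, n),
    "exposure": rng.integers(0, 2, n),
})
log_odds = -4 + 0.8 * data["exposure"] + 0.05 * data["age"]
data["outcome"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Two candidate models: crude and age-adjusted
crude = smf.logit("outcome ~ exposure", data=data).fit(disp=0)
adjusted = smf.logit("outcome ~ exposure + age", data=data).fit(disp=0)
print(f"AIC: crude {crude.aic:.1f}, adjusted {adjusted.aic:.1f}")

# One checkable assumption: is the age effect linear on the logit scale?
# Likelihood-ratio test against a model that adds a quadratic age term.
quadratic = smf.logit("outcome ~ exposure + age + I(age**2)",
                      data=data).fit(disp=0)
lr_stat = 2 * (quadratic.llf - adjusted.llf)
p_value = chi2.sf(lr_stat, df=1)
print(f"LR test for nonlinearity in age: p = {p_value:.2f}")
```

Convergence warnings, surprising coefficients, and checks like this are exactly the part of the process that rarely makes it into the final paper.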
Third, sometimes we don't. And then we go to conferences and argue with each other about it ;)
If you're interested in the nuts and bolts of Epidemiology as a field, and how we perform our research, the best place to start is probably Modern Epidemiology, 3rd Edition, by Rothman, Greenland, and Lash. It's a moderately technical and very good overview of how Epi research is conducted.