Assuming you want to estimate the mean of a variable with a given margin of error, you can use $E=z_{\alpha/2}\frac{\sigma}{\sqrt{n}}$, where $E$ is the margin of error, $z_{\alpha/2}$ is the standard normal quantile for your confidence level (1.96 for a 95% confidence interval), $\sigma$ is the standard deviation of the variable, and $n$ is the sample size. Solving for $n$ gives the sample size needed to hit a target margin of error: $n = \left(\frac{z_{\alpha/2}\,\sigma}{E}\right)^2$.
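Here's a minimal sketch of that calculation in Python; the function name and the example values ($\sigma = 15$, target $E = 2$) are my own illustrations, not from anything above:

```python
import math
from scipy.stats import norm

def required_sample_size(sigma, margin_of_error, confidence=0.95):
    """Smallest n such that z_{alpha/2} * sigma / sqrt(n) <= margin_of_error."""
    z = norm.ppf(1 - (1 - confidence) / 2)  # e.g. 1.96 for 95% confidence
    return math.ceil((z * sigma / margin_of_error) ** 2)

# e.g. sigma = 15, want the mean estimated within +/- 2 at 95% confidence
print(required_sample_size(15, 2))  # 217
```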
Of course you don't know $\sigma$, but you can take a small pilot sample (say $n=100$) and use the sample standard deviation as a first approximation.
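A quick sketch of how the pilot estimate plugs in; the simulated data here is purely hypothetical, standing in for your first 100 real observations:

```python
import math
import numpy as np

# Hypothetical pilot data; in practice, use your first ~100 real observations.
rng = np.random.default_rng(0)
pilot = rng.normal(loc=50, scale=15, size=100)

sigma_hat = pilot.std(ddof=1)                   # sample standard deviation
z = 1.96                                        # 95% confidence
n_needed = math.ceil((z * sigma_hat / 2) ** 2)  # target margin of error E = 2
print(sigma_hat, n_needed)
```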
In general, you can see from this formula that the margin of error decreases in proportion to $1/\sqrt{n}$: quadrupling the sample size only halves the margin of error (as the short loop below shows), so there are diminishing returns for large $n$. This "root-$n$" rate is common to anything that behaves like a mean (many parameters, including regression coefficients, are essentially means), and it all follows from the central limit theorem.
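To see the diminishing returns numerically (illustrative numbers only, with $\sigma = 15$ as before):

```python
z, sigma = 1.96, 15
# Each quadrupling of n only halves the margin of error E.
for n in (100, 400, 1600):
    print(f"n = {n:5d}  ->  E = {z * sigma / n ** 0.5:.2f}")
# n =   100  ->  E = 2.94
# n =   400  ->  E = 1.47
# n =  1600  ->  E = 0.73
```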
See http://stattrek.com/estimation/margin-of-error.aspx for more exposition.