At what point can a very large sample, as a proportion of the population, be treated as a census? For example, if your sample contains 90% of the population units, can one dispense with inferential statistics? What about 80%? And so on.
- Consider looking at this previous question and answer: http://stats.stackexchange.com/questions/14323/when-the-population-has-a-known-fixed-size-are-tables-for-the-t-statistic-wrong. This concept should generalize to any near-census test. – russellpierce Mar 14 '14 at 17:44
- I think it depends on how you define "population". By "population", if you mean a collection of observations that can never be sampled again (representing the universe of observations in its entirety), then yes, you could use a finite population correction factor to shrink the standard error on sample estimates as n approaches N. However, if your "population" can be considered a random sample from a much larger theoretical population of observations, then inferential statistics still apply without an FPC. – RobertF Mar 14 '14 at 17:47
- It depends. In the US there is a decennial political debate about whether the Bureau of the Census can be allowed to use statistical sampling procedures to make estimates, rather than attempting to perform a complete census. The argument against attempting a census is that the bias introduced by nonresponse and other missing information, even with a 99+% census, is too large to be ignored. In general some questions cannot be answered without a 100% census, such as "what is the largest value in the population?" – whuber Mar 14 '14 at 18:11
- @whuber, maybe we shouldn't be asking questions that can't be answered by sampling. – Aksakal Nov 19 '18 at 20:01
- Possible duplicate of [When the population has a known fixed size, are tables for the t statistic wrong?](https://stats.stackexchange.com/questions/14323/when-the-population-has-a-known-fixed-size-are-tables-for-the-t-statistic-wrong) – mkt Sep 11 '19 at 11:55