Suppose I have a population of exactly 100,000 voters, and I know they all voted for one of the candidates A, B, C, D, or E (e.g., the results of a random sample of 100 voters are A = 40%, B = 30%, C = 10%, D = 10%, E = 10%).
If I incrementally increase my sample size to 200 people, count the votes, then to 300, 400, etc., I get a more accurate prediction as the sample grows. However, since larger samples are more costly, what test/metric can I use on random samples to determine the smallest sample size that allows me to say:
"with this population size (voters) and this number of options (candidates), a sample of size x has a 95% probability of detecting the winner (e.g., A > B)"
Common sense tells me that if the final election result is A = 50%, B = 49%, the sample needed to find the winner will be much bigger than if it is A = 90%, B = 9%. But I think there must be a test that looks at how incremental sample sizes affect the results, and could tell me that, after a certain point, increasing the sample size is unlikely to change the reliability of my prediction.
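For example, a crude normal-approximation calculation (not an exact answer, and it only looks at the top two candidates) already shows how strongly the margin drives the required sample size. The formula below uses the multinomial variance of the difference of the top two sample proportions, $\operatorname{Var}(\hat p_A - \hat p_B) = \frac{p_A + p_B - (p_A - p_B)^2}{n}$, plus a finite population correction; the 95% figure is a one-sided detection probability:

```python
from math import ceil
from scipy.stats import norm

def required_sample_size(p_a, p_b, power=0.95, N=100_000):
    """Rough normal-approximation sample size so that the sample leader
    matches the true leader (A over B) with the given probability."""
    z = norm.ppf(power)                  # ~1.645 for a 95% one-sided detection probability
    margin = p_a - p_b
    var_unit = p_a + p_b - margin**2     # variance of (p_hat_A - p_hat_B) at n = 1
    n = z**2 * var_unit / margin**2      # infinite-population sample size
    n_fpc = n / (1 + (n - 1) / N)        # finite population correction for N voters
    return ceil(n_fpc)

print(required_sample_size(0.50, 0.49))  # tight race: tens of thousands of voters
print(required_sample_size(0.90, 0.09))  # landslide: a handful (approximation is crude here)
```

But this again assumes I already know the true margin, which is exactly what I don't know; I want a stopping rule I can apply to the incremental samples themselves.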