Traditional power analysis is situated within the discovery (or lack thereof) of population parameters based on statistical significance at some alpha, typically .05.
You are correct in saying that you don't really have an alternative hypothesis against which you are testing, so it isn't as straightforward to do a power analysis as in other situations when you are asking, "If there is an effect here, how often am I going to capture it?"
You are onto something in the latter part of your question when you say:

> Are there any other tests or metrics that look into how incremental sample sizes impact variance accuracy? This might be used to inform me that after a certain point increasing the sample size is unlikely to change the reliability of my prediction.
Instead of a power analysis, which plans sample size around obtaining a "significant" p-value, you are interested in the precision of a parameter estimate.
There's a good paper that examines sample size planning in exactly the way you are looking at it: Schönbrodt, F. D., & Perugini, M. (2013). At what sample size do correlations stabilize? Journal of Research in Personality, 47, 609-612. doi:10.1016/j.jrp.2013.05.009 (.pdf here: https://osf.io/5u6hv/).
They are looking at correlations, whereas you are interested in proportions, but the idea is the same. You could take a similar Monte Carlo approach: simulate data at different true proportions (e.g., A = 40%, A = 75%, etc.) across a range of sample sizes and watch for the point where the standard errors start to plateau. That gives you the sample size at which you hit diminishing returns in precision. So I think one way to get at your question is to frame it as a search for precision, not power.
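To make that concrete, here is a minimal sketch of the simulation (the true proportions, sample sizes, and number of replications are illustrative choices, not anything prescribed by the paper): for each candidate sample size, draw many Bernoulli samples and take the standard deviation of the estimated proportions as the empirical standard error.

```python
import random
import statistics

def empirical_se(p, n, reps=2000, seed=1):
    """Empirical standard error of a sample proportion:
    simulate `reps` samples of size n from Bernoulli(p) and
    take the SD of the estimated proportions across samples."""
    rng = random.Random(seed)
    estimates = [sum(rng.random() < p for _ in range(n)) / n
                 for _ in range(reps)]
    return statistics.stdev(estimates)

# Sweep sample sizes for a couple of hypothetical true proportions
# and look for where the SE curve flattens out.
for p in (0.40, 0.75):
    print(f"true p = {p}")
    for n in (50, 100, 250, 500, 1000, 2000):
        print(f"  n = {n:>5}: SE ~ {empirical_se(p, n):.4f}")
```

Because the SE of a proportion shrinks like 1/sqrt(n), doubling the sample size buys less and less precision as n grows; the printed table makes that plateau visible, and you can pick the n where further gains stop mattering for your purposes.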
In traditional power analyses (which could test a hypothesis like A < B), we don't really know the effect size, so we often (a) guess or (b) calculate power across many different effect sizes. This gives us a sense of the range of effect sizes and what our power would be for each. You could also compute power only for the smallest effect size you care about or, if you want to be safe, at the lower bound of previous research (http://journals.sagepub.com/doi/full/10.1177/1745691614528519).