I saw the following statistic on TV a few years back:
53% of voters are going to vote for Mitt Romney over Barack Obama (Error: 3%, sample size: 300, survey conducted via phone)
After seeing this, I wrote the following in my notes:
Let's say that a politician is polling at 53%, and the "error" (standard error) is 3%. Does this mean that the politician is actually polling between 50% and 56%? No. It actually means that 95% of the votes fall between 47% and 59% (because a span of ±2 standard deviations contains roughly 95% of observations), and only about 68% of the votes fall between 50% and 56% (because a span of ±1 standard deviation contains roughly 68% of observations).
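As a quick sanity check on the coverage figures in that note, here is a small Python sketch, assuming (on my part) that the sampling distribution is approximately normal and that the reported 3% is one standard error:

```python
from scipy.stats import norm

p_hat = 0.53  # reported poll proportion
se = 0.03     # assuming the reported "error" is one standard error

# Coverage of +/- 1 and +/- 2 standard deviations under a normal distribution
for k in (1, 2):
    coverage = norm.cdf(k) - norm.cdf(-k)
    lo, hi = p_hat - k * se, p_hat + k * se
    print(f"+/- {k} SD: {coverage:.1%} coverage, interval {lo:.0%} to {hi:.0%}")
    # +/- 1 SD: 68.3% coverage, interval 50% to 56%
    # +/- 2 SD: 95.4% coverage, interval 47% to 59%
```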
Questions:
1. Did I interpret it correctly? If not, what is the correct (or a better) interpretation?
2. I understand how the standard error is calculated in a linear regression. But how is the error calculated in a situation like this, where one simply asks 300 people whether they're going to vote for A or B? (My own guess is sketched below.)
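For question 2, my guess (an assumption on my part, not something the pollster stated) is that it is the standard error of a binomial proportion, sqrt(p(1-p)/n). With p = 0.53 and n = 300 this comes out to about 2.9%, close to the reported 3%:

```python
import math

p_hat = 0.53  # reported proportion favoring candidate A
n = 300       # sample size

# Standard error of a binomial proportion: sqrt(p * (1 - p) / n)
se = math.sqrt(p_hat * (1 - p_hat) / n)
print(f"Estimated standard error: {se:.1%}")  # ~2.9%, close to the reported 3%
```

If the reported 3% were instead a 95% margin of error, it would correspond to roughly 1.96 × 2.9% ≈ 5.7% for n = 300, so I suspect the TV figure is one standard error rather than a full margin of error.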
(Please feel free to improve my tags.)