It helps to disambiguate the meaning of "accuracy" more precisely, as this Reddit comment does. (The comment contains a typo: "1485" in "out of 1485 people who test positive" ought to be "1495".) I rewrote it with whole numbers (and a 1% disease rate rather than 0.5%).
To understand the theorem, you need to understand the vocabulary. "99% accurate" by itself doesn't tell us how the test behaves on diseased versus healthy people. We ought to use the following terms:
Sensitivity - the probability that the test is positive, given that you have the disease.
Specificity - the probability that the test is negative, given that you lack the disease.
Positive predictive value - the probability that you have the disease, given that you test positive.
Negative predictive value - the probability that you lack the disease, given that you test negative.
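Equivalently, writing TP, FP, TN, FN for true/false positives and negatives (my notation, not in the original comment), the four quantities are:

$$\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}$$

$$\text{PPV} = \frac{TP}{TP + FP}, \qquad \text{NPV} = \frac{TN}{TN + FN}$$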
Our population of 100,000 people has a 1% disease rate. So $\color{springgreen}{1000}$ people have the disease, and $\color{forestgreen}{99,000}$ don't.
We introduce a test that is 98% sensitive and 99% specific. It will correctly identify $\color{deepskyblue}{980}$ of 1000 people with the disease and $\color{red}{98,010}$ of 99,000 without the disease. It will incorrectly claim $\color{red}{20}$ people ($ = 1000 - 980$) with the disease don't have it, and $\color{deepskyblue}{990}$ people ($= 99,000 - 98,010$) without the disease have it.
So out of $\color{deepskyblue}{1970 \; (= 980 + 990)}$ people who test positive, only 980 have the disease. Thus, our positive predictive value is $\dfrac{\color{deepskyblue}{980}}{\color{deepskyblue}{1970}} = 49.75\%$.
Out of $\color{red}{98,030 \; (= 98,010 + 20)}$ who test negative, $\color{red}{98,010}$ do not have the disease. Thus, our negative predictive value is $\dfrac{\color{red}{98,010}}{\color{red}{98,030}} = 99.98\%$.
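The arithmetic above can be reproduced in a few lines of Python (a sketch with my own variable names, assuming the rates are exact):

```python
# Worked Bayes' theorem example: disease testing with whole-number counts.
population = 100_000
disease_rate = 0.01
sensitivity = 0.98   # P(test positive | diseased)
specificity = 0.99   # P(test negative | healthy)

diseased = round(population * disease_rate)       # 1,000
healthy = population - diseased                   # 99,000

true_positives = round(diseased * sensitivity)    # 980
false_negatives = diseased - true_positives       # 20
true_negatives = round(healthy * specificity)     # 98,010
false_positives = healthy - true_negatives        # 990

# PPV: of everyone who tests positive, what fraction is actually diseased?
ppv = true_positives / (true_positives + false_positives)
# NPV: of everyone who tests negative, what fraction is actually healthy?
npv = true_negatives / (true_negatives + false_negatives)

print(f"PPV: {ppv:.2%}")   # 49.75%
print(f"NPV: {npv:.2%}")   # 99.98%
```

Despite the test's high sensitivity and specificity, the low disease rate drags the positive predictive value down to about 50% -- exactly the counterintuitive result Bayes' theorem explains.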
In this case, the test is first-rate for determining who lacks the disease. The $\color{deepskyblue}{1970}$ who test positive can be given a follow-up test to confirm whether they actually have the disease, whereas those who test negative need no further tests.