"14 people died already so we shouldn't see any more" is simply a (very slightly) more sophisticated version of the gambler's fallacy.
This is essentially a confusion between a marginal and a conditional probability. The probability that you see more than 14 when you haven't seen any yet may be low, but the probability that you see at least one more when you already have 14 may be quite high.
Consider (as a rough model) that ten thousand people each have $p=1/1000$ of dying from this disease in one year (occurring randomly over the year), so that in a year you expect 10 people to die with the condition (there's about a 92% chance that you'll see no more than 14).
Now, however, imagine that in the first seven months of the year you got 14 deaths. You should not expect any more deaths, right? Well, no, at this point you should expect about 4 additional deaths in the remaining 5 months.
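Both numbers are easy to verify. A minimal sketch in Python, approximating the binomial count by a Poisson with mean 10 (which is a very close approximation for the rough model above):

```python
from math import exp, factorial

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(exp(-lam) * lam**i / factorial(i) for i in range(k + 1))

lam = 10000 * 0.001          # expected deaths in a full year

# Marginal probability: no more than 14 deaths in the whole year
print(poisson_cdf(14, lam))  # ~0.92

# Conditional on the first 7 months, the remaining 5 months are
# independent of what has already happened, so the expected number
# of *additional* deaths is just the rate times the remaining time:
print(lam * 5 / 12)          # ~4.2 more deaths expected
```

The key point the code makes concrete: the deaths already observed don't "use up" any probability; the remaining 5 months carry their own expected count regardless of what came before.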
Indeed, having so many in 7 months should also make you question whether the p=1/1000 is correct (or whether the size of your target population might have been off), and in that case, the expected number of additional deaths may be higher still.
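One quick way to quantify that suspicion (a sketch, not the only possible check): under $p=1/1000$, deaths in the first 7 months are roughly Poisson with mean $10 \times 7/12 \approx 5.8$, and seeing 14 or more would be quite surprising:

```python
from math import exp, factorial

lam7 = 10000 * 0.001 * 7 / 12   # expected deaths in 7 months if p = 1/1000

# P(X >= 14) = 1 - P(X <= 13) for X ~ Poisson(lam7)
tail = 1 - sum(exp(-lam7) * lam7**k / factorial(k) for k in range(14))
print(tail)  # well under 1%: 14 deaths this early casts doubt on p = 1/1000
```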
If you're considering the risk to a single person, and you knew nothing about them other than that they were in the target population, their probability of dying from that disease in the remaining 5 months would be about $0.001 \times 5/12$ (assuming $p$ was correct).
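For concreteness, under the same assumptions:

```python
# Individual risk over the remaining 5 months, assuming p = 1/1000 is correct
p_remaining = 0.001 * 5 / 12
print(p_remaining)  # about 0.00042, i.e. roughly 1 in 2400
```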
(This discussion requires an odd assumption, essentially that death occurs suddenly; but rather than belabor the point over the fact that dying might take several months or even years, let alone that some people go into remission, we could cover the issue more simply by assuming that instead of death we were discussing diagnosis.)
However, in your discussion you mention "I have a malignant disseminated cancer" ... that means you aren't a random person from the population. Such a diagnosis would certainly impact the chances that you die from cancer ($P(\text{dies from cancer}\mid\text{aged }50) \neq P(\text{dies from cancer}\mid\text{aged }50\text{ and diagnosed with cancer})$).
Your second example suggests the same error. A priori you expect 1% to have the rare condition, but if you eliminated 90% of them by randomly selecting those that are tested, each of the remaining 20 would still have a 1% chance of having the condition. 1% of 20 is 0.2, and there's about an 82% chance that none of them have the rare condition ($0.99^{20} \approx 0.82$).
[If those that were eliminated were not chosen randomly (e.g. only the most healthy-looking were tested) then the calculation doesn't work; in that case you may indeed expect to have a good deal nearer to 2 in the remainder.]
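The second example's numbers are just as easy to check; a sketch under the random-selection assumption (i.e. the 20 untested people are exchangeable with those tested):

```python
n, p = 20, 0.01        # 20 untested people, each with an unchanged 1% chance

expected = n * p
print(expected)        # 0.2 cases expected among the remainder

p_none = (1 - p) ** n
print(p_none)          # ~0.82: probability that none of the 20 has it
```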