It looks like there are really two questions here:
- How does the harmonic mean "weigh the smaller number more heavily" than the arithmetic mean?
- Why is this a good idea?
My thoughts:
Regarding the first: I would say that is an unfortunate choice of words. It's just a reformulation of the $\mathrm{HM} \leq \mathrm{GM} \leq \mathrm{AM}$ inequality; there is no "weighting" involved. (All means can be weighted, but that's a separate question.)
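For instance, with illustrative values $p = 0.9$ for precision and $r = 0.1$ for recall (numbers chosen by me, purely for illustration), the three means are
$$\mathrm{AM} = \frac{p + r}{2} = 0.5, \qquad \mathrm{GM} = \sqrt{pr} = 0.3, \qquad \mathrm{HM} = \frac{2pr}{p + r} = 0.18,$$
so the harmonic mean ends up closest to the smaller of the two simply as a consequence of the inequality, not because of any explicit weights.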
Regarding the second: the book gives no reason, and none comes to mind immediately. It may be a kind of "precautionary principle": we care about both precision and recall, but in summarizing them we would rather be conservative and stay closer to the smaller of the two, so we are not led into overoptimism when the larger one is "really" large. (Note that "weighting the smaller of the two" refers to precision and recall themselves, not to their reciprocals, which is what you write at the end of your question.)
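If it helps to see this numerically, here is a minimal sketch in Python (the helper `f_beta` and the input values are mine, purely for illustration; $\beta = 1$ gives the usual $F_1$):

```python
def f_beta(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall; beta=1 gives F1."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With one strong and one weak component, the score stays near the weak one:
print(f_beta(0.9, 0.1))   # 0.18 -- close to the smaller value, 0.1
print((0.9 + 0.1) / 2)    # 0.50 -- the arithmetic mean, for comparison
```

Raising `beta` above 1 shifts the weighting toward recall; lowering it below 1 shifts it toward precision.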
Finally, in my opinion we should not care about any $F_\beta$ score at all; see "Why is accuracy not the best measure for assessing classification models?". The same problems apply to sensitivity and specificity, and indeed to all evaluation metrics that rely on hard classifications, and therefore to all $F_\beta$ scores. Instead, use probabilistic classifications, and evaluate these using proper scoring rules.
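As a sketch of what that evaluation looks like in practice, assuming scikit-learn is available (the labels and predicted probabilities below are made up):

```python
from sklearn.metrics import brier_score_loss, log_loss

# Made-up true labels and predicted class-1 probabilities
y_true = [0, 0, 1, 1, 1]
y_prob = [0.1, 0.4, 0.35, 0.8, 0.9]

# Proper scoring rules evaluate the predicted probabilities directly,
# with no threshold turning them into hard classifications; lower is better.
print(brier_score_loss(y_true, y_prob))  # Brier score (quadratic scoring rule)
print(log_loss(y_true, y_prob))          # log loss (logarithmic scoring rule)
```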