My cursory reading of this section gives me the impression that there is some critical conflation of "degrees of freedom" with "multiplicity".
These are similar only insofar as ignoring degrees-of-freedom corrections, like ignoring multiple comparisons, leads to anti-conservative inference: you are more likely to declare something significant when it is not. That is a weak connection at best, because there are virtually infinitely many ways poor statistical practice can produce anti-conservative inference, so each principle should be discussed in its own right.
Degrees of freedom are corrections to the null sampling distribution of an inferential statistic for a single comparison. These corrections arise from having estimated other parameters, though the estimates themselves are inconsequential. For instance, a t-test compares the means of two groups. The mean difference is a useful summary measure; without its standard error, however, it carries no inferential content. Thus, we base our inference on the sampling distribution of the Wald statistic $T = \hat{d} / se(\hat{d})$. Only this standardized difference allows the same methods to compare concentrations of lead in ground water sources (hopefully very small) and the masses of stars (hopefully very large). The Wald statistic requires a finite-sample correction via the degrees of freedom. I should also add that, to the best of my knowledge, degrees of freedom are never "analyzed" but rather apply to specific analyses, as in the t-test example.
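To make the point concrete, a minimal sketch (with simulated data, assuming `scipy` is available) shows the Wald statistic for a pooled two-sample t-test and how the finite-sample critical value from the $t$ distribution exceeds the normal one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=10)  # hypothetical group 1
b = rng.normal(loc=0.5, scale=1.0, size=10)  # hypothetical group 2

# Wald statistic T = d_hat / se(d_hat), pooled-variance form
d_hat = a.mean() - b.mean()
sp2 = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (
    len(a) + len(b) - 2
)
se = np.sqrt(sp2 * (1 / len(a) + 1 / len(b)))
t_stat = d_hat / se

# Degrees of freedom: two means were estimated along the way
df = len(a) + len(b) - 2

# The finite-sample (t) critical value is strictly larger than the
# asymptotic normal one, so skipping the correction is anti-conservative
t_crit = stats.t.ppf(0.975, df)   # about 2.10 at df = 18
z_crit = stats.norm.ppf(0.975)    # about 1.96
```

The hand-computed `t_stat` matches `scipy.stats.ttest_ind(a, b)` with its default pooled-variance setting; the code is purely illustrative of the df correction, not a recommended analysis pipeline.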
Multiplicity, by contrast, is about spending your alpha level. The authors seem to suggest, incorrectly, that one may apply inference at the usual 0.05 level and then perform subsequent tests at a more stringent level. Technically, to preserve a 0.05 family-wise error rate, all subsequent testing must be performed at the 0 level if one inferential comparison has already been made at the 0.05 level. Bonferroni provides a means of splitting a threshold evenly across any $k$ comparisons ($k \geq 2$), but the same threshold need not be used for all comparisons. In clinical trials, a "first glance" at the data may spend a very conservative slice of alpha, since there is little optimism that sufficient information growth has occurred; subsequent comparisons may be more "targeted," using a larger proportion of the total alpha level.
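A small sketch of this unequal alpha-spending idea (the p-values and allocation below are hypothetical, chosen only for illustration): any per-test thresholds summing to the total alpha control the family-wise error rate by the Bonferroni union bound.

```python
# Total alpha to be spent across the family of tests
total_alpha = 0.05

# Unequal allocation: a very conservative "first glance", then more
# generous, targeted later comparisons (hypothetical allocation;
# the union bound only requires that the entries sum to total_alpha)
alphas = [0.001, 0.019, 0.030]

# Hypothetical observed p-values for the three comparisons
p_values = [0.004, 0.012, 0.028]

# Compare each p-value against its own spent threshold
decisions = [p < a for p, a in zip(p_values, alphas)]
```

Note that the first comparison is not significant at its stringent threshold even though its p-value is the smallest; that is the cost of spending little alpha early.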
Estimation and inference are fundamentally different things. It is critically important to be careful and considerate about why and when a p-value should be presented in the results, if at all. Corrections for degrees of freedom ensure that even a single p-value is valid. But if more than one hypothesis is tested, adjustment for multiple comparisons is not achieved by any correction to the degrees of freedom; rather, it uses different significance thresholds so that the family-wise error rate or false discovery rate is controlled.
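The distinction between the two error rates can be sketched with the standard procedures (a pure-Python illustration on hypothetical p-values, not a substitute for a vetted library routine):

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H_i when p_i < alpha / k: controls the family-wise error rate."""
    k = len(pvals)
    return [p < alpha / k for p in pvals]


def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up procedure controlling the false discovery rate: find the
    largest rank r with p_(r) <= r * alpha / k and reject the r smallest."""
    k = len(pvals)
    order = sorted(range(k), key=lambda i: pvals[i])
    n_reject = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / k:
            n_reject = rank
    reject = [False] * k
    for i in order[:n_reject]:
        reject[i] = True
    return reject


# Hypothetical p-values from four comparisons
pvals = [0.001, 0.012, 0.030, 0.20]
```

On these values, FDR control (Benjamini-Hochberg) rejects more hypotheses than FWER control (Bonferroni), which is the usual trade-off: both adjust the thresholds, neither touches the degrees of freedom.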