An often-stated rule of thumb for the degrees of freedom in a chi-square goodness-of-fit test (based on the Pearson chi-square test statistic) is the number of categories, minus one, minus the number of parameters estimated from the data.
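For concreteness, here is a minimal sketch of how I understand that rule, fitting a Poisson to some made-up binned counts (the data, the way lambda is estimated, and the tail-bin handling are just illustrative assumptions on my part):

```python
import numpy as np
from scipy import stats

# Made-up counts of 0, 1, 2, 3, 4, and 5+ events
observed = np.array([35, 40, 28, 15, 7, 5])
n = observed.sum()
values = np.arange(len(observed))

# One parameter (lambda) estimated from the binned data (a simplification)
lam = (values * observed).sum() / n

# Expected counts under the fitted Poisson; last bin collects the upper tail
p = stats.poisson.pmf(values, lam)
p[-1] = 1 - stats.poisson.cdf(values[-2], lam)
expected = n * p

# Pearson chi-square statistic
chi2_stat = ((observed - expected) ** 2 / expected).sum()

# Rule of thumb: df = (number of categories) - 1 - (parameters estimated)
k, m = len(observed), 1
df = k - 1 - m
p_value = stats.chi2.sf(chi2_stat, df)
print(chi2_stat, df, p_value)

# Equivalent via SciPy's built-in test, where ddof counts the estimated parameters:
# stats.chisquare(observed, f_exp=expected, ddof=m)
```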
I understand that every time we estimate something from the sample, uncertainty is added and a degree of freedom is lost. What I do not understand is why the degrees of freedom are adjusted downward when the end result is a smaller critical value (and thus a less conservative test). I would think that adding uncertainty should make the test more conservative, not less. This seems to be the opposite of how inference works with other distributions, such as the t-distribution, where the tails get fatter as the degrees of freedom decrease (so a larger t-score is needed to reject the null).
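To make the contrast I mean concrete, here is a quick SciPy comparison of upper-tail chi-square critical values and two-sided t critical values at alpha = 0.05 (the df values are arbitrary):

```python
from scipy import stats

alpha = 0.05
for df in (1, 2, 5, 10):
    chi2_crit = stats.chi2.ppf(1 - alpha, df)  # upper-tail chi-square critical value
    t_crit = stats.t.ppf(1 - alpha / 2, df)    # two-sided t critical value
    print(f"df={df:2d}  chi2={chi2_crit:6.2f}  t={t_crit:6.2f}")
```

The chi-square critical value shrinks as the degrees of freedom drop (3.84 at df = 1 versus 18.31 at df = 10), while the t critical value grows (12.71 at df = 1 versus 2.23 at df = 10), which is exactly the opposite behaviour that puzzles me.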