Here is an argument for why we divide by degrees of freedom, in a simple case. Let $X_1, X_2, \ldots, X_n$ be independent and identically distributed with mean $\mu$ and variance $\sigma^2$. Consider the sample variance as an estimator for $\sigma^2$.
$$S^2 = \frac{1}{n-1}\sum_{i=1}^n(X_i - \bar{X})^2 = \frac{1}{n-1}\left(\sum_{i=1}^nX_i^2 - n\bar{X}^2\right)$$
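The algebraic identity between the two forms can be checked numerically. A minimal sketch (the sample values and distribution parameters here are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=10)  # arbitrary illustrative sample
n = len(x)

# Left form: sum of squared deviations from the sample mean, over n-1
lhs = np.sum((x - x.mean()) ** 2) / (n - 1)

# Right form: sum of squares minus n times the squared sample mean, over n-1
rhs = (np.sum(x ** 2) - n * x.mean() ** 2) / (n - 1)

print(np.isclose(lhs, rhs))                  # the two forms agree
print(np.isclose(lhs, np.var(x, ddof=1)))    # matches NumPy's sample variance
```

Note that NumPy's `np.var` divides by $n$ by default; passing `ddof=1` gives the $n-1$ divisor used here.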
We can show that $S^2$ is unbiased for $\sigma^2$.
\begin{align*}
E(S^2) &= \frac{1}{n-1}\left(E\left(\sum X_i^2\right) - nE(\bar X^2)\right) \\
&= \frac{1}{n-1}(nE(X_i^2) - nE(\bar X^2)) \\
&= \frac{1}{n-1}(n(\mu^2 + \sigma^2) - n(\mu^2 + \sigma^2/n)) \\
&= \frac{1}{n-1}(n\sigma^2 - \sigma^2) = \sigma^2
\end{align*}
Note that if we had divided by anything other than $n-1$, this estimator would be biased. In fact, when the $X_i$ are normally distributed, $S^2$ is the uniformly minimum-variance unbiased estimator (UMVUE) of $\sigma^2$.
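The unbiasedness calculation above can be checked by simulation. A minimal sketch, assuming normal data with illustrative values $\mu = 3$, $\sigma^2 = 4$, and a deliberately small $n$ so the bias of the $1/n$ divisor is visible:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma2 = 3.0, 4.0       # illustrative true mean and variance
n, trials = 5, 200_000      # small n makes the 1/n bias easy to see

samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
ss = np.sum((samples - samples.mean(axis=1, keepdims=True)) ** 2, axis=1)

unbiased = ss / (n - 1)     # S^2, dividing by degrees of freedom
biased = ss / n             # dividing by n instead

print(unbiased.mean())      # ≈ 4.0, matching sigma^2
print(biased.mean())        # ≈ (n-1)/n * 4.0 = 3.2, systematically too small
```

Averaged over many trials, the $1/(n-1)$ version centers on $\sigma^2$ while the $1/n$ version is shrunk by the factor $(n-1)/n$, exactly as the expectation calculation predicts.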
On the other hand, suppose $\mu$ is known (rarely the case in practice). Then the estimator
$$\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n(X_i-\mu)^2$$
is unbiased, even though we divide by $n$. This is justified because no degree of freedom is lost: we do not have to estimate $\mu$ with $\bar{X}$.
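The known-$\mu$ case can be verified by simulation as well. A minimal sketch with the same illustrative values $\mu = 3$, $\sigma^2 = 4$ as assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma2 = 3.0, 4.0       # illustrative known mean and true variance
n, trials = 5, 200_000

samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))

# Deviations measured from the known mu, divided by n (not n-1)
sigma2_hat = np.mean((samples - mu) ** 2, axis=1)

print(sigma2_hat.mean())    # ≈ 4.0: unbiased even with the 1/n divisor
```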
This is just one simple case where dividing by degrees of freedom makes sense, but the justification works for other statistics as well.