I understand degrees of freedom as the number of quantities that can vary independently. Typically, when counting degrees of freedom, if you have n terms you subtract the number of terms that aren't independent (the classic example being the n-1 degrees of freedom of the sample variance, where the n deviations from the mean must sum to zero). My problem is that I can't see how the SSR degrees of freedom (simple linear regression) come out to just "1" under that way of thinking. After a little algebra, I can rewrite it as:
$SSR=\sum_{i=1}^{n}\left ( \hat{y}_i-\bar{y} \right )^2=\sum_{i=1}^{n}\left[\hat{\beta}_0+\hat{\beta}_1x_i- \left(\hat{\beta}_0+\hat{\beta}_1\bar{x} \right) \right]^2=\hat{\beta}^2_1\sum_{i=1}^{n}\left(x_i-\bar{x} \right )^2$
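As a sanity check on that algebra, here is a quick numerical verification in numpy (the toy data and variable names are my own); computing SSR directly and via the right-hand side gives the same number:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)   # arbitrary true line plus noise

# Ordinary least squares fit for simple linear regression
xbar, ybar = x.mean(), y.mean()
beta1_hat = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
beta0_hat = ybar - beta1_hat * xbar
yhat = beta0_hat + beta1_hat * x

# SSR computed directly, and via the identity above
ssr_direct = np.sum((yhat - ybar) ** 2)
ssr_identity = beta1_hat ** 2 * np.sum((x - xbar) ** 2)
print(np.isclose(ssr_direct, ssr_identity))   # True
```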
I do see that $\hat{\beta}_1$ is the only estimator left on the right-hand side (hence one degree of freedom?). However, I'd like to understand this the same way I understand the n-1 degrees of freedom of the sample variance, namely "add up n terms, then subtract the number of terms that aren't independent," and somehow arrive at 1.
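For reference, this is the kind of counting I mean for the sample variance, shown as a small numpy sketch (the data are just made up): the n deviations from the mean satisfy one linear constraint, so only n-1 of them are free.

```python
import numpy as np

x = np.array([1.2, -0.7, 3.4, 0.1, 2.8])   # n = 5 made-up observations
dev = x - x.mean()                          # the n terms entering the sample variance

print(np.isclose(dev.sum(), 0.0))            # True: one linear constraint on n terms
print(np.isclose(dev[-1], -dev[:-1].sum()))  # True: the last deviation is fixed by the other n-1
```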