I have been doing some reading on standard errors and the derivation of the standard error formula. Unfortunately I have hit a snag in my understanding of variance. Specifically, I am having a hard time understanding the difference between these two notations:
First, from the question titled "General method for deriving the standard error", I see the following formula, which seems to jibe with the derivation found on Wikipedia (Standard Error / Derivations):
$$ Var\left(\sum\limits_{i=1}^n X_i\right) = \sum\limits_{i=1}^n Var(X_i) = \sum\limits_{i=1}^n \sigma^2 = n\sigma^2 $$

(where, as I understand it, the first equality assumes the $X_i$ are independent, each with the same variance $\sigma^2$).
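If it helps to see this numerically, here is a quick simulation I put together (the distribution, parameters, and variable names are just my own choices for illustration): it draws $n$ i.i.d. variables many times and checks that the variance of their sum comes out close to $n\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, reps = 10, 2.0, 100_000

# Each row is one realization of X_1, ..., X_n, where each X_i has
# variance sigma^2 and the X_i are independent of each other.
samples = rng.normal(loc=0.0, scale=sigma, size=(reps, n))
sums = samples.sum(axis=1)

print(np.var(sums))   # empirical Var(sum of the X_i), roughly 40
print(n * sigma**2)   # theoretical n * sigma^2 = 40.0
```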
Second, the other formula I am seeing is the standard (population) variance formula:
$$ Var(X) = \frac{1}{n} \sum\limits_{i=1}^n(x_i - \mu)^2 $$
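Again as a sketch of my own (the sample values here are made up), this is how I read the second formula: the $x_i$ are observed values of a single variable $X$, and $\mu$ is their mean.

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
mu = x.mean()

# Population variance per the formula above: the average squared
# deviation of the x_i from mu.
var_by_formula = np.sum((x - mu) ** 2) / len(x)

print(var_by_formula)  # 4.0
print(np.var(x))       # also 4.0; numpy's default var() uses the 1/n form
```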
Can someone help me understand the difference between these two equations? The point where I get stuck is the random variable being operated on: in the first equation we see an indexed variable $X_i$, so why do we not see that in the second equation? Ultimately, in the context of the standard error of the mean, how do these two definitions relate?