I am currently reading over some notes on regression. The notes state that the least squares approach chooses $\hat{\beta}_0$ and $\hat{\beta}_1$ to minimise the residual sum of squares (RSS). It then says the following:
The minimising values can be shown to be
$$\hat{\beta}_1 = \dfrac{\sum\limits_{i = 1}^n (x_i - \overline{x})(y_i - \overline{y})}{\sum\limits_{i = 1}^n (x_i - \overline{x})^2}, \\ \hat{\beta}_0 = \overline{y} - \hat{\beta}_1 \overline{x},$$ where $\overline{y} = \dfrac{1}{n} \sum\limits_{i = 1}^n y_i$ and $\overline{x} = \dfrac{1}{n} \sum\limits_{i = 1}^n x_i$ are the sample means.
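For concreteness, here is a small numerical sketch checking these closed-form estimates against a library fit (this assumes NumPy is available; the toy data and variable names are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative only): y = 2 + 3x + noise
n = 50
x = rng.uniform(0, 10, size=n)
y = 2.0 + 3.0 * x + rng.normal(scale=1.5, size=n)

x_bar, y_bar = x.mean(), y.mean()

# Closed-form least squares estimates from the notes
beta1_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
beta0_hat = y_bar - beta1_hat * x_bar

# Compare with a library fit; np.polyfit returns [slope, intercept]
slope, intercept = np.polyfit(x, y, deg=1)
print(beta1_hat, slope)      # these should agree
print(beta0_hat, intercept)  # these should agree
```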
The standard error of an estimator reflects how it varies under repeated sampling. We have
$$\text{se}\left[ \hat{\beta}_1 \right]^2 = \dfrac{\sigma^2}{\sum\limits_{i = 1}^n (x_i - \overline{x})^2}, \\ \text{se}\left[ \hat{\beta}_0 \right]^2 = \sigma^2 \left\{ \dfrac{1}{n} + \dfrac{\overline{x}^2}{\sum\limits_{i = 1}^n (x_i - \overline{x})^2} \right\},$$ where $\sigma^2 = \text{Var}[\epsilon]$.
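To see what "varies under repeated sampling" means in practice, one can fix the design $x_1, \dots, x_n$, repeatedly redraw the noise, refit, and compare the empirical spread of $\hat{\beta}_1$ with the first formula above. A minimal simulation sketch (again assuming NumPy; all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed design and true parameters (illustrative values)
n, beta0, beta1, sigma = 50, 2.0, 3.0, 1.5
x = rng.uniform(0, 10, size=n)
x_bar = x.mean()
sxx = np.sum((x - x_bar) ** 2)

# Repeatedly redraw the noise, refit, and collect beta1_hat
reps = 10_000
beta1_hats = np.empty(reps)
for r in range(reps):
    y = beta0 + beta1 * x + rng.normal(scale=sigma, size=n)
    beta1_hats[r] = np.sum((x - x_bar) * (y - y.mean())) / sxx

print(beta1_hats.std())      # empirical sd of beta1_hat under repeated sampling
print(sigma / np.sqrt(sxx))  # the formula se[beta1_hat] from the notes
```

A check like this makes the formula plausible, but it does not tell me where it comes from.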
The Wikipedia article on standard error states that the standard error of the sample mean is defined to be
$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}.$$
Furthermore, we know that variance is defined to be
$$\operatorname{Var}(X) = \operatorname{E}\left[(X - \mu)^2\right].$$
With that said, how exactly does one get that
$$\text{se}\left[ \hat{\beta}_1 \right]^2 = \dfrac{\sigma^2}{\sum\limits_{i = 1}^n (x_i - \overline{x})^2}, \\ \text{se}\left[ \hat{\beta}_0 \right]^2 = \sigma^2 \left\{ \dfrac{1}{n} + \dfrac{\overline{x}^2}{\sum\limits_{i = 1}^n (x_i - \overline{x})^2} \right\},$$ where $\sigma^2 = \text{Var}[\epsilon]$?