To elaborate on Greg Snow's answer: suppose your data is in the form of $t$ versus $y$ i.e. you have a vector of $t$'s $(t_1,t_2,...,t_n)^{\top}$ as inputs, and corresponding scalar observations $(y_1,...,y_n)^{\top}$.
We can model the linear regression as $Y_i \sim N(\mu_i, \sigma^2)$ independently over $i$, where $\mu_i = a + b t_i$ is the line of best fit (intercept $a$, slope $b$). Greg's way is to use vector notation.
We can rewrite the above in Greg's notation: let
$Y = (Y_1,...,Y_n)^{\top}$, $X = \left( \begin{array}{cc} 1 & t_1\\ 1 & t_2\\ 1 & t_3\\ \vdots & \vdots \\ 1 & t_n \end{array} \right)$,
$\beta = (a, b)^{\top}$. Then the linear regression model becomes:
$Y \sim N_n(X\beta, \sigma^2 I)$.
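As a quick illustration of this vector form (not part of Greg's answer), here is a NumPy sketch that simulates data from the model $Y \sim N_n(X\beta, \sigma^2 I)$; the sample size, true coefficients and noise level below are made-up values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) values: n points, true intercept a, true slope b, noise sd sigma
n, a, b, sigma = 50, 1.0, 2.0, 0.5
t = np.linspace(0.0, 10.0, n)
X = np.column_stack([np.ones(n), t])   # design matrix with rows (1, t_i)
beta = np.array([a, b])                # beta = (a, b)^T
Y = rng.normal(X @ beta, sigma)        # Y ~ N(X beta, sigma^2 I), drawn component-wise
```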
The goal then is to find the variance matrix of the estimator $\widehat{\beta}$ of $\beta$.
The estimator $\widehat{\beta}$ can be found by Maximum Likelihood estimation (i.e. minimise $||Y - X\beta||^2$ with respect to the vector $\beta$), and Greg quite rightly states that
$\widehat{\beta} = (X^{\top}X)^{-1}X^{\top}Y$.
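A minimal NumPy sketch of this closed-form estimator, using simulated data with made-up true values (everything below other than the formula itself is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
t = np.linspace(0.0, 10.0, n)
X = np.column_stack([np.ones(n), t])           # rows (1, t_i)
Y = rng.normal(X @ np.array([1.0, 2.0]), 0.5)  # assumed true intercept 1, slope 2

# beta_hat = (X^T X)^{-1} X^T Y, computed by solving the normal equations
# (numerically more stable than forming the inverse explicitly)
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
slope_hat = beta_hat[1]                        # 2nd component is the slope estimate
```

In practice `np.linalg.lstsq(X, Y, rcond=None)` computes the same least-squares solution without forming $X^{\top}X$ at all.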
See that the estimator $\widehat{b}$ of the slope $b$ is just the 2nd component of $\widehat{\beta}$ --- i.e. $\widehat{b} = \widehat{\beta}_2$.
Note that $\widehat{\beta}$ is now expressed as a constant matrix multiplied by the random vector $Y$; since a linear transformation of a multivariate normal vector is again multivariate normal (see his 2nd sentence), the distribution of $\widehat{\beta}$ is
$N_2(\beta, \sigma^2 (X^{\top}X)^{-1})$.
The corollary of this is that the variance matrix of $\widehat{\beta}$ is $\sigma^2 (X^{\top}X)^{-1}$, and a further corollary is that the variance of $\widehat{b}$ (i.e. the estimator of the slope) is $\left[\sigma^2 (X^{\top}X)^{-1}\right]_{22}$, i.e. the bottom-right element of the variance matrix (recall that $\beta := (a, b)^{\top}$). I leave it as an exercise to evaluate this expression explicitly.
Note that this answer $\left[\sigma^2 (X^{\top}X)^{-1}\right]_{22}$ depends on the unknown true variance $\sigma^2$, and is therefore, from a statistical point of view, not directly usable. However, we can estimate it by substituting $\sigma^2$ with its estimate $\widehat{\sigma}^2$ (obtained via Maximum Likelihood estimation as above), i.e. the final answer to your question is $\text{var} (\widehat{b}) \approx \left[\widehat{\sigma}^2 (X^{\top}X)^{-1}\right]_{22}$. As an exercise, I leave you to maximise the likelihood over $\sigma^2$ to derive $\widehat{\sigma}^2 = \frac{1}{n}||Y - X\widehat{\beta}||^2$.
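Putting the pieces together, here is a NumPy sketch of the whole calculation on simulated data (the true values are illustrative assumptions; the final standard error is what standard regression software reports, up to the choice of divisor):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
t = np.linspace(0.0, 10.0, n)
X = np.column_stack([np.ones(n), t])           # rows (1, t_i)
Y = rng.normal(X @ np.array([1.0, 2.0]), 0.5)  # assumed true intercept 1, slope 2

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)   # (X^T X)^{-1} X^T Y
resid = Y - X @ beta_hat
sigma2_hat = resid @ resid / n                 # ML estimate ||Y - X beta_hat||^2 / n
                                               # (divide by n - 2 for the unbiased version)

var_beta_hat = sigma2_hat * np.linalg.inv(X.T @ X)  # estimated variance matrix of beta_hat
var_slope = var_beta_hat[1, 1]                 # bottom-right element: var of slope estimator
se_slope = np.sqrt(var_slope)                  # standard error of the slope
```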