A few things to clarify here. First, we start with the model $Y_i = X_i\beta + e_i$, observed for $i=1,\dots,n$. In matrix notation, we write $y = X\beta + e$, where $y \in \mathbb{R}^{n\times1}$, $X \in \mathbb{R}^{n\times k}$, $\beta \in \mathbb{R}^{k \times 1}$, and $e\in \mathbb{R}^{n\times 1}$. So $e$ is the vector of $e_i$'s, and we further have that $e = y-X\beta$.
So right away, we see that $e'e = \sum_{i=1}^n e_i^2$ is a real value; it is typically called the residual sum of squares (RSS), and, especially when computed with the estimated residuals, it is sometimes also called SSE, as you have. Minimizing it is one way to derive the 'best' estimator $\hat{\beta}$ for $\beta$. In contrast, $E[ee'] \in \mathbb{R}^{n\times n}$ is the variance-covariance matrix of the residuals, with $(i,j)$ entry $E[e_ie_j]$ (I used that $E[e_i] = 0$ for any $i$ here). So, as you see, there is a huge difference between the two.
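To make the difference in dimensions concrete, here is a tiny numpy sketch; the residual vector is just made-up numbers for illustration.

```python
import numpy as np

# a made-up residual vector with n = 4 observations, shaped as a column vector
e = np.array([[0.5], [-1.2], [0.3], [0.4]])

inner = e.T @ e   # e'e: a 1x1 result, the scalar sum of squared residuals
outer = e @ e.T   # ee': a 4x4 matrix whose (i, j) entry is e_i * e_j

print(inner.shape, inner.item())  # (1, 1) 1.94
print(outer.shape)                # (4, 4)
```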
So what do we do with $E[ee']$? Well, in general it's not super easy to move forward (though you certainly can), but under the Gauss-Markov assumptions, we assume that $e_i$ and $e_j$ are uncorrelated for any $i\neq j$, and further assume homoskedasticity, so that $E[ee'|X] = \sigma^2 I$, where $I$ is the $n\times n$ identity matrix. This immediately gives us that $Var(e_i|X) = \sigma^2$ for any $i$. So when you write $Var(e) = \sigma^2$, you need to be a bit careful there, as $e$ is a vector. But what I wrote here should make that difference clear.
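Written out entrywise, these two assumptions say the $n\times n$ matrix is
$$E[ee'|X] = \begin{pmatrix} \sigma^2 & 0 & \cdots & 0 \\ 0 & \sigma^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma^2 \end{pmatrix},$$
with the variances $E[e_i^2|X]$ on the diagonal and the covariances $E[e_ie_j|X] = 0$ everywhere else.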
Given this assumption, let's revisit $E[e'e] = E[\sum_{i=1}^n e_i^2]$. Recalling that $E[e_i] = 0$, we have
$$E[\mathrm{SSE}] = E[e'e] = E\left[\sum_{i=1}^n e_i^2\right] = \sum_{i=1}^n \left(E[e_i^2] - E[e_i]E[e_i]\right) = \sum_{i=1}^n Var(e_i) = n\sigma^2$$
and so certainly, $n\sigma^2 = E[\mathrm{SSE}] \neq Var(e_i|X) = \sigma^2$, but you can see that dividing by $n$ will indeed give you the desired result.
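A quick simulation illustrates this; the sample size and $\sigma$ below are hypothetical choices, and I use the true errors rather than estimated residuals, matching the population calculation above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 100_000, 2.0             # hypothetical sample size and error standard deviation

e = rng.normal(0.0, sigma, size=n)  # draw the true errors
sse = e @ e                         # e'e, the sum of squared errors

print(sse / n)  # close to sigma**2 = 4.0
```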
Finally, let's look at $Var(\hat{\beta})$ (I've been a bit lax about when we are in estimator world versus population world, but hopefully that doesn't cause much confusion). We have that $\hat{\beta} = (X'X)^{-1}X'y$, and it can be shown (see How to derive variance-covariance matrix of coefficients in linear regression for a proof), given our assumptions about the residuals, that
$$Var(\hat{\beta}|X) = (X'X)^{-1}X'\,E[ee'|X]\,X(X'X)^{-1} = \sigma^2(X'X)^{-1}$$
which is the variance-covariance matrix of $\hat{\beta}$, and the standard error of each coefficient $\hat{\beta}_j$ is the square root of the $j$-th diagonal element of the above matrix.
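To tie the pieces together in code, here is a minimal sketch on simulated data (all names and numbers below are hypothetical); note that I estimate $\sigma^2$ with the usual $n-k$ divisor rather than $n$, since here the residuals are estimated rather than the true errors.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 3
beta_true = np.array([1.0, -2.0, 0.5])   # hypothetical true coefficients
sigma = 1.5                               # hypothetical true error sd

X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # design matrix with intercept
y = X @ beta_true + rng.normal(0.0, sigma, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y              # (X'X)^{-1} X'y

resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - k)      # unbiased estimate of sigma^2

vcov = sigma2_hat * XtX_inv               # estimated Var(beta_hat | X)
se = np.sqrt(np.diag(vcov))               # standard error of each coefficient

print(beta_hat)  # should be close to beta_true
print(se)
```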
So, to put this together, the answer to your question is no. Even though they look similar, the objects are not even in the same space (some are matrices, others are real values) and are fundamentally different, though related through $\sigma^2$. Intuitively, and to recap: $E[ee']$ is the variance-covariance matrix of the residuals, which the Gauss-Markov assumptions reduce to a very simple matrix (compared to what it could be in general) by setting the covariance of $e_i$ and $e_j$ to zero and giving every $e_i$ the same variance. Even then, the variance of $\hat{\beta}$ includes this covariance matrix of the residuals, but combines it with $(X'X)^{-1}$, which is a beast worth understanding on its own. Finally, $E[e'e]$ is a real-valued number, the expected sum of squared residuals, which matters most when we choose the estimator $\hat{\beta}$ by minimizing that sum of squared residuals.
Hope this helps clear up some confusion!