Using classical linear model assumptions, we know that $$\frac{\hat \beta_j - \beta_j}{se(\hat \beta_j)} \sim t_{n-k-1}$$ meaning that the ratio of regression coefficients to their standard error follows a t-distribution with $n - k - 1$ degrees of freedom, where $k$ is the number of slope parameters.
This means that the T-score from a two-sample T-test for samples with different numbers of observations and different sample variances, computed with the pooled (equal-variance) standard error, $$T = \frac{\bar x_1 - \bar x_2}{s_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}, \qquad s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2},$$
will be the same as $$T = \frac{\hat \beta_1}{se(\hat \beta_1)}$$ for the linear regression $Y = \beta_0 + \beta_1 D$, where $D$ is a dummy variable equal to 1 for observations in group 1 and 0 for observations in group 2. (With Welch's unequal-variance standard error, $\sqrt{s_1^2/n_1 + s_2^2/n_2}$, the two statistics generally differ.)
Since $\hat \beta_1$ equals the difference in sample means, $\bar x_1 - \bar x_2$, the numerators agree, so the two T-scores are equal exactly when the standard errors are.
So my question is: how can I prove that the standard errors are identical in this case? In other words, how do I mathematically prove $$s_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}} = se(\hat \beta_1)?$$
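As a sanity check before attempting the proof, the identity can be verified numerically. Note that it holds when $se(\hat \beta_1)$ is compared against the pooled (equal-variance) standard error $s_p\sqrt{1/n_1 + 1/n_2}$, not Welch's $\sqrt{s_1^2/n_1 + s_2^2/n_2}$. A minimal NumPy sketch (the sample sizes and simulated data are arbitrary choices for illustration):

```python
import numpy as np

# Arbitrary simulated data for two groups of unequal size and variance.
rng = np.random.default_rng(0)
x1 = rng.normal(10.0, 2.0, size=8)    # group 1
x2 = rng.normal(9.0, 3.0, size=13)    # group 2

n1, n2 = len(x1), len(x2)
s1, s2 = x1.var(ddof=1), x2.var(ddof=1)

# Pooled two-sample t-statistic (equal-variance assumption).
sp2 = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
se_pooled = np.sqrt(sp2 * (1 / n1 + 1 / n2))
t_pooled = (x1.mean() - x2.mean()) / se_pooled

# OLS of y on an intercept and a dummy D = 1 for group 1, 0 for group 2.
y = np.concatenate([x1, x2])
D = np.concatenate([np.ones(n1), np.zeros(n2)])
X = np.column_stack([np.ones_like(y), D])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
sigma2 = resid @ resid / (len(y) - 2)        # df = n - k - 1 with k = 1
cov = sigma2 * np.linalg.inv(X.T @ X)
se_beta1 = np.sqrt(cov[1, 1])
t_reg = beta[1] / se_beta1

print(se_pooled, se_beta1)   # identical standard errors
print(t_pooled, t_reg)       # identical t-statistics
```

Running this shows the two standard errors (and hence the two T-scores) agree to floating-point precision, which is the identity I would like to prove algebraically.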