It would be possible if the original model were heteroskedastic and the heteroskedasticity were related to the covariates. For example,
$y_i \sim \text{N}(x_i^T\beta, \sigma^2x_{i,1}^2)$
where the variance of the $i^{th}$ observation is proportional to the square of the first covariate.
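As a minimal sketch of that case (in Python with numpy/statsmodels; the sample size, coefficients, and the cutoff $T$ are arbitrary choices of mine, not from your question):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x1 = rng.uniform(0.5, 3.0, n)           # the error variance will scale with x1^2
x2 = rng.normal(size=n)                 # second covariate, no role in the variance
X = sm.add_constant(np.column_stack([x1, x2]))
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(scale=0.5 * x1)   # Var(y_i | x_i) = 0.25 * x_{i,1}^2

resid = sm.OLS(y, X).fit().resid        # residuals from the usual OLS fit
T = np.quantile(resid, 0.8)             # arbitrary cutoff for a "large" residual
z = (resid >= T).astype(int)            # dichotomize: < T vs. >= T

logit = sm.Logit(z, X).fit(disp=0)
print(logit.summary())                  # expect a clearly positive coefficient on x1
```

Larger $x_1$ means larger residual variance and hence more residual mass above a positive cutoff, so the logistic regression picks up the heteroskedasticity through the coefficient on $x_1$.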
One can imagine similar structures in non-Normal regression situations that don't require heteroskedasticity per se. However, the default assumptions of regression models generally include errors that are independent of the regressors and have constant variance.
On the other hand, if you're doing, say, Poisson regression, you're outside the linear model world; but since the variance of the "error" equals the mean, it is automatically related to the covariates, and such a logistic regression would work. It would, however, convey no information not already conveyed by the results of the Poisson regression, which fully specify the conditional distributions of the $y_i \mid x_i$. In the generalized linear / additive model framework, where the likelihood is fully specified, the only way the logistic regression you suggest can add information to the initial regression is if the initial regression has misspecified (usually by ignoring) the structure of the residuals, e.g., ignored the heteroskedasticity in the linear model presented above.
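To illustrate the "no extra information" point, here is a hedged sketch (again numpy/statsmodels plus scipy; the coefficients and cutoff are illustrative assumptions): the logistic fit on a dichotomized residual "works", but the fitted Poisson model already implies essentially the same probabilities.

```python
import numpy as np
from scipy.stats import poisson
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
X = sm.add_constant(x)
mu = np.exp(0.5 + 0.8 * x)              # log link: E[y|x] = Var(y|x) = mu
y = rng.poisson(mu)

pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = pois.fittedvalues

T = 2.0                                 # arbitrary cutoff on the residual
z = (y - mu_hat >= T).astype(int)
logit = sm.Logit(z, X).fit(disp=0)
print(logit.params)                     # nonzero slope: the variance tracks the mean

# But the Poisson fit already implies P(y - mu >= T | x) directly:
p_implied = poisson.sf(np.ceil(mu_hat + T) - 1, mu_hat)
print(np.corrcoef(p_implied, logit.predict(X))[0, 1])  # typically close to 1
```

The logistic fit is just a two-parameter restatement of probabilities the Poisson model already determines.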
Nonetheless, your suggestion might reveal something about the structure of the residuals in an exploratory analysis. I suspect, though, that effectively discretizing the residuals into $< T$ and $\ge T$ would usually cost more in lost information than it would gain in clarity - unless, perhaps, it were an outlier analysis.
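As a rough check of that suspicion - this comparison is mine, not implied by your question - one can contrast the dichotomized-residual logistic fit with a Breusch-Pagan-style auxiliary regression of the squared residuals on the covariates, which keeps the residuals continuous:

```python
import numpy as np
import statsmodels.api as sm

# Same heteroskedastic setup as the first sketch above.
rng = np.random.default_rng(0)
n = 2000
x1 = rng.uniform(0.5, 3.0, n)
X = sm.add_constant(np.column_stack([x1, rng.normal(size=n)]))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5 * x1)
resid = sm.OLS(y, X).fit().resid

T = np.quantile(resid, 0.8)
z_dich = sm.Logit((resid >= T).astype(int), X).fit(disp=0).tvalues[1]
z_cont = sm.OLS(resid**2, X).fit().tvalues[1]   # Breusch-Pagan-style auxiliary fit
print(f"dichotomized: z = {z_dich:.1f}, continuous: z = {z_cont:.1f}")
# Typically the continuous version's statistic is noticeably larger,
# reflecting the information thrown away by the < T / >= T split.
```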