Several points. There really aren't two t-tests (one for equal variances and one for unequal). If the two distributions are normal with unknown means and equal (but unknown) variance, the test statistic has a t distribution under the null hypothesis of equal means. The key point here is that the unknown variance does not figure in the distribution of the test statistic.
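For concreteness (my notation, not anything from the question): with samples $x_1,\dots,x_n$ and $y_1,\dots,y_m$, the pooled statistic is

$$
s_p^2 = \frac{(n-1)s_x^2 + (m-1)s_y^2}{n+m-2}, \qquad
t = \frac{\bar x - \bar y}{s_p\sqrt{1/n + 1/m}},
$$

which has a $t_{n+m-2}$ distribution under $H_0\colon \mu_x = \mu_y$ no matter what the common variance happens to be.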
With two normals, unknown means and unknown but different variances, the obvious test statistic has a distribution that depends on the ratio of the variances. This is the so-called Behrens-Fisher problem. The distribution of the test statistic under the null hypothesis of equal means depends on a parameter (the variance ratio) that is not known, so an exact rejection region cannot be constructed.
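The "obvious" statistic here (again, my notation) just swaps the pooled variance for the two separate sample variances:

$$
T = \frac{\bar x - \bar y}{\sqrt{s_x^2/n + s_y^2/m}},
$$

and under $H_0\colon \mu_x = \mu_y$ its distribution depends on the unknown ratio $\sigma_x^2/\sigma_y^2$, which is exactly the problem.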
Welch's test is basically a fudge. The test statistic makes intuitive sense, and one adjusts the degrees of freedom so that under the null it follows a t distribution more or less, which apparently it does. Interestingly, R's t.test defaults to Welch.
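The degrees-of-freedom fudge is the Welch-Satterthwaite approximation,

$$
\nu \;\approx\; \frac{\left(s_x^2/n + s_y^2/m\right)^2}
{\dfrac{(s_x^2/n)^2}{n-1} + \dfrac{(s_y^2/m)^2}{m-1}},
$$

and you can see the R default directly (here x and y are just any two numeric vectors):

```r
t.test(x, y)                    # "Welch Two Sample t-test": var.equal = FALSE is the default
t.test(x, y, var.equal = TRUE)  # the classical pooled test, df = n + m - 2
```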
In any case, what you suggest would be a fudge applied to a fudge, and I am not sure what rationale in the theory of hypothesis testing would justify it. (Nice try, though).
The intractability of the Behrens-Fisher problem illuminates an important truth: when the variances are wildly different, comparing means probably does not make a huge amount of sense. If they are fairly close, Welch will work.
When two distributions have different variances, you basically need to ask yourself what sort of differences between the underlying processes are of interest to you.
Reasonable approaches include the following (sketched briefly after the list):
- A non-parametric test.
- A variance-stabilizing transformation.
- Possibly a bootstrap estimate of some kind.
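A minimal sketch of those three options in R, assuming two hypothetical positive-valued samples x and y whose spread grows with their level:

```r
set.seed(1)
## Hypothetical skewed data with quite different spreads (illustration only)
x <- rlnorm(30, meanlog = 1.0, sdlog = 0.5)
y <- rlnorm(40, meanlog = 1.6, sdlog = 0.5)

## 1. Non-parametric test (Wilcoxon rank-sum / Mann-Whitney)
wilcox.test(x, y)

## 2. Variance-stabilizing transformation: the spread grows with the level,
##    so log() puts the two groups on a roughly common-variance scale,
##    after which the usual t-test applies
t.test(log(x), log(y))

## 3. Percentile bootstrap for the difference in means
B <- 10000
diffs <- replicate(B, mean(sample(x, replace = TRUE)) -
                      mean(sample(y, replace = TRUE)))
quantile(diffs, c(0.025, 0.975))   # rough 95% interval for mu_x - mu_y
```

Note that the transformation changes the question: you are then comparing means on the log scale (roughly, relative rather than absolute differences), which is really the point above about deciding what kind of difference between the processes you actually care about.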