A couple of years ago, I would have fully subscribed to @Michael Chernick's answer.
However, I realized recently that some implementations of the t-test are extremely robust to inequality of variances. In particular, in R the function t.test has var.equal=FALSE as its default, which means that it does not simply rely on a pooled estimate of the variance. Instead, it uses the Welch-Satterthwaite approximation to the degrees of freedom, which compensates for unequal variances.
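For reference, the Welch-Satterthwaite approximation replaces the pooled degrees of freedom with a quantity built from the per-sample variances. A minimal sketch of that computation (welch_df is my own helper name, not part of t.test):

```r
# Welch-Satterthwaite approximate degrees of freedom:
#   nu = (s1^2/n1 + s2^2/n2)^2 /
#        ( (s1^2/n1)^2/(n1-1) + (s2^2/n2)^2/(n2-1) )
welch_df <- function(x, y) {
  v1 <- var(x) / length(x)   # estimated variance of mean(x)
  v2 <- var(y) / length(y)   # estimated variance of mean(y)
  (v1 + v2)^2 / (v1^2 / (length(x) - 1) + v2^2 / (length(y) - 1))
}
```

When one sample's variance dominates, this quantity approaches n-1 for that sample, which is why the output below reports df close to 99.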
Let's see an example.
set.seed(123)
x <- rnorm(100)
y <- rnorm(100, sd=0.00001)
# x and y have 0 mean, but very different variance.
t.test(x,y)
Welch Two Sample t-test
data: x and y
t = 0.9904, df = 99, p-value = 0.3244
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.09071549 0.27152946
sample estimates:
mean of x mean of y
9.040591e-02 -1.075468e-06
You can see that R reports a Welch two-sample t-test rather than Student's t-test. The degrees of freedom are reported as 99 even though each sample has size 100: because y has essentially zero variance, the test effectively compares the first sample against the fixed value 0.
You can verify yourself that this implementation gives correct p-values (i.e. uniformly distributed under the null hypothesis) for two samples with very different variances.
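A minimal simulation along those lines (the sample sizes and standard deviations here are arbitrary choices of mine, and the number of replications is kept small for speed):

```r
# Simulate many null data sets with grossly unequal variances and
# inspect the p-value distribution, which should be roughly uniform
# if the Welch test holds its nominal level.
set.seed(123)
pvals <- replicate(1000, {
  x <- rnorm(100)            # sd = 1
  y <- rnorm(100, sd = 10)   # sd = 10: very different variance
  t.test(x, y)$p.value       # Welch by default (var.equal = FALSE)
})
hist(pvals)                  # should look approximately flat
mean(pvals < 0.05)           # should be close to the nominal 0.05
```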
Now, this was for a two-sample t-test. My own experience with ANOVA is that it is much more sensitive to inequality of variances. In that case, I fully agree with @Michael Chernick.