In a traditional one-factor ANOVA with three treatment groups, one assumes that the three populations have equal variance. If sample sizes are hugely different, then the common variance is mainly estimated from the larger sample(s).
If you suspect that the three treatment groups have different variances, then that assumption seems inappropriate. However, there is a Welch version of the one-factor ANOVA that does not assume equal variances. (The main idea is similar to doing a Welch two-sample t test instead of a pooled two-sample t test.) See the R documentation for 'oneway.test'.
Here is an example:
set.seed(626) # for reproducibility of simulated data
x1 = rnorm(60, 100, 10); x2 = rnorm(30, 100, 12); x3 = rnorm(6, 120, 10)
x = c(x1, x2, x3)
boxplot(list(x1, x2, x3), varwidth=TRUE, col="skyblue2")

Box widths above differ to reflect the different sample sizes among groups (with varwidth=TRUE, widths are proportional to the square roots of the group sizes).
Here is output from the Welch ANOVA in R:
g = c(rep(1,60), rep(2,30), rep(3,6))
oneway.test(x ~ g)
One-way analysis of means (not assuming equal variances)
data: x and g
F = 28.035, num df = 2.000, denom df = 15.694,
p-value = 6.6e-06
Because of the very small P-value, it is clear that there are significant differences among the group population means. [A traditional one-factor ANOVA would have had 96 - 3 = 93 denominator df; the much lower denominator df seen here is characteristic of the Welch ANOVA, especially when population variances differ.]
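For comparison, the pooled (traditional) test is available from the same function by setting var.equal = TRUE. A sketch, re-creating the same simulated data as above:

```r
set.seed(626)                      # same simulated data as above
x = c(rnorm(60, 100, 10), rnorm(30, 100, 12), rnorm(6, 120, 10))
g = c(rep(1, 60), rep(2, 30), rep(3, 6))

# var.equal = TRUE gives the traditional pooled one-factor ANOVA,
# with denominator df = 96 - 3 = 93
oneway.test(x ~ g, var.equal = TRUE)
```

This makes it easy to see, side by side, how much the denominator df shrinks under the Welch procedure.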
There are various ways to do post hoc pairwise tests among the three groups.
In this case Group 3 is significantly different from the other two.
With Bonferroni protection against falsely declaring differences, we should declare a difference between a pair of treatments only if its P-value is below $0.05/3 \approx 0.017.$
t.test(x1, x2)$p.val; t.test(x1, x3)$p.val; t.test(x2, x3)$p.val
[1] 0.1394103 # no sig dif btw Gps 1 & 2
[1] 8.800413e-05 # Gps 1 & 3 differ
[1] 0.0001181582 # Gps 2 & 3 differ
This page discusses the oneway.test in context.