With ANOVA you compare the variance between the group means to the variance within the groups.
For this comparison of variances it doesn't in principle matter what distribution you have. If the group distributions are the same, then the distribution of the means has the variance of the distribution of the individuals divided by $n$, the size of the groups*.
But... the problem is that the variances are estimated from the residuals (the differences between the observations and the group means).
When you have normally distributed data, these estimates will be $\chi^2$ distributed, and that is what is used to test the hypothesis. (The ratio of the two, a ratio of $\chi^2$ distributed variables, follows an F-distribution, which is the final test statistic.)
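As a sketch of that ratio, the F statistic can be computed by hand as the between-group mean square divided by the within-group mean square, and checked against `scipy.stats.f_oneway` (the group sizes here just mirror the example further down; none of this is from the original post):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = [rng.normal(size=5) for _ in range(5)]  # 5 groups of 5, null is true

k = len(groups)                  # number of groups
n = sum(len(g) for g in groups)  # total number of observations
grand_mean = np.mean(np.concatenate(groups))

# Between-group variance estimate (mean square between, k-1 degrees of freedom)
ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Within-group variance estimate from the residuals (mean square within, n-k df)
ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
ms_within = ss_within / (n - k)

F = ms_between / ms_within
p = stats.f.sf(F, k - 1, n - k)  # upper tail of the F distribution

# Should agree with scipy's one-way ANOVA
F_ref, p_ref = stats.f_oneway(*groups)
print(F, p)
```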
When the data are not normally distributed, the estimates will only be approximately $\chi^2$ distributed. How bad that is depends on the situation.
In the example below we see that there can be discrepancies: the computed p-value does not match the actual p-value, and low p-values may occur more or less often than they indicate.
Whether this is a problem depends on the practical situation. If you compute a p-value of 0.02 while it is actually 0.04, did you make a big mistake?
Example:
Say the actual distribution of the data is
- a beta distribution (with $\alpha=1$ and $\beta=0.5$)
- a t-distribution with $\nu = 1$
and we have 5 groups of 5 members each. Let's simulate how often we get which p-values.
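Such a simulation can be sketched as below: draw 5 groups of 5 from the same distribution (so the null hypothesis is true), run a one-way ANOVA each time, and look at how often p falls below the nominal level. The post's figure uses one million repeats; a few thousand are used here to keep the run short, so the rates are only approximate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def null_pvalues(draw, n_sim=5_000, k=5, m=5):
    """p-values from one-way ANOVA on k groups of m i.i.d. samples each."""
    pvals = np.empty(n_sim)
    for i in range(n_sim):
        groups = [draw(m) for _ in range(k)]
        pvals[i] = stats.f_oneway(*groups).pvalue
    return pvals

p_beta = null_pvalues(lambda m: rng.beta(1, 0.5, size=m))  # beta(1, 0.5)
p_t1 = null_pvalues(lambda m: rng.standard_t(1, size=m))   # t with nu = 1

# If the F distribution were exact, P(p < 0.05) would be about 0.05.
print("beta(1, 0.5):", np.mean(p_beta < 0.05))
print("t, nu = 1:  ", np.mean(p_t1 < 0.05))
```

Plotting a histogram of `p_beta` and `p_t1` against the uniform distribution reproduces the kind of comparison shown in the image.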
In the image below we see how much the p-values deviate from the expectation, for one million simulations of a null hypothesis test. For the beta distribution it does not matter much. For the t-distribution there is a big difference (but we chose an extreme example: the t-distribution with $\nu = 1$ has infinite variance).

*There are some details, such as using $n-1$ instead of $n$, when you express the distribution of the residuals.