Simulating new data with the same mean/SD as your data and running your analysis on that is effectively the same as arbitrarily inflating your sample size while keeping your point estimates fixed, thus increasing power, all while ignoring the fact that the mean/SD you've matched was estimated from only 25 observations. That is not a diligent practice.
First off, if your data are normally distributed, then the $p$-values from ANOVA/regression are valid even at small sample sizes, since the sampling distribution of your estimates will be exactly normal regardless of $n$. More generally, you could consider non-parametric bootstrap resampling to get confidence intervals:
http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29
Essentially, you would resample from your data with replacement and re-estimate the parameter of interest (e.g. a regression coefficient) repeatedly (say 1000 times), treating these estimates as draws from the sampling distribution of $\hat{\beta}$. You can then use the empirical quantiles (e.g. the 2.5th and 97.5th percentiles) of this sample to construct a confidence interval.
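Here is a minimal sketch of that procedure in Python with numpy. The dataset `x`, `y` is simulated here purely as a stand-in for your 25 observations, and the true slope of 1.5 is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for your data: n = 25 observations from a simple linear model
n = 25
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=n)

def ols_slope(x, y):
    # Ordinary least-squares slope (intercept included in the design matrix)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Non-parametric bootstrap: resample rows WITH replacement, re-fit each time
n_boot = 1000
boot_slopes = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, n, size=n)
    boot_slopes[b] = ols_slope(x[idx], y[idx])

# Percentile bootstrap 95% confidence interval for the slope
ci = np.percentile(boot_slopes, [2.5, 97.5])
print(ci)
```

Note that each bootstrap sample reuses some rows and omits others; the spread of the 1000 re-fitted slopes approximates the sampling variability of the estimator without assuming normality.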
As a side note, if the effects are large and you can tell a rational story about them, you shouldn't over-emphasize the need to formally "prove" your claims statistically just because your data are non-normal or your sample size is too small to invoke the central limit theorem.