
I am currently conducting a study on the topic of suicide and have entered the data analysis phase. The Shapiro-Wilk test reported that all of my data failed the normality assumption; most of the p-values were under 0.01 (I am using 0.05 as the significance level for this research). While pondering the result, I decided to conduct a moderation analysis in JASP. I did not transform the data at all, because I am not sophisticated enough with statistical software, so I analyzed the data after a simple recoding of the raw values. It was quite surprising to find that the variable in my study functions as a moderator, just as my hypothesis predicted. My question is: is it okay to do a moderation analysis when the data fail the normality assumption?
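For context, a moderation analysis of this kind corresponds to a linear regression with an interaction term. Below is a minimal sketch in Python, assuming hypothetical column names (`outcome`, `predictor`, `moderator`) and a hypothetical file name that are not from the original post; it fits the interaction model and applies the Shapiro-Wilk test to the model residuals rather than to each raw variable.

```python
# Minimal sketch: moderation as a regression with an interaction term,
# with Shapiro-Wilk applied to the model residuals (not each raw variable).
# Column and file names are placeholders, not from the original post.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("study_data.csv")  # hypothetical file

# The predictor:moderator interaction term carries the moderation effect
model = smf.ols("outcome ~ predictor * moderator", data=df).fit()
print(model.summary())

# Normality check on the residuals, which is what the assumption refers to
w_stat, p_value = stats.shapiro(model.resid)
print(f"Shapiro-Wilk on residuals: W = {w_stat:.3f}, p = {p_value:.3f}")
```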

need help
    [There is an argument that normality testing is “essentially useless”.](https://stats.stackexchange.com/q/2492/247274) – Dave Feb 21 '22 at 14:35
  • When you say "all my data failed the normality assumption," it sounds like you did multiple normality tests on different groups of data. If so, there is no question that the comment from @Dave is correct. If you do want to examine normality, it's best to look at the distribution of residuals around the model predictions, for example with what's called a "q-q plot." It would help if you could edit your question to show such a plot. It's possible that your data would be modeled better with some simple transformation (e.g., log), but it's hard to know unless you provide more information. – EdM Feb 21 '22 at 15:56
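
Following the comment above about examining residuals, here is a minimal sketch of a Q-Q plot of the residuals together with a refit on a log-transformed outcome for comparison, again using the hypothetical column and file names from the sketch above. The log transform is only an illustration and is only sensible for a strictly positive outcome.

```python
# Minimal sketch: Q-Q plot of model residuals, plus a refit on a
# log-transformed outcome for comparison. Column and file names are
# placeholders, not from the original post.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")  # hypothetical file

fig, axes = plt.subplots(1, 2, figsize=(10, 4))

# Q-Q plot of residuals from the interaction (moderation) model
model = smf.ols("outcome ~ predictor * moderator", data=df).fit()
sm.qqplot(model.resid, line="45", fit=True, ax=axes[0])
axes[0].set_title("Residuals: original outcome")

# Illustrative log transform (only valid if the outcome is strictly positive)
df["log_outcome"] = np.log(df["outcome"])
log_model = smf.ols("log_outcome ~ predictor * moderator", data=df).fit()
sm.qqplot(log_model.resid, line="45", fit=True, ax=axes[1])
axes[1].set_title("Residuals: log-transformed outcome")

plt.tight_layout()
plt.show()
```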

0 Answers