I did research on the effect of a score on a cognitive test (range 0-100) on performance on a second test years later (an ordinal categorical variable: low/mid/high). This was done over the course of six years for approximately 200 participants, i.e. they were measured yearly on the cognitive test.
In my confirmatory analysis, I wanted to see whether a participant's mean cognitive score (over the years) could significantly predict performance on the later test (low/mid/high). In addition, for every age cohort (12-13, 13-14, 14-15 y/o and so on), I wanted to see whether the cognitive measurement at that specific age could significantly predict future performance.
So, each regression analysis has one IV and one DV. At my supervisor's request I had to perform one overall (mean) regression plus six separate age-cohort regressions; I was not allowed to include age as a predictor, which would have allowed a single model.
FYI, all described analyses are confirmatory. I was told I have to correct for multiple comparisons because I performed seven separate regression analyses: one using each participant's mean cognitive score over the years, and one for each of the six age cohorts. My question is: can I just use Bonferroni, i.e. divide the threshold p-value by the number of regression analyses I ran? If so, should I do this only for the six age-cohort analyses (threshold p = 0.05/6 ≈ 0.008), or should I also count the overall regression on the mean of all yearly scores, and thus divide by 7 (threshold p = 0.05/7 ≈ 0.007)?
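For concreteness, here is the arithmetic for the two candidate corrections I'm weighing (just a sketch of the Bonferroni division described above, not part of my actual analysis code):

```python
# Bonferroni-corrected significance thresholds for the two scenarios:
# correcting over the 6 age-cohort regressions only, vs. over all 7
# regressions (6 cohorts + the overall mean-score regression).

alpha = 0.05  # family-wise error rate I want to maintain

# Scenario 1: count only the six age-cohort regressions
threshold_6 = alpha / 6

# Scenario 2: also count the overall (mean) regression, seven tests total
threshold_7 = alpha / 7

print(round(threshold_6, 4))  # 0.0083
print(round(threshold_7, 4))  # 0.0071
```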
Is there another, better correction for this kind of multiple testing? If so, how does it work? And if Bonferroni is actually okay, why would dividing by 6 or by 7 be 'better'?