We are planning to collect behavioural data from ~200 children using 12 stimulus pairs. Each stimulus pair is assigned 3 values characterising it, one from each of 3 different models. Our plan is to fit a linear mixed model (LMM) with these 3 values as fixed-effect factors and subject as a random-effect factor, and then see which factor best predicts children's behaviour in each of three conditions (A, B, & C).
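To make this concrete, below is a minimal sketch of the kind of fit we have in mind, using Python's statsmodels. All column names (`rt`, `subject`, `pair_value_m1`, etc.) are placeholders for our actual variables, condition is omitted for brevity, and we fit one LMM per candidate model rather than all three factors at once, since with only 12 stimulus pairs a 12-level factor is confounded with pair identity:

```python
# Minimal sketch of the planned analysis (placeholder column names),
# assuming a long-format table with one row per correct trial.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rt_data.csv")  # hypothetical file

# One LMM per candidate model: the stimulus pair's value under that
# model as a categorical fixed effect, random intercept per subject.
for factor in ["pair_value_m1", "pair_value_m2", "pair_value_m3"]:
    fit = smf.mixedlm(f"rt ~ C({factor})", data=df,
                      groups=df["subject"]).fit()
    print(f"--- {factor} ---")
    print(fit.summary())
```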
Because of how the models are defined, the 3 fixed-effect factors have unequal numbers of levels (7, 12, & 12), so factor 1 has more trials at some of its levels than factors 2 & 3 have at theirs. Does such an unbalanced design cause problems? For example, would a factor with more levels have an advantage in the fitting results?
Also, since we are testing children, we keep the task as short as possible (24 trials per child per condition). As a result, many levels will have at most 2 observations per child, and possibly fewer, because we only analyse the RTs of correct responses. Could so few observations per level cause problems when fitting an LMM to our data set?
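For context on how sparse the cells get, this is the kind of check we have been planning to run once incorrect trials are dropped (again with placeholder column names; cells with zero remaining observations simply drop out of the count):

```python
# Hypothetical sparsity check: correct-trial counts per
# subject x factor-level cell. Column names are placeholders.
import pandas as pd

df = pd.read_csv("rt_data.csv")
correct = df[df["accuracy"] == 1]

# With 24 trials per condition and 12 levels, the theoretical
# maximum is 2 observations per cell.
cell_counts = (correct.groupby(["subject", "pair_value_m2"])
                      .size()
                      .rename("n_obs"))
print(cell_counts.describe())    # distribution of cell sizes
print((cell_counts < 2).mean())  # share of cells with fewer than 2 obs
```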
I have read a few posts about fitting LMMs with few observations. Some suggest it can be alright (e.g. this one: Random intercepts model - one measurement per subject), but I have also read an article arguing that unbalanced designs can lead to false positives (https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00788/full). I'm very new to LMMs; our decision to use them is based on previous literature, but those studies tested adults and used more trials per participant. Is an LMM still reasonable for our design, and are there better ways for us to compare the different models?
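For the model-comparison part of the question, the approach we have tentatively considered is refitting each candidate LMM by maximum likelihood and comparing information criteria, roughly like this (same placeholder columns as above):

```python
# Sketch of comparing the three candidate predictors by AIC.
# We use reml=False because, as we understand it, REML likelihoods
# are not comparable across models with different fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rt_data.csv")
for factor in ["pair_value_m1", "pair_value_m2", "pair_value_m3"]:
    fit = smf.mixedlm(f"rt ~ C({factor})", data=df,
                      groups=df["subject"]).fit(reml=False)
    print(f"{factor}: AIC = {fit.aic:.1f}")
```

We are not sure whether this is the right way to adjudicate between the three models given the sparsity described above, so advice on that would also be appreciated.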