I have data looking at the performance of thousands of one-vs-all binary classifiers trained for an image recognition task under different conditions.
I'm trying to use `lme4` to predict classifier performance (`dprime`), given: (`featureSet`) a fixed effect of which of 5 feature sets was used to train the classifier; (`nTrainingExamples`) a continuous fixed effect of the number of examples used to train the classifier; and (`category`) a random effect of the category the classifier was trained to recognize.
So far so good. Based on the GLMM FAQ, the lme4 cheat sheet, and this Conjugate Prior post (among many others), I have:

`dprime ~ featureSet * nTrainingExamples + (1 | category)`
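In full, the fit looks something like this (a sketch; `d` is a hypothetical data frame with one row per trained classifier and the columns named above):

```r
library(lme4)

# Fixed effects: featureSet, nTrainingExamples, and their interaction;
# a random intercept for each category.
fit <- lmer(dprime ~ featureSet * nTrainingExamples + (1 | category),
            data = d)
summary(fit)
```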
The wrinkle is that I have another factor (`split`): I sampled 20 unique training sets for each combination of `featureSet`, `nTrainingExamples`, and `category`. Since I could have sampled many more training sets for each combination, I think this should be a random effect. I have no idea, however, how to describe this sort of nesting.
How do I specify a random effect nested under both fixed and random effects?
I had tried:

`dprime ~ featureSet * nTrainingExamples + (1 | category/split)`

but that only nests `split` under `category`, right, and not under the fixed factors?
Whatever the correct model is, is it something that I could appropriately fit with a classical mixed-effects ANOVA? If so, what's the syntax for specifying this model for `aov`?