Short: I have a set of joint probabilities (likelihoods) for how likely it is that sample $Q$ belongs to each group $K$. I need to compute a p-value describing how "significant" the top group is compared to the other groups. Something like a likelihood ratio test, but since the models aren't nested I don't know how to implement one.
My problem, oversimplified: given a new query sample $Q$ with particular values for features a, b, and c, figure out which known group $K$ the sample most likely belongs to, given the probabilities each group has for values of those features.
The accepted methodology in my field for doing this is simple: for each group $k$, calculate the product of the probabilities of observing $Q$'s values for features a, b, and c in that group. Because those products get tiny, I work on the log scale. This leaves me with a joint probability (what I'm calling a likelihood here) for each group $k$, indicating how likely it is that sample $Q$ originated from that group. I now need to put a p-value on this likelihood, to assess how "significantly" the top group stands out from the other groups.
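To make the setup concrete, here is a minimal R sketch of that methodology. The group names, feature probabilities, and numbers are all invented for illustration:

```r
# Hypothetical per-group probabilities of observing Q's values for
# features a, b, and c (rows = groups, columns = features).
probs <- rbind(
  group1 = c(a = 0.30, b = 0.05, c = 0.20),
  group2 = c(a = 0.10, b = 0.40, c = 0.25),
  group3 = c(a = 0.02, b = 0.10, c = 0.01)
)

# Joint log-likelihood per group: the sum of log probabilities,
# equivalent to the product on the raw scale but numerically stable.
loglik <- rowSums(log(probs))

# Rank groups; the "top" group is the most likely origin of Q.
sort(loglik, decreasing = TRUE)  # here group2 comes out on top
```

It's this vector of per-group log-likelihoods that I then want to attach a p-value to.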
A typical likelihood ratio test doesn't seem applicable because the models aren't nested: they all use the same features (the probabilities of seeing $Q$'s values for a, b, and c in each population), so the number of "parameters" is the same in every model.
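For concreteness, the statistic I'd like to calibrate is (I think) the gap between the top two log-likelihoods, i.e. the log of the likelihood ratio between the best and second-best group. A toy sketch with invented numbers:

```r
# Hypothetical per-group joint log-likelihoods for Q (invented numbers).
loglik <- c(group1 = -5.81, group2 = -4.61, group3 = -10.82)

# Log-likelihood-ratio between the best and second-best group,
# i.e. log(L_top / L_runner_up) on the raw scale.
ord <- sort(loglik, decreasing = TRUE)
lr_stat <- ord[1] - ord[2]
lr_stat
```

What I don't know is what null distribution (if any) this statistic follows when the models aren't nested.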
I've looked at 1-6 below, and these make me think I'm not asking the question properly.
Finally, I'd like to implement this in R.