
I have analyzed several data curves from a group of patients (16 curves per patient) with different analysis methods and want to test for agreement between the methods.

So far, I have neglected the potential correlation within the patients and was thus able to compute ICC (of agreement) values, which yielded very reasonable results.

Unfortunately, I have reason to believe that the data are correlated within the patients. I am now looking into mixed linear models and generalized estimating equations, both of which deal with this situation of clustered data.

My question is: Is there any way to calculate something similar to the ICC (or CCC) that tests agreement, or what would you use as a measure of concordance between the methods? I guess I can get beta values, but something stronger would be nice, something to which you can actually attach statistical significance.

user30248
    I've read this question several times and still can't understand the objectives of your analysis. Review the suggested edits here http://stats.stackexchange.com/tags/analysis/info and update your question. – AdamO May 11 '16 at 15:53

2 Answers


What you are doing is introducing multiple comparisons. For a confirmatory analysis, we usually specify our primary analysis and we may fit some post-hoc or secondary analyses not to confirm the prior findings, but to understand limitations in the data. Without any description of the various analyses you have conducted, I can't make any clear recommendations, but I suspect you are applying incorrect methods in several ways.

The intraclass correlation coefficient (ICC) measures the proportion of total variance that is attributable to clustering, and can be used to motivate a mixed-modeling approach for the analysis of longitudinal or panel data. You seem to describe applying the ICC to individual analyses (such as regression or classification models), which doesn't make sense and is not in line with the intended purpose of the measure. The concordance correlation coefficient (CCC) is a measure of calibration for statistical risk prediction models which, to be clear, involves a single risk prediction per participant and requires separate training/test datasets. The CCC can compare several risk models, but I emphasize: risk modeling in panel data is very nuanced, and I don't get the sense that that's what you're doing here.
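To make the ICC/mixed-model connection concrete: a one-way ICC can be recovered from the variance components of a random-intercept model. A minimal sketch in Python (the file name and the `value`/`patient` column names are assumptions about how your data might be laid out; statsmodels' `MixedLM` does the fitting):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per measurement,
# with columns "value" (the measurement) and "patient" (cluster id)
df = pd.read_csv("measurements.csv")

# Random-intercept model: value ~ 1 + (1 | patient)
model = smf.mixedlm("value ~ 1", df, groups=df["patient"])
fit = model.fit()

between_var = fit.cov_re.iloc[0, 0]  # between-patient variance
within_var = fit.scale               # residual (within-patient) variance

# One-way ICC: share of total variance due to the patient clusters
icc = between_var / (between_var + within_var)
print(f"ICC = {icc:.3f}")
```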

"Agreement" or interater agreement is yet another type of finding which has to do with evaluating several replications of a test applied in a large population. While statistical testing does have some relationship with classification, it is not correct to apply measures of "agreement" in this setting because statistical tests have no source of variability outside of the data themselves. Examples of settings in which agreement would be applied would be in settings where multiple radiologists are classifying different screens as benign versus possible cancer.

So I can't really find a place to begin with your problem aside from reminding you of the correct approach to statistics:

  1. Decide (a priori) on a single analytic approach which measures the outcome of interest in a way that is understandable to the general community.

  2. Fit any subsequent models as a way of assessing the sensitivity of the first model to issues such as loss to follow-up, unmeasured sources of variation, and/or autoregressive effects. Describe any possible limitations after reporting the main findings.

AdamO

Yes, you probably cannot assume that data from the same patient are independent, but do you actually detect a within-patient effect? (Try an ANOVA, as sketched below.) If you want to compare across patients, you could try normalising to control for per-patient effects.
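A minimal sketch of that check in Python (the file name and the `value`/`patient` column names are assumptions about how the data are laid out):

```python
import pandas as pd
from scipy.stats import f_oneway

# Hypothetical long-format data with "value" and "patient" columns
df = pd.read_csv("measurements.csv")

# One-way ANOVA: do mean values differ across patients?
groups = [g["value"].values for _, g in df.groupby("patient")]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```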

If you fit a linear mixed model with the patient as a random effect, you can obtain p-values for the betas (the fixed-effect coefficients).
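Something like this sketch, in Python with statsmodels (again, the `value`, `method`, and `patient` column names are assumptions):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per curve measurement,
# with the analysis method as a fixed effect and patient as the cluster
df = pd.read_csv("measurements.csv")

model = smf.mixedlm("value ~ method", df, groups=df["patient"])
fit = model.fit()

# The summary includes betas, standard errors, and p-values for the
# method effect, while accounting for within-patient correlation
print(fit.summary())
```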

PS: no idea what ICC or CCC are.

pontikos
  • I guess OP's [ICC and CCC](http://www.sciencedirect.com/science/article/pii/S016794730800457X) mean intraclass correlation coefficient and concordance correlation coefficient. – Randel Aug 01 '15 at 16:37