
I am working on a high-profile manuscript that will go to a journal oriented toward translational or medical science. In the paper, several ROC curves are presented to show the performance of machine learning models in detecting a particular disease. As estimates of the performance, I have given AUC values and confidence intervals; p-values have been added to a supplementary table.
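To make the setting concrete, here is a minimal sketch of the kind of estimate being reported: a rank-based AUC with a percentile-bootstrap confidence interval. This is illustrative code, not the actual analysis pipeline, and it assumes continuous, tie-free scores (a tie-aware implementation such as scikit-learn's `roc_auc_score`, or DeLong's method for the CI, would normally be preferable):

```python
import numpy as np

def auc(y_true, scores):
    """Rank-based AUC (Mann-Whitney U); assumes no ties among scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_auc_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
    """Point estimate plus percentile-bootstrap (1 - alpha) CI for the AUC."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = []
    while len(stats) < n_boot:
        idx = rng.integers(0, n, size=n)   # resample cases with replacement
        if 0 < y_true[idx].sum() < n:      # resample must contain both classes
            stats.append(auc(y_true[idx], scores[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return auc(y_true, scores), lo, hi
```

The point is that the interval directly conveys the precision of the estimate, which a bare p-value against AUC = 0.5 does not.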

I have now been criticized for an "unconventional" presentation and for "using latest guidelines", which surprised me. I thought that showing CIs when estimating has been the convention for a while (decades?) now, and that many better and even more "unconventional" replacements for CIs have been proposed.

The argument is that we should use p-values, because that is what the reviewers expect. I think this is incorrect, since we are showing estimates rather than results of planned comparisons. What would you recommend? If CIs, then what recommendations / papers should I present in favor of this view?

asked by January (edited by amoeba)
  • In my experience, a good peer review can substantially improve the manuscript and save you from embarrassment when someone spots your mistakes later. If you do not feel that the reviewers' expertise matches the area of main development in your paper, I'd suggest trying a different journal; there are certainly a number of high-profile options in medical science. – juod Jun 06 '17 at 11:13

1 Answer


Reviewers are just being cranky and wanting to see what they have always seen. Science is always evolving and changing; this makes people uncomfortable. What you have presented is "unconventional" only in the sense that it does not use methods developed in the early 1900s that, for some reason, people still use. It is not "unconventional" to anyone who has read a statistics book published in the last 5 years (which will certainly be very few reviewers). But enough of my rant...

I am in psychology, where there has been a push against p-values for a long time, and it has recently picked up more steam. If you want people praising confidence intervals, Geoff Cumming does so in this highly-cited paper as well as in his book. The term "New Statistics" is somewhat misleading, since it still relies on the frequentist perspective and the methods have been in use for decades. I would check out people who cite him and talk about confidence intervals; Cumming is probably the biggest proponent of effect sizes and confidence intervals I've read.

I am not at my computer, but John Ioannidis also has some work on how p-values are very fickle.

And, as always, you have the Bayesians that will put down p-values, but since you are still operating under a frequentist paradigm (i.e., using confidence intervals), it probably isn't appropriate to cite them.

Mark White
    +1. January might also want to cite recent ASA statement http://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108 (see also https://stats.stackexchange.com/questions/200500). – amoeba Jun 06 '17 at 13:33
    I did. My co-authors replied by saying something about the "latest guidelines", as if that were a novel concept embraced just last year by some weird group of statisticians. – January Jun 06 '17 at 14:00