
I am doing some hypothesis testing on the results of an electrophysiology experiment. I have electrical recordings of the rate of fire from two different populations of neurons (let's call them neuron types A & B - this is my between-subjects factor). Complicating things, I have recordings for each neuron both before and after the application of a drug (conditions PRE & POST - my first within-subjects factor) and during injections of different levels of current (levels 1-13 - my second within-subjects factor). I am interested in determining whether the rates of fire of the two types of neurons are differently altered by the drug, regardless of the current level, i.e. I want to test against the null hypothesis that there is no interaction between the type of neuron (A or B) and the presence of the drug (PRE vs. POST) across current levels.
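For concreteness, the parametric model this design maps onto would be a three-way mixed ANOVA, roughly the sketch below (reference only - I am not actually fitting it, for the reasons that follow - and it uses the column names from my data further down):

# Reference sketch: three-way mixed ANOVA for this design, with neuron type
# varying between cells and drug condition / current level varying within cells.
fit <- aov(rate ~ type * drug * current + Error(cellid / (drug * current)),
           data = ratedata)
summary(fit)  # the type:drug term is the interaction of interest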

After exploring the data, it was clear that the assumptions of an ANOVA are not being met. (Sphericity is definitely violated.) After looking around a bit more, I decided that the best non-parametric option for testing this type of mixed within/between-subjects design was a permutation test. In particular, I stumbled across the ez package for R, which seems to provide a very nice, intuitive interface for testing exactly what I want to test. Here is my code:

library(ez)

# Load the long-format data (one row per cell x drug condition x current level)
# and make sure the design variables are treated as factors.
ratedata         <- read.csv("Processed_acute_data_all.csv")
ratedata$cellid  <- factor(ratedata$cellid)   # neuron identifier (the "subject")
ratedata$type    <- factor(ratedata$type)     # between-subjects: neuron type A or B
ratedata$drug    <- factor(ratedata$drug)     # within-subjects: PRE vs. POST
ratedata$current <- factor(ratedata$current)  # within-subjects: current level 1-13

# Permutation test on the full mixed design, 1000 permutations.
rateperm <- ezPerm(data = ratedata, dv = rate, wid = cellid,
                   within = c(current, drug), between = type, perms = 1000)

Okay, great, the ezPerm call seems to work. I get back a P-value for each of the terms, including the interaction I'm interested in.
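(For context, this is roughly the kind of parametric run in which the sphericity violation shows up. As I understand the ez documentation, ezANOVA reports Mauchly's test alongside the ANOVA table, and that table also carries a generalized eta-squared column:)

# Sketch of the parametric check that flags the sphericity violation;
# ezANOVA returns Mauchly's test, and its ANOVA table includes a "ges"
# (generalized eta-squared) column that could serve as an effect size.
rateaov <- ezANOVA(data = ratedata, dv = rate, wid = cellid,
                   within = c(current, drug), between = type)
rateaov$`Mauchly's Test for Sphericity`
rateaov$ANOVA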

My question is this: how should I actually report this in a publication? I've got a P-value, and that's it - no test statistic, no effect size, nothing else. Should I just report the P-value and the number of permutations? Or should I also calculate some statistic or effect size? I'm afraid that in my discipline (neurophysiology) use of these tests is infrequent enough that I am not aware of any conventions for reporting them. I've looked at other posts on this site, but they don't seem to address this seemingly (?) simple issue. Any help would be appreciated.
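In case it clarifies what I mean by an effect size here, this is the kind of descriptive summary I could compute to accompany the permutation P-value (only a sketch, and it assumes my drug factor is coded "PRE"/"POST"):

# Per-cell mean rate in each drug condition (averaged over current levels),
# the POST - PRE change for each cell, and that change summarised by cell type.
cellmeans <- aggregate(rate ~ cellid + type + drug, data = ratedata, FUN = mean)
cellwide  <- reshape(cellmeans, idvar = c("cellid", "type"),
                     timevar = "drug", direction = "wide")
cellwide$change <- cellwide$rate.POST - cellwide$rate.PRE  # assumes levels "PRE" and "POST"
aggregate(change ~ type, data = cellwide,
          FUN = function(x) c(mean = mean(x), sd = sd(x)))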

  • Never report a P-value without a sensible display or report of the effect size. A simple graphical display of all of the before and after currents for each type of cell would probably be a good place to start for such a display. – Michael Lew Jun 08 '16 at 01:27
  • I'm trying to understand your experiments. You have a voltage-current curve for each neurone? Then you have the curve before and after the application of a single concentration of a drug? – Michael Lew Jun 08 '16 at 01:28
  • @MichaelLew: yes, I suppose a graphical display of the data could compensate for my lack of an explicit effect size. Regarding your second comment: that's roughly correct. I have a current-rate curve for each neuron (that is, spike rate at different injected current levels), before and after the application of a drug at a single concentration. Additionally, I have recorded from two different types of neurons (I call them A & B above, but for the record, they are virus-infected neurons vs. uninfected neurons). I've added a rough sketch of such a plot below. – tyrell_turing Jun 08 '16 at 19:18
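Following up on the graphical display suggested in the comments, this is roughly the plot I have in mind (a ggplot2 sketch only; the axis labels are my guesses at sensible wording):

library(ggplot2)

# Current-rate curve for every cell, PRE vs. POST, one panel per neuron type;
# thin lines are individual cells, thick lines are the group means.
ggplot(ratedata, aes(x = current, y = rate, colour = drug,
                     group = interaction(cellid, drug))) +
  geom_line(alpha = 0.3) +
  stat_summary(aes(group = drug), fun = mean, geom = "line", linewidth = 1.2) +
  facet_wrap(~ type) +
  labs(x = "injected current level", y = "firing rate")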
