
I recognize this may be partly a question for qualitative researchers, but I'm not sure which Stack Exchange is theirs.

The basic context is that we have a series of text excerpts from newspaper articles, and three different researchers each conducted their own initial open coding to develop proposed categories for further analysis. For simplicity, let us say that each researcher applied only one code per excerpt.

I have two questions:

  1. First, as I understand it, most interrater reliability statistics assume the coders are using the same set of codes. Is there an established statistic for comparing coders who developed different code sets and assessing their amount of agreement? By "agreement," I don't necessarily mean that the codes match, but rather that the coders generally discriminated the excerpts along similar lines (I have put a small illustrative sketch after this list).
  2. Second, is there an established research process for reconciling the differing coding schemes, or any examples of people who have approached it this way before?
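
To make concrete what I mean in (1), here is a minimal sketch of the kind of comparison I have in mind, assuming Python with pandas and scikit-learn; the coder names and codes are made up for illustration. It treats each coder's codes as a partition of the excerpts and asks how similar the partitions are, ignoring the label names themselves.

    import pandas as pd
    from sklearn.metrics import adjusted_rand_score

    # One row per excerpt, one column per coder; each cell is that coder's code.
    codes = pd.DataFrame({
        "coder_a": ["crime", "economy", "economy", "health", "crime", "health"],
        "coder_b": ["law",   "jobs",    "jobs",    "care",   "law",   "jobs"],
    })

    # Cross-tabulate the two coding schemes: if excerpts grouped together by
    # coder A also tend to sit together under coder B, the table is concentrated.
    print(pd.crosstab(codes["coder_a"], codes["coder_b"]))

    # Adjusted Rand index: compares the two partitions regardless of label names
    # (1 = identical groupings, values near 0 = chance-level agreement).
    print(adjusted_rand_score(codes["coder_a"], codes["coder_b"]))

The adjusted Rand index is just one way I could think of to operationalize "discriminating along similar lines"; part of my question is whether this or something else is the accepted statistic for this situation.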

Thanks! Please let me know if I am off track.
