
I have data in four categories, but they are not completely independent. To explain the data: I looked at 40 Intelligent Tutoring Systems (ITSs), determined which approach each uses (model tracing, constraints, example tracing, or tests), and counted how often each diagnoses various aspects.

I'll provide a subset of the data:

Model tracing tutors 23

  • Correctness 23
  • Buggy rules 14
  • Type of error 12
  • Difference 2
  • Preference 2

Constraint-based tutors 11

  • Correctness 11
  • Type of error 8
  • Buggy rules 5
  • Preference 3

Looking at the data, it seems that the groups differ. For example, model tracing ITSs appear more likely to diagnose buggy rules than constraint-based ITSs (14 of 23 vs. 5 of 11). I'm not sure which test to use to determine whether such a difference is significant.

My problem is that the categories partially overlap: there are 5 ITSs that are in both the model tracing and the constraint-based group. Another problem is that the groups do not all cover the same aspects; for example, the constraint-based tutors do not diagnose Difference.
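To make the comparison concrete, here is a minimal sketch (in Python, using scipy, purely as an illustration) of the naive test I know how to do: Fisher's exact test on the buggy-rules counts. It treats the two groups as independent samples, which they are not because of the 5 overlapping ITSs, so I doubt it is actually valid here.

    from scipy.stats import fisher_exact

    # Naive 2x2 comparison: does the proportion of tutors that diagnose
    # "buggy rules" differ between model tracing and constraint-based ITSs?
    #        diagnoses buggy rules, does not
    table = [[14, 23 - 14],   # model tracing tutors (23 in total)
             [5, 11 - 5]]     # constraint-based tutors (11 in total)

    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")

    # Caveat: this assumes the two groups are independent samples, but 5 ITSs
    # belong to both groups, so the assumption is violated.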

Could anyone help me out?

Renate
  • I have a somewhat similar question with no answers: https://stats.stackexchange.com/questions/350754/dealing-with-multiple-cases-per-subject – Viktor Jun 20 '18 at 14:37

0 Answers