
I am working on a model in R, and below is the output of `confusionMatrix` I got:

              Accuracy : 0.7952          
                95% CI : (0.7775, 0.8122)
   No Information Rate : 0.8388          
   P-Value [Acc > NIR] : 1               

                 Kappa : 0.1149          
Mcnemar's Test P-Value : 5.345e-09       

           Sensitivity : 0.18605         
           Specificity : 0.91229         
        Pos Pred Value : 0.28959         
        Neg Pred Value : 0.85363         
            Prevalence : 0.16120         
        Detection Rate : 0.02999         
  Detection Prevalence : 0.10356         
     Balanced Accuracy : 0.54917         

Can someone explain the difference between `P-Value [Acc > NIR] : 1` and `Mcnemar's Test P-Value` in the result above?

  • I think you will find the information you need in the linked thread. Please read it. If it isn't what you want / you still have a question afterwards, come back here & edit your question to state what you learned & what you still need to know. Then we can provide the information you need without just duplicating material elsewhere that already didn't help you. – gung - Reinstate Monica Mar 26 '19 at 14:14

1 Answer


References:

confusion matrix: https://en.wikipedia.org/wiki/Confusion_matrix

McNemar's test: https://en.wikipedia.org/wiki/McNemar%27s_test

I believe that p-value comes from a one-sided binomial test with the null hypothesis accuracy ≤ NIR, where the NIR (No Information Rate) is the accuracy you would get by always predicting the most frequent class. A p-value of 1 means there is no evidence that the model's accuracy exceeds the NIR.
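The exact counts caret used aren't shown in the output, but the idea can be reproduced with `binom.test`. The numbers below are hypothetical, back-calculated from the reported proportions under an assumed sample size of n = 2000:

```r
# Hypothetical counts reconstructed from the output above (assumed n = 2000)
n <- 2000
correct <- round(0.7952 * n)   # ~1590 correctly classified samples
nir <- 0.8388                  # No Information Rate

# One-sided binomial test: H0: accuracy <= NIR  vs  H1: accuracy > NIR
res <- binom.test(correct, n, p = nir, alternative = "greater")
res$p.value                    # close to 1: no evidence accuracy > NIR
```

Because the observed accuracy (0.7952) is below the NIR (0.8388), the one-sided p-value is essentially 1, matching the output.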

How to understand McNemar's test of confusion matrix: https://machinelearningmastery.com/mcnemars-test-for-machine-learning/

Here, the very small McNemar's p-value indicates that the two kinds of misclassification (false positives vs. false negatives) occur at significantly different rates, i.e. the model's errors are asymmetric. Meanwhile, the binomial p-value of 1 says the model's accuracy is no better than simply predicting the majority class. In short, the model performs poorly; the very low sensitivity (0.186) combined with high specificity (0.912) suggests the problem may be class imbalance, but I am not sure.
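McNemar's test only uses the two off-diagonal cells of the 2×2 table, i.e. the two kinds of error. A sketch, using a hypothetical confusion matrix back-calculated from the reported proportions (again assuming n = 2000):

```r
# Hypothetical confusion matrix consistent with the stats above (assumed n = 2000):
#               Reference
# Prediction    Pos   Neg
#       Pos      60   147   # TP = detection rate * n; FP implied by PPV
#       Neg     262  1531   # FN implied by sensitivity; TN by specificity
cm <- matrix(c(60, 262, 147, 1531), nrow = 2,
             dimnames = list(Prediction = c("Pos", "Neg"),
                             Reference  = c("Pos", "Neg")))

# McNemar's test compares the two error counts (147 false positives
# vs. 262 false negatives); a tiny p-value means the errors are asymmetric
mcnemar.test(cm)$p.value
```

With these reconstructed counts the p-value is similarly tiny to the one reported, driven by the large gap between the 147 false positives and 262 false negatives.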
