Possible Duplicate:
Comparing two classifier accuracy results for statistical significance with t-test
I coded two Naive Bayes classifiers (using different features of the same data) and evaluated them with incremental k-fold cross-validation.
As output I have computed, for each of the two NBCs, the average accuracy and standard error at each of the k training set sizes.
Looking at these results, I formed the hypothesis that one of the two classifiers performs better (in terms of accuracy).
How can I assess whether my results are statistically significant, i.e., whether I should reject the null hypothesis in favor of my hypothesis?
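To make the setup concrete, here is a rough sketch of what I imagine the comparison could look like, based on the t-test idea in the linked question. The dataset (iris), the `GaussianNB` classifier, the two column subsets standing in for my two feature sets, and the use of `scipy.stats.ttest_rel` on per-fold accuracies are just my own illustration, and the sketch uses plain 10-fold CV at a single training set size rather than my incremental setup:

    # Illustrative sketch only: pairs the per-fold accuracies of two NB classifiers
    # trained on different feature subsets of the same data, then runs a paired t-test.
    import numpy as np
    from scipy import stats
    from sklearn.datasets import load_iris
    from sklearn.model_selection import StratifiedKFold
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)

    # Two "different feature sets" of the same data (here just two column subsets).
    features_a, features_b = [0, 1], [2, 3]

    kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    acc_a, acc_b = [], []
    for train_idx, test_idx in kfold.split(X, y):
        for feats, accs in [(features_a, acc_a), (features_b, acc_b)]:
            clf = GaussianNB().fit(X[np.ix_(train_idx, feats)], y[train_idx])
            accs.append(clf.score(X[np.ix_(test_idx, feats)], y[test_idx]))

    # Paired t-test over the per-fold accuracies (same folds for both classifiers).
    t_stat, p_value = stats.ttest_rel(acc_a, acc_b)
    print(f"mean acc A = {np.mean(acc_a):.3f}, mean acc B = {np.mean(acc_b):.3f}")
    print(f"paired t = {t_stat:.3f}, p = {p_value:.4f}")

Is something along these lines the right way to test my hypothesis, or is a different test needed given the cross-validation setup?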
Thanks for any help :)