Why does LOOCV (Leave-One-Out Cross-Validation) have less bias than K-fold Cross-Validation? Please explain with an example if possible
- There's a good explanation in [Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), section 5.1.4, on the bias-variance trade-off. There are some examples in chapter 5, though none of them focuses on your question. – lrnzcig Apr 05 '16 at 14:34
- @lrnzcig Yes, I have gone through that but did not find my answer. Any further help on this? – learner Apr 05 '16 at 16:22
- OK. Could you be more concrete? What is it you don't understand in that book? Do you understand the main point they are making: using less information when fitting the model, i.e. using a smaller percentage of the samples for training, leads to a worse estimate of the test error. Thus, "simply", the larger the percentage of samples you use for training, the less bias you have. – lrnzcig Apr 05 '16 at 16:48
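The point in the comment above can be sketched numerically: LOOCV trains each fold on n-1 samples, while K-fold with small K trains on a smaller fraction, so its error estimate tends to be pessimistically biased. A minimal sketch, assuming scikit-learn; the dataset, model, and all parameter choices here are illustrative assumptions, not from the thread:

```python
# Illustrative sketch (assumed setup): compare CV error estimates when
# each fold trains on nearly all samples (LOOCV) vs. only ~2/3 (3-fold).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

X, y = make_classification(n_samples=60, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000)

# LOOCV: each fold trains on n-1 = 59 samples -> nearly unbiased estimate.
loo_err = 1 - cross_val_score(model, X, y, cv=LeaveOneOut()).mean()

# 3-fold CV: each fold trains on only ~40 samples -> more pessimistic bias.
kf_err = 1 - cross_val_score(model, X, y, cv=KFold(3, shuffle=True, random_state=0)).mean()

print(f"LOOCV error estimate:  {loo_err:.3f}")
print(f"3-fold error estimate: {kf_err:.3f}")
```

On any one dataset the two estimates can land close together; the bias argument is about the average over datasets, since the smaller training sets systematically handicap the fitted model.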
- @lrnzcig I had this point in mind but was not sure whether I was interpreting it correctly. Now it seems I was right. Thanks for your suggestion. Are you on social media? Let's connect. – learner Apr 05 '16 at 17:01
- You're more than welcome. Take a look at my profile if you like. I'm not very original when choosing ids... – lrnzcig Apr 05 '16 at 17:15
- @lrnzcig I don't think Stack Exchange has any option to stay connected. Please take a look at this problem: http://stats.stackexchange.com/questions/205652/minimizing-sum-of-variances — hope you can solve it. – learner Apr 05 '16 at 17:31
- Note that in some cases (particularly with very small sample sizes), LOO can have a decidedly higher pessimistic bias than a well-chosen k for k-fold CV. E.g., if the classifier incorporates the frequencies of the classes, leaving one sample out will always cause the tested class to have fewer cases in the training set. As you ask for examples: http://hyperspec.r-forge.r-project.org/blob/Beleites2005.pdf – cbeleites unhappy with SX Apr 06 '16 at 09:11
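The extreme version of the effect described in that comment can be shown with a toy classifier that uses only class frequencies (this construction is a standard illustration, not taken from the linked paper):

```python
# Pathological case: a "majority class" classifier with perfectly balanced
# labels. Leaving one sample out always makes the held-out sample's class
# the minority in the training set, so LOOCV reports 100% error even
# though the classifier's true error rate is ~50%.
import numpy as np

y = np.array([0] * 10 + [1] * 10)  # balanced binary labels

errors = []
for i in range(len(y)):
    train = np.delete(y, i)                # leave sample i out
    pred = np.bincount(train).argmax()     # predict the majority class
    errors.append(pred != y[i])

loocv_error = np.mean(errors)
print(f"LOOCV error of majority classifier: {loocv_error:.2f}")  # 1.00
```

Here k-fold CV with a stratified split would give a far less distorted estimate, which is the sense in which LOO can carry the larger pessimistic bias.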
- You can check out this answer: https://stats.stackexchange.com/questions/154830/10-fold-cross-validation-vs-leave-one-out-cross-validation – DaveR May 24 '18 at 10:05