I got confused over a simple concept. Imagine I have a binary classifier with 50% accuracy: out of 10 samples to be classified as "y" or "n", it predicts 5 of them correctly.
Now imagine that I instead guess the category of each sample uniformly at random (a 50% chance of getting each one right).
For the random guesser, the probability of getting exactly 5 out of 10 right is binomial:
P(exactly 5 successes) = 10!/(5!·5!) · (0.5)^5 · (0.5)^5 = 252/1024 ≈ 0.246
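As a sanity check on that arithmetic, here is a short sketch computing the same binomial probability (the variable names are mine, just for illustration):

```python
from math import comb

# Probability that a random guesser (p = 0.5 per sample) gets
# exactly k = 5 of n = 10 samples right: a binomial probability.
n, k, p = 10, 5, 0.5
prob = comb(n, k) * p**k * (1 - p)**(n - k)
print(round(prob, 4))  # 0.2461
```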
Are these two comparable?
This came out of a comment from a friend who said "your model is just as good as a random guess". I want to prove him wrong, but I'm not sure how to set up the comparison.
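One common way to put it together is an exact one-sided binomial test: under the null hypothesis that the model guesses randomly, compute the probability of it getting at least as many samples right as it did. A minimal sketch (the helper name `binom_pvalue_greater` is mine; `scipy.stats.binomtest` does the same thing):

```python
from math import comb

def binom_pvalue_greater(k, n, p=0.5):
    """One-sided exact binomial test: P(X >= k) if the model guesses randomly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A classifier that gets 5 of 10 right is indistinguishable from chance:
print(round(binom_pvalue_greater(5, 10), 3))  # 0.623
```

A p-value this large means 5/10 gives no evidence against the "random guess" hypothesis; to show the model beats chance, it would need to be right on noticeably more than half of a larger test set.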