I see you asking two different questions here. The first is to test the null hypothesis that the five doctors' conclusions have equal variances. The second is to test whether the five doctors' conclusions are concordant with a standard of truth (in other words, whether they correctly identify cancer). Ultimately, it seems you're interested in the latter, not the former.
Testing Variances
To be honest, I'm not sure this applies to your situation. You imply your doctors are giving a binary response: Cancer or No Cancer. I don't think comparing the variances of this binary response will tell you whether the doctors are correctly identifying cancer.
But to answer this question directly: the test for equal variances of two groups (in your case, 2 doctors) is the F-test, which calculates an F-statistic, $F = s_1^2/s_2^2$, that is compared to a critical value from the F distribution with the appropriate degrees of freedom to assess statistical significance. To extend that to many groups (or many doctors), Bartlett's Test can be used, which tests the null hypothesis that all variances are equal, the alternative being that at least one variance differs from the others. However! Both tests are very sensitive to the normality assumption, and binary responses are not normal. If the doctors give a response that can reasonably be assumed normal (say, reported tumor size), then these tests can be used. If you only have the binary response, this wouldn't be a good test. I put this here to directly answer your question about comparing variances.
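If you do have a continuous reading to work with, a minimal sketch of both tests in Python might look like the following (the tumor-size readings here are simulated purely for illustration, not from your data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
doctor1 = rng.normal(20, 5, size=100)   # hypothetical tumor-size readings (mm)
doctor2 = rng.normal(20, 7, size=100)
doctor3 = rng.normal(20, 5, size=100)

# Two-group F-test: F = s1^2 / s2^2, compared to the F distribution
# with (n1 - 1, n2 - 1) degrees of freedom.
f_stat = np.var(doctor1, ddof=1) / np.var(doctor2, ddof=1)
df1, df2 = len(doctor1) - 1, len(doctor2) - 1
p_two_sided = 2 * min(stats.f.cdf(f_stat, df1, df2),
                      stats.f.sf(f_stat, df1, df2))
print(f"F = {f_stat:.3f}, p = {p_two_sided:.4f}")

# Bartlett's test extends this to several groups (e.g., all five doctors);
# the null hypothesis is that all variances are equal.
bartlett_stat, bartlett_p = stats.bartlett(doctor1, doctor2, doctor3)
print(f"Bartlett chi-square = {bartlett_stat:.3f}, p = {bartlett_p:.4f}")
```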
Concordance
What I think you're more interested in is testing whether the doctors are concordant with some standard of known truth (Cancer or Not). Sadly, with cancer, the standard of truth is often whether the patient ultimately died from said cancer. But whatever the source, you seem to have some known state of cancer or not. For this, the appropriate class of tests is tests for Interrater Reliability. Specifically, I'd consider Cohen's Kappa, which is a pairwise comparison of each doctor to the standard of truth. You would compare each doctor against the standard of truth and classify each conclusion as concordant or discordant. For example, consider three patients whose true status is known:
{Cancer, Not, Cancer}.
Doctor 1 could say {Cancer, Cancer, Cancer}. This leads to 2 concordant conclusions and 1 discordant conclusion.
Doctor 2 could say {Not, Cancer, Not}. This leads to 0 concordant conclusions and 3 discordant conclusions.
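As a small illustration, here's how that toy example could be tallied in Python (the labels are just the ones from the example above):

```python
# Tally concordant vs. discordant calls for the toy example.
truth   = ["Cancer", "Not", "Cancer"]
doctor1 = ["Cancer", "Cancer", "Cancer"]
doctor2 = ["Not", "Cancer", "Not"]

def tally(doctor, truth):
    # A call is concordant when the doctor agrees with the standard of truth.
    concordant = sum(d == t for d, t in zip(doctor, truth))
    discordant = len(truth) - concordant
    return concordant, discordant

print(tally(doctor1, truth))  # (2, 1)
print(tally(doctor2, truth))  # (0, 3)
```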
Doing this comparison for all 10000 cases creates a 2x2 table of concordance and discordance for each doctor against the standard of truth. Cohen's kappa then compares the observed probability of concordance to the probability of concordance expected by chance, producing a kappa value that ranges from -1.0 (complete discordance) through 0.0 (chance-level agreement) to 1.0 (complete concordance). A kappa of 0.0 is what you would get if the doctor's calls and the standard of truth were assigned at random. Finally, statistical significance usually isn't reported for kappa. What matters most here is the value of kappa itself, which indicates how well each doctor agrees with the standard of truth; it doesn't take much to be significantly different from 0.0 (the usual null hypothesis), especially with 10000 samples. I would report the kappa value with a confidence interval for each doctor, along with the average kappa across all 5.
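As a sketch of that reporting, here's one way to do it in Python, using `cohen_kappa_score` from scikit-learn with a percentile bootstrap for the confidence intervals. The data, the 85% agreement rate, and the variable names are all made up for illustration, and a bootstrap is just one reasonable way to get a CI for kappa:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(42)
n = 10_000
truth = rng.integers(0, 2, size=n)  # known status: 1 = Cancer, 0 = Not

# Simulate five doctors who each agree with the truth ~85% of the time.
calls = {f"doctor_{i}": np.where(rng.random(n) < 0.85, truth, 1 - truth)
         for i in range(1, 6)}

def kappa_ci(y_true, y_pred, n_boot=1000, alpha=0.05):
    """Point estimate and percentile-bootstrap CI for Cohen's kappa."""
    point = cohen_kappa_score(y_true, y_pred)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(y_true), size=len(y_true))
        boot[b] = cohen_kappa_score(y_true[idx], y_pred[idx])
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, lo, hi

kappas = []
for name, y_pred in calls.items():
    k, lo, hi = kappa_ci(truth, y_pred)
    kappas.append(k)
    print(f"{name}: kappa = {k:.3f}  (95% CI {lo:.3f} to {hi:.3f})")

print(f"average kappa across the 5 doctors: {np.mean(kappas):.3f}")
```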