
Suppose there is a device that can measure binary data (0 and 1) with 90 percent accuracy. I have used this device and measured the same data twice (or, in another scenario, three times). If the device shows 0 in both (or all three) measurements, what is the overall accuracy of the repeated measurement, i.e. what is the probability that the data is actually 0?

Here is a possible solution, but I am not sure whether it is correct (or whether it has a name).

If measuring twice: $$A = 0.9 + 0.9 \cdot (1 - 0.9) = 0.99$$

If measuring three times: $$B = A + 0.9 \cdot (1 - A) = 0.999$$
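For concreteness, here is the same arithmetic as a small Python sketch (it only evaluates the formula I proposed above; I am not claiming this is the right probabilistic model):

```python
# A sketch that just evaluates the formula proposed above
# (it does not claim this is the correct probabilistic model).
accuracy = 0.9

A = accuracy + accuracy * (1 - accuracy)  # two measurements: 0.99
B = A + accuracy * (1 - A)                # three measurements: 0.999

print(A, B)
```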

user1436187

1 Answer


Sorry, but your solution is incorrect.

One way to get 90% accuracy would be this: the underlying process yields 1 with probability 0.1 and 0 with probability 0.9, i.i.d., and your device outputs 0 in every case. You now have a (completely useless) device with 90% accuracy.

If you measure an observation multiple times, you will always get a 0, because that is your device's constant output.

But that doesn't change the fact that the probability that the data actually is 0 is still 0.9, no matter how often you measure.
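To make this concrete, here is a small Python simulation sketch of that constant-0 device (assuming, as above, that the data is 0 with probability 0.9 and the device always outputs 0):

```python
import numpy as np

# A minimal simulation of the constant-0 device described above, under the
# stated assumptions: the data is 1 with probability 0.1 and 0 with
# probability 0.9, i.i.d., and the device always outputs 0.
rng = np.random.default_rng(42)
n = 1_000_000

truth_is_one = rng.random(n) < 0.1   # True where the data is actually 1
prediction = np.zeros(n, dtype=int)  # the device's constant output: 0

accuracy = np.mean(prediction == truth_is_one.astype(int))
print(f"accuracy of the constant-0 device: {accuracy:.3f}")  # ~0.900

# Every observation yields 0 on the second (and third) measurement as well,
# so conditioning on "the device said 0 every time" does not filter anything:
p_zero_given_repeated_zeros = np.mean(~truth_is_one)
print(f"P(data is 0 | device said 0 every time): {p_zero_given_repeated_zeros:.3f}")  # ~0.900
```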


This is related to the fact that accuracy is not the best measure for assessing classification results.

Stephan Kolassa