I tested an open-source people-counting sensor against actual observed people counts. I only tested at one location, over 10 days, and I divided the location into 8 zones. I expected the error rate, calculated as (C_sensor - C_observed) / C_observed, to be roughly uniform across zones and to fall at least within (-1, 1). However, I noticed that the sensor significantly undercounts (sensor data is mostly 0) in all zones except one, where it significantly overcounts (by as much as 50x). Here is a link to a copy of the raw data for more info: Data
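For reference, this is roughly how I compute the per-zone error rate (a minimal pandas sketch; the file name and column names are placeholders for my actual data layout):

```python
import pandas as pd

# Hypothetical file with one row per zone per day: zone, sensor, observed
df = pd.read_csv("counts.csv")

# Error rate per zone: (C_sensor - C_observed) / C_observed,
# aggregated over the 10 days of observations.
per_zone = df.groupby("zone")[["sensor", "observed"]].sum()
per_zone["error_rate"] = (per_zone["sensor"] - per_zone["observed"]) / per_zone["observed"]
print(per_zone)
```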
Anyway, is there a statistical test I can use to show that the sensor is broken (i.e. that the sensor counts aren't related to the observed counts)?
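To make the question concrete, this is the kind of association check I had in mind, e.g. a Spearman rank correlation via scipy (a sketch only, under the same placeholder column names as above; I'm not sure it's the right test given the zero-heavy sensor data):

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical file with one row per zone per day: zone, sensor, observed
df = pd.read_csv("counts.csv")

# Pooled across zones and days; H0: no monotonic association between
# sensor counts and observed counts. A per-zone version could be run on
# each group, but with only 10 days per zone the power would be limited.
rho, p_value = spearmanr(df["sensor"], df["observed"])
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```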