I have a sequence of bytes. My null hypothesis is that each byte is drawn independently from a uniform distribution. What is the correct statistical tool for testing such a hypothesis?
My first thought was Pearson's chi-squared test. However, apparently that only works reliably when the expected frequencies are reasonably large (depending on who you ask, a minimum expected count of five or ten per cell). Given that each byte has 256 possible values, that already means at least $256 \times 10 = 2560$ bytes before the approximation can be trusted; you're going to need a lot of bytes!
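For concreteness, here is roughly what I have in mind. This is only a sketch (it assumes NumPy and SciPy, and `data` is a stand-in for my real byte sequence):

```python
import numpy as np
from scipy.stats import chisquare

# Stand-in for my real data: 4096 bytes from NumPy's generator.
data = np.frombuffer(np.random.bytes(4096), dtype=np.uint8)

# Observed frequency of each of the 256 possible byte values.
counts = np.bincount(data, minlength=256)

# Under the null hypothesis every value is equally likely: n/256 per cell.
expected = np.full(256, len(data) / 256)

stat, p = chisquare(counts, f_exp=expected)  # 255 degrees of freedom
print(f"chi-squared = {stat:.1f}, p = {p:.4f}")
```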
Wikipedia says something about trying Fisher's exact test instead, but it only explains how to do that for a $2\times2$ contingency table; I don't really understand how to apply it to 256 categories.
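For reference, the $2\times2$ case that I do understand looks something like this (again just a sketch using SciPy; the counts are made up):

```python
from scipy.stats import fisher_exact

# A made-up 2x2 contingency table; fisher_exact handles exactly this shape.
table = [[12, 5],
         [3, 10]]
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.4f}")
```

But my data is a single row of 256 byte counts, not a $2\times2$ table, so I don't see how this generalizes.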
Is a chi-squared test a good idea? Are there tools which will work with less data? What other approaches can I try?
I'm less interested in certifying perfect randomness, and more interested in detecting data that is plainly and obviously not random.
Note: I'm looking for an algorithm that a machine can perform. Answers such as "draw a graph and eyeball it" don't help me. Humans are already very good at seeing non-random patterns anyway; my difficulty is teaching a machine to do this automatically.