I think most people leave high school with a very limited understanding of basic statistical concepts. Part of this is probably due to the intimidating-looking formulas for things like standard deviation and standard error (all the sigmas, indices, and so on).
I've always thought it would make much more sense to teach mean absolute deviation when introducing students to quantifying dispersion. It turns out I'm not alone; this education researcher seems to feel the same way:
http://sru.soc.surrey.ac.uk/SRU64.pdf http://dro.dur.ac.uk/12187/
This is easy enough, but it leads to the next obvious question: when an elementary school student is running a simple experiment and comparing means between groups, is there a simpler way for the student to quantify how likely (or unlikely) it is that a random sample would show the difference they observed? A p-value would be too complicated to explain conceptually and too opaque computationally. I'd like to use something like Cohen's d, but I can't point to a standard set of effect-size benchmarks based on mean absolute deviation that would let the student say their difference is "small" or "large".
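For concreteness, the kind of effect size I'm imagining (my own ad hoc analogue of Cohen's d, not a standard statistic) would be something like

$$ d_{\text{MAD}} = \frac{\bar{x}_1 - \bar{x}_2}{\tfrac{1}{2}\left(\text{MAD}_1 + \text{MAD}_2\right)} $$

where $\text{MAD}_i$ is the mean absolute deviation within group $i$; but I have no reference values for what counts as "small" or "large" here.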
The idea is that I want the student to see that the differences between their groups may appear small or large, but that this impression should be backed up by a numerical procedure. Maybe something like comparing the difference between the group means to the largest difference within a group, or to the typical within-group deviation? A rough sketch of what I mean is below. Does anyone have any good ideas?
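To make the kind of procedure I'm imagining concrete, here is a minimal sketch in Python. The data and the particular ratios are just placeholders for illustration, not a proposal for a standard statistic:

```python
# Made-up measurements from a simple two-group classroom experiment.
group_a = [12.1, 13.4, 11.8, 12.9, 13.0]
group_b = [14.2, 15.1, 13.9, 14.8, 14.5]

def mean(xs):
    return sum(xs) / len(xs)

def mean_abs_dev(xs):
    """Mean absolute deviation: average distance from the group's mean."""
    m = mean(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

# Difference between the two group means.
between = abs(mean(group_a) - mean(group_b))

# Option 1: compare to the largest difference seen within a group (the range).
largest_within = max(max(g) - min(g) for g in (group_a, group_b))

# Option 2: compare to the typical within-group spread (average MAD).
typical_within = (mean_abs_dev(group_a) + mean_abs_dev(group_b)) / 2

# The student-friendly question: is the difference between groups big
# compared to the ordinary variation inside a group?
print("between-group difference:", between)
print("ratio to largest within-group difference:", between / largest_within)
print("ratio to typical within-group deviation:", between / typical_within)
```

Either ratio is something a student could compute by hand, but I don't know how to give them a rule of thumb for interpreting it.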