I have a dataset of ~7,500 blood tests from ~2,500 individuals. I'm trying to find out whether variability in the blood tests increases or decreases with the time between two tests. For example: I draw your blood for a baseline test, then immediately draw a second sample. Six months later, I draw a third sample. One might expect the difference between the baseline and the immediate repeat to be smaller than the difference between the baseline and the six-month test.
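In case it clarifies the setup, here is roughly how I construct the test-retest pairs. This is just a Python/pandas sketch; the file name and column names (`person_id`, `draw_date`, `value`) are placeholders, not my actual schema.

```python
import pandas as pd

# Hypothetical long-format data, one row per blood draw; the file name and
# column names (person_id, draw_date, value) are placeholders.
df = pd.read_csv("blood_tests.csv", parse_dates=["draw_date"])

pairs = []
for pid, grp in df.sort_values("draw_date").groupby("person_id"):
    grp = grp.reset_index(drop=True)
    baseline = grp.iloc[0]                      # first draw is the baseline
    for _, retest in grp.iloc[1:].iterrows():   # compare every later draw to it
        pairs.append({
            "person_id": pid,
            "days_between": (retest["draw_date"] - baseline["draw_date"]).days,
            "abs_diff": abs(retest["value"] - baseline["value"]),
        })
pairs = pd.DataFrame(pairs)
```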
Each point on the plot below represents one pair of tests: X is the number of days between the two draws, and Y is the size of the difference between the two results. As you can see, the tests aren't evenly distributed along X; the study wasn't really designed to address this question. Because the points are so heavily stacked near the mean, I've included 95% (blue) and 99% (red) quantile lines, based on 28-day windows. These are obviously pulled around by the more extreme points, but you get the idea.
![Scatter of test differences by days between tests, with 95% and 99% quantile lines](http://a.imageshack.us/img175/6595/diffsbydays.png)
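For what it's worth, the quantile lines were computed along these lines, building on the `pairs` data frame sketched above. Centring each 28-day window on the day in question is just one choice; non-overlapping windows would be another.

```python
import numpy as np
import matplotlib.pyplot as plt

# Rolling 95% / 99% quantiles of the absolute difference over 28-day windows,
# centred on each day (an assumption about how the windows are placed).
days = np.arange(pairs["days_between"].min(), pairs["days_between"].max() + 1)
q95 = [pairs.loc[pairs["days_between"].between(d - 14, d + 13), "abs_diff"].quantile(0.95)
       for d in days]
q99 = [pairs.loc[pairs["days_between"].between(d - 14, d + 13), "abs_diff"].quantile(0.99)
       for d in days]

plt.scatter(pairs["days_between"], pairs["abs_diff"], s=5, alpha=0.3)
plt.plot(days, q95, color="blue", label="95% quantile, 28-day window")
plt.plot(days, q99, color="red", label="99% quantile, 28-day window")
plt.xlabel("Days between tests")
plt.ylabel("Size of difference between tests")
plt.legend()
plt.show()
```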
It looks to me like the variability is fairly stable. If anything, it's higher when the test is repeated within a short period, which is terribly counterintuitive. How can I test whether variability changes with the time between tests in a systematic way, accounting for the varying n at each time point (and some periods with no tests at all)? Your ideas are greatly appreciated.
Just for reference, this is the distribution of the number of days between test and retest:
![Histogram of the number of days between test and retest](http://a.imageshack.us/img697/6572/testsateachtimepoint.png)