In the general case, when collecting data points at disparate points in time (such as once every day or two), under what conditions can a single low data point be considered evidence of a downward trend in the data?
For instance, consider these data points:
Day  Value
  1  17.5
  2  17.8
  3  21.7  (treatment begins)
  4  13.5  (treatment ends)
  5  16.3
  6  17.1
  7  17.3
  9  17.4
 11  16.7
 13  16.5
 15  13.5
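As an illustrative sketch (not part of the original question), one common heuristic for deciding whether the last point is merely noise is to compare it against the spread of the preceding stable measurements, e.g. via a z-score. This assumes roughly independent, approximately normal measurement noise around a stable baseline, which may well not hold for clinical data; the choice of days 5–13 as the "baseline" window is also my own assumption.

```python
import statistics

# Values from the table above, in day order (days 1-15)
values = [17.5, 17.8, 21.7, 13.5, 16.3, 17.1, 17.3, 17.4, 16.7, 16.5, 13.5]

# Assumed baseline: the post-treatment measurements (days 5-13),
# excluding the final low point we want to assess
baseline = values[4:-1]

mean = statistics.mean(baseline)    # sample mean of the baseline
sd = statistics.stdev(baseline)     # sample standard deviation (n-1)

# z-score of the final measurement relative to the baseline
z = (values[-1] - mean) / sd
print(f"mean={mean:.2f}, sd={sd:.2f}, z={z:.2f}")  # z ≈ -7.5
```

A z-score this far below the baseline suggests the final point is unlikely to be ordinary measurement noise, but a single such value still cannot distinguish a genuine downward trend from a one-off measurement error; only a repeated measurement can do that, which is essentially the crux of the question.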
I am interested in the general sense of the question, not the specifics of this particular case. A single low data point could, in my opinion, have any number of causes, including measurement error. When is it considered acceptable to rely on a single low data point?
I am wary of detailing the specifics of this case, as I would like to keep the question general; the following is added only for context, and I have no objection to removing it if the community deems it appropriate. The specifics of this case are the measurement of bilirubin in an infant's blood, in mg/dL. When the level passes 20 mg/dL, the infant is in danger of kernicterus, so any child whose bilirubin level approaches or exceeds 15 must have it monitored regularly via an inexpensive, routine blood test that returns results immediately.

My concern is that after only a single measurement below 15 mg/dL, hospital procedure states that there is no longer any need for continued measurements. I am not a doctor, so I have little standing to try to convince them to change their procedures, but their reliance on a single data point seems suspect to me. Is this considered acceptable science, if not in the medical field, then in any other field? Under what conditions do statisticians accept a single data point as indicative of a new trend in the data?