Is it possible to estimate the value at a point by averaging the average rates of the ranges that contain that point, and if so, how can the uncertainty of that estimate be accurately determined?
I think my question can be best asked with a hypothetical example. Let's say we have a train that runs on a 100km track several times a day. Before it leaves the station, two positions are randomly selected from the range (1,100); when the train reaches each of those points on the track, the conductor writes down the time. That is, if on one run the numbers selected are 20 and 30, the conductor writes down the times at which the train passes the 20km marker and the 30km marker.
Now let's say that after 1000 runs, every segment containing the 50km marker has an average speed of 150km/h. So maybe it took 4 minutes to go from 45km to 55km, 8 minutes to go from 48km to 68km, 2 minutes to go from 47km to 52km, 22 minutes to go from 0km to 55km, etc.
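To make the setup concrete, here is a rough simulation sketch of the sampling scheme. The wobbly speed profile and all parameter values are made up purely for illustration (it does not reproduce the exact "150km/h for every segment" premise, it just shows what the data look like):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_run(n_points=1001):
    """One run: an illustrative speed profile over the 100km track.
    The wobble around 150km/h is an arbitrary assumption."""
    x = np.linspace(0, 100, n_points)
    speed = 150 + 20 * np.sin(2 * np.pi * x / 100 + rng.uniform(0, 2 * np.pi))
    return x, speed

def segment_average_speed(x, speed, a, b):
    """Average speed over [a, b]: total distance / total travel time,
    approximated on the grid (harmonic mean of the grid speeds)."""
    v = speed[(x >= a) & (x <= b)]
    return len(v) / np.sum(1.0 / v)

records = []
for _ in range(1000):
    x, speed = simulate_run()
    a, b = np.sort(rng.uniform(1, 100, size=2))
    if a < 50 < b:  # keep only segments that span the 50km marker
        records.append(segment_average_speed(x, speed, a, b))

avgs = np.array(records)
print(f"{len(avgs)} of 1000 segments contain the 50km marker")
print(f"naive mean of their average speeds: {avgs.mean():.1f} km/h")
print(f"naive standard deviation:           {avgs.std(ddof=1):.1f} km/h")
```

The "naive" numbers at the end simply pool all the segment averages as if they were equally precise, which is exactly what I suspect is wrong.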
If we had measured the instantaneous speed as the train passed the 50km marker on each run, the mean of those measurements would be 150km/h with a standard deviation of 0. Instead, what we actually have is 1000 measurements with hugely varying precision.
Is it possible to measure the uncertainty of this mean value? What other factors could be used to determine the level of uncertainty?
I think an alternative way of asking this would be: how does measurement uncertainty affect the mean and standard deviation? It seems it should be possible to calculate a fairly accurate measure of the uncertainty, or at the very least upper and lower bounds on it. For example, if we know the maximum possible speed of the train and its maximum rates of acceleration and deceleration, we could reasonably bound each measurement. For any segment, it would be possible to calculate the maximum and minimum speed the train could have had within that segment based on those limits; e.g., if the conductor slammed the brakes at the instant of the first measurement and we knew the exact deceleration curve of the train, we could work out the extreme speeds the train could possibly have had at the 50km mark.
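Here is a crude version of that idea. Over any segment, the train's speed must equal the segment's average speed at some instant, and it cannot drift away from that value faster than the maximum acceleration allows, so each segment gives a hard interval for the speed at the 50km mark. The acceleration and top-speed figures below are pure placeholders:

```python
def speed_bounds_at_marker(a_km, b_km, travel_time_h,
                           max_accel=3600.0,   # assumed |dv/dt| limit: 3600 km/h per hour (1 km/h per second)
                           max_speed=200.0):   # assumed top speed of the train, km/h
    """Hard bounds on the instantaneous speed anywhere inside [a_km, b_km].

    The speed equals the segment's average speed at some instant during the
    traversal, and can change by at most max_accel * travel_time, so it stays
    within that distance of the average.  The limits are illustrative only."""
    v_avg = (b_km - a_km) / travel_time_h
    drift = max_accel * travel_time_h
    return max(0.0, v_avg - drift), min(max_speed, v_avg + drift)

# 1km covered in 1/150 h vs. 98km covered in 98/150 h (both average 150km/h)
print(speed_bounds_at_marker(49.5, 50.5, 1.0 / 150))   # tight: roughly (126, 174)
print(speed_bounds_at_marker(1.0, 99.0, 98.0 / 150))   # uninformative: (0, 200)
```

The short segment pins the speed down fairly tightly, while the long one tells us almost nothing, which matches the intuition below.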
At least intuitively, it seems that if the times were recorded at 49.5km and 50.5km, then the train was probably going about 150km/h as it crossed the 50km mark. However, if the times were recorded at the 1km marker and the 99km marker, anything could have happened in between those two points. How could this per-measurement uncertainty be accurately determined and then used when combining all the measurements into a mean and standard deviation?
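One thing I considered (and would like to know whether it is legitimate) is to turn each segment's traversal time into a rough per-measurement standard deviation and then combine the runs with inverse-variance weights, so that the 49.5-50.5km segment dominates and the 1-99km segment contributes almost nothing. All the numbers below are made up, and the acceleration limit is the same placeholder as above:

```python
import numpy as np

# Made-up runs: (start_km, end_km, travel_time_hours), all spanning the 50km marker
runs = [(49.5, 50.5, 1.0 / 150),
        (45.0, 55.0, 10.0 / 150),
        (48.0, 68.0, 20.0 / 150),
        (1.0, 99.0, 98.0 / 150)]

MAX_ACCEL = 3600.0   # assumed |dv/dt| limit, km/h per hour (about 1 km/h per second)

v_avg = np.array([(b - a) / t for a, b, t in runs])    # each segment's average speed, km/h
sigma = np.array([MAX_ACCEL * t for _, _, t in runs])  # crude per-run uncertainty, km/h

# Inverse-variance weighting: precise (short) segments dominate the estimate,
# and the 1-99km segment is effectively ignored.
w = 1.0 / sigma**2
mean = np.sum(w * v_avg) / np.sum(w)
std_err = np.sqrt(1.0 / np.sum(w))
print(f"weighted estimate at the 50km mark: {mean:.1f} +/- {std_err:.1f} km/h")
```

Is something along these lines defensible, or is there a more principled way to propagate this kind of segment-length-dependent uncertainty into the mean and standard deviation?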