
Can we find the lowest attainable bound for 100 m sprint times, i.e. the fastest the distance can ever be run, using past data?

So every now and again the record gets broken and we can plot the new record time. But surely there must be a particular time which cannot be broken? For example, we know that no human will ever run 100 m in 5 seconds. So what would be the lower bound for these record times?

I suspect that the times taken to run the 100 m (by professional runners) follow a log-normal distribution, but I would think that the mean also decreases every now and again, and perhaps the variance as well.

So perhaps a way to approach this would be to find a value for the "mean" at which the semi-variance (the variance of values below this "mean") is zero.
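
To make the idea concrete, here is a minimal sketch of what I have in mind; the times are made up for illustration, and it assumes a three-parameter (shifted) log-normal, reading the fitted location parameter off as a crude lower-bound estimate:

```python
# Minimal sketch: fit a three-parameter (shifted) log-normal to hypothetical
# elite 100 m times and treat the fitted location (shift) parameter as a
# crude estimate of the lowest attainable time. The times below are made up.
import numpy as np
from scipy import stats

times = np.array([9.58, 9.63, 9.69, 9.72, 9.74, 9.76, 9.78, 9.79,
                  9.81, 9.83, 9.84, 9.86, 9.87, 9.88, 9.90, 9.92])

# scipy's lognorm has a shape s (sigma of the underlying normal), a location
# loc (the shift, i.e. the lower limit), and a scale exp(mu).
s, loc, scale = stats.lognorm.fit(times)

print(f"estimated lower bound (loc): {loc:.2f} s")
print(f"sigma = {s:.3f}, exp(mu) = {scale:.3f}")

# Caveat: maximum likelihood for threshold (shift) parameters is notoriously
# unstable, so this number is at best a rough starting point.
```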

Just a random question out of curiosity: any input or help would be appreciated.

  • As I understand it there are about 100 years' worth of data over which the record has been lowered by about 1 s. Do you think that's enough data to extrapolate to what is possible "ever"? Humans evolve.... Otherwise put, what grounds have you for saying 5 s is not possible; do they carry over to 6 .. 7 .. 8 .. 9 s? – Nick Cox Oct 06 '14 at 13:39
  • A two-parameter lognormal has a density positive for all positive values. By your hypothesis, you need a three-parameter version, with a lower limit. – Nick Cox Oct 06 '14 at 13:43
  • The second part of your question does not seem consistent with the first: if you take a distribution that changes, e.g. with a decreasing mean, it is likely your lower bound will decrease too. You can try to infer the speed of the mean's decrease, or the lower bound at a given time, but probably not a lower bound for ever. – Anthony Martin Oct 06 '14 at 14:11
  • I understand your points, but I do not think we can consider evolution as a factor here, because on the evolutionary scale the last 100 years and the next 100 years are hardly significant. So let me rephrase my question: work out the lowest bound attainable for the next 100 years. – Matthew Smith Oct 06 '14 at 16:32
  • @Aerandal This is true, but my thinking is more that this decrease in the mean is asymptotic, so that yes, world record times will continue to decrease, but the percentage decrease will get lower and lower and reach a cutoff point when measured to a predefined number of decimal places. It's worth saying that I'm assuming the measurement of times stays at 2 decimal places, and it is the new record time at which the 2nd decimal place is no longer affected, and thus remains constant, that I'm interested in. – Matthew Smith Oct 06 '14 at 16:39
  • And I think you make too many assumptions; evolution is not only human, it is also shoes, training techniques, drugs/doping... http://upload.wikimedia.org/wikipedia/commons/6/62/World_record_progression_100m_men.svg is a plot of the previous world records, and it is not clear how the mean is decreasing. The point is to first try to model the current evolution, then attempt some prediction; your dataset is not a sample from a simple distribution, which definitely complicates the maths. – Anthony Martin Oct 06 '14 at 17:08
  • An important extra factor over the last century has been the widening of the pool of athletes who compete at the highest level. This is well known, but good luck in modelling it adequately. Who is to say that all potential record-breakers have been identified yet? – Nick Cox Oct 07 '14 at 16:10
  • FWIW, [a very simple model](http://condellpark.com/kd/sprintlogistic.htm) gives an asymptote of 9.48 s, to be reached in 500 years ;) (a sketch of that kind of fit is below) – user603 Nov 11 '14 at 09:34
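
For anyone curious what that sort of asymptote fit looks like in code, here is a rough sketch. It uses an exponential decay toward a floor as a simpler stand-in for the logistic model at the link, and the record values are hand-copied approximations, so treat the output as illustrative only:

```python
# Rough sketch: fit t(year) = a + b * exp(-c * (year - 1912)) to an
# approximate 100 m world-record progression; the fitted a is the implied
# asymptote ("fastest ever"). Values are hand-copied and approximate.
import numpy as np
from scipy.optimize import curve_fit

years = np.array([1912, 1930, 1960, 1968, 1983, 1991, 1999, 2005, 2008, 2009])
records = np.array([10.6, 10.3, 10.0, 9.95, 9.93, 9.86, 9.79, 9.77, 9.69, 9.58])

def decay(year, a, b, c):
    # a: asymptotic record, b: initial excess over the asymptote,
    # c: rate at which that excess decays per year
    return a + b * np.exp(-c * (year - 1912))

(a, b, c), _ = curve_fit(decay, years, records, p0=[9.4, 1.2, 0.02])
print(f"estimated asymptote: {a:.2f} s")
```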

1 Answer


This may not be the best application of it, but could you use the Poisson distribution? You could pick a time frame, like 10 years, and then determine the average drop in times. That would give you a distribution you could apply probabilities to and develop a forecast from.
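
A minimal sketch of what that might look like, with made-up counts and drop sizes, and under the (contested, see the comments below) assumption that the number of record improvements per decade is Poisson with a constant rate:

```python
# Minimal sketch of the suggestion above: treat the number of record
# improvements per decade as Poisson, estimate the rate from (hypothetical)
# historical counts, and combine it with a (hypothetical) average drop size
# to get a crude forecast. None of these numbers are real data.
import numpy as np
from scipy import stats

drops_per_decade = np.array([2, 1, 2, 1, 3, 1, 2, 2, 3, 2])  # hypothetical counts
lam = drops_per_decade.mean()                                # Poisson rate estimate

# Probability of at least one new record in the next decade.
p_new_record = 1 - stats.poisson.pmf(0, lam)

avg_drop = 0.04  # hypothetical average improvement per new record, in seconds
expected_total_drop = lam * avg_drop

print(f"lambda = {lam:.2f} records per decade")
print(f"P(at least one new record next decade) = {p_new_record:.3f}")
print(f"expected total drop next decade = {expected_total_drop:.2f} s")
```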

Unknown Coder
  • How could a distribution of *counts* be validly applied to *race durations*? – whuber Oct 07 '14 at 16:06
  • @whuber because it's not about the duration - it's about the **delta** of the duration :-) You can abstract the data out to apply Poisson. Be creative! – Unknown Coder Oct 07 '14 at 16:28
  • @JimBeam so bigger deltas are less likely? then why not exponential? – shadowtalker Oct 07 '14 at 16:32
  • Sorry, Jim: a difference of durations is still a duration. I appreciate the exhortation to be creative, but in this case the onus is on you to show *how* one would make this decidedly unusual application and why it ought to work. (Most efforts I have seen to shoehorn continuous data like these into a Poisson framework have been erroneous, which is why I am sceptical.) – whuber Oct 07 '14 at 16:33
  • @whuber well, the distribution of new records in a given time period would be Poisson. Then you could use a spline or something to forecast the values of the records, and then have a really crude forecast about when the record will be broken and by how much (roughly along the lines of the sketch after these comments). – shadowtalker Oct 07 '14 at 16:36
  • @ssde I doubt the distribution of new records would be Poisson and, even if it were, it's implausible that the Poisson parameter would remain constant over time--because the record is changing, the number of competitors is changing, and so on. And that tells you nothing about the "how much" issue, which is really the concern here. Regardless, this particular answer clearly suggests the Poisson distribution is intended to be applied to either "times" or "drop in times," but no basis for applying it to either has been provided. I would be delighted to see a successful example. – whuber Oct 07 '14 at 16:42
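
For completeness, a very crude sketch of that two-part idea, with a straight-line trend over recent records standing in for the "spline or something"; the rate and record values are illustrative, and none of this addresses the objections about the rate and drop sizes changing over time:

```python
# Very crude sketch of the two-part idea: a Poisson rate for *when* a new
# record might arrive, and a straight-line trend over recent records
# (standing in for the "spline or something") for *how much* it might drop.
# All numbers are illustrative.
import numpy as np
from scipy import stats

lam = 1.5                                      # hypothetical new records per decade
p_new_record = 1 - stats.poisson.pmf(0, lam)   # P(at least one in the next decade)

# Approximate recent record progression, for illustration only.
years = np.array([1991, 1994, 1996, 1999, 2005, 2008, 2009])
records = np.array([9.86, 9.85, 9.84, 9.79, 9.77, 9.72, 9.58])

slope, intercept = np.polyfit(years, records, 1)
forecast_2019 = slope * 2019 + intercept       # trend extrapolated one decade ahead

print(f"P(at least one new record in the next decade) = {p_new_record:.2f}")
print(f"trend forecast for 2019 = {forecast_2019:.2f} s")
```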