Is anything inherently random? Or is all randomness observed in data either "errors in measurement" or "lack of understanding"? Assume we could measure everything with infinite precision and had a complete understanding of all the deterministic processes. Would there be any process that would appear "random" in the data or would every dataset be deterministic at that point?

- It depends on whether the universe is deterministic or stochastic, which is what you want to distinguish. Would it be useful to ask what a thought experiment that would reveal the difference would look like? Or maybe you want to formulate a falsifiable hypothesis about how they would diverge? – ReneBt Dec 26 '21 at 13:42
- @ReneBt: My question is inspired by this answer https://stats.stackexchange.com/a/558334/198058 – ColorStatistics Dec 26 '21 at 15:03
- Have you seen this thread https://stats.stackexchange.com/q/286853/35989 – Tim Dec 26 '21 at 16:03
- Moreover, it falls more into philosophy and is answered on the philosophy Q&A: https://philosophy.stackexchange.com/q/2439/27388 and https://philosophy.stackexchange.com/questions/29364/does-true-randomness-actually-exist – Tim Dec 26 '21 at 16:13
- Ah, in statistical analysis 'random' is often used as shorthand for 'population, process and measurement effects that we don't understand but are orthogonal to the signal of interest, and the disparate effects interact in such a way that the pattern of variance can be approximated by a stochastic variable'. It flows off the tongue a bit more easily. As @Tim says, the question as asked is more philosophical. In a deterministic universe nothing is truly random. In a stochastic one, it is still likely that the randomness is buried a few layers deep in the contributing processes. – ReneBt Dec 26 '21 at 17:39
- Related: [Are randomness and probability really logically dependent notions?](https://stats.stackexchange.com/questions/549914/are-randomness-and-probability-really-logically-dependent-notions). – DifferentialPleiometry Dec 26 '21 at 19:58
- Look first to what you mean by "anything." In what sense, for instance, is a repetition of an experiment really the *same* experiment? Such a philosophical investigation is needed before you can even talk about randomness. – whuber Dec 26 '21 at 20:55
- @Tim: Thank you very much for the links. They were on point and really helpful. @whuber: great point! BruceET in his answer talks about the difficulty of repeating the same experiment; it makes sense that this would be a key ingredient here. @ReneBt: thank you; I am not sure I understand your sentence "in a deterministic universe nothing is truly random"... BruceET has provided an example of a truly random process, and then there is this answer about another one: https://philosophy.stackexchange.com/a/2443 Are you saying these are random only because we don't currently understand them? – ColorStatistics Dec 26 '21 at 21:34
- Regarding measuring a deterministic universe, consider the difficulty of collecting everything to know about the universe in a small part of that same universe. – Dmitri Urbanowicz Dec 27 '21 at 09:06
- Radioactive decay is an example of an inherently random process. You can measure its parameters with nearly infinite precision, but each event is random. – Aksakal Dec 27 '21 at 14:37
- @whuber [*"History doesn't repeat itself but it often rhymes"*](https://quoteinvestigator.com/2014/01/12/history-rhymes/). Our repeated experiments must be identified by equivalence relations often unequal to the identity relation. – DifferentialPleiometry Dec 27 '21 at 16:15
- @2.7182818284590452353602874713 Sure--but that seems a little too glib to be useful. Exactly what aspects of an experiment must be reproduced in order for two instances to be considered "equivalent"? Philosophers of science, such as Mario Bunge, have focused on the role of this "experimental preparation" as crucial to understanding the meaning of physical law and of "randomness." – whuber Dec 27 '21 at 16:24
- @whuber I doubt there is a general answer available as to which exact equivalence class would hold for any experiment. In practice we use the equivalence classes that are identified *ad hoc* (e.g. treatment vs placebo). – DifferentialPleiometry Dec 27 '21 at 16:30
- https://plato.stanford.edu/entries/determinism-causal/ may be of interest – Adrian Feb 24 '22 at 03:54
4 Answers
In some ways this may be a philosophical question. My view is that the randomness a statistician sees in data analysis is often real. Randomness might result from random sampling or from inherent randomness or instability. Either way, statistical procedures of estimation might be useful.
Let's look at two entirely different situations.
(1) We wonder whether the true average height of women high school seniors in a US state is 65 inches. At noon on a particular day, one might carefully measure each of them (thousands or hundreds of thousands, depending on the state), round to the nearest tenth of an inch, and take the mean.
Even with perfectly accurate measuring and record keeping, the result would likely be different on another day. Some students might have grown just a bit. Also, there is some evidence that a person's height may depend (slightly) on how much sleep they got the previous night. Does it make sense to say that the true statewide average height fluctuates constantly in some random way?
More realistically, one might take a random sample of $n = 400$ women for the measurements and averaging. In this case, random sampling has clearly induced randomness. It would not be surprising if another random sample of $400$ resulted in a different mean. Using such a random sample we could make a 95% confidence interval for the mean height of senior women. Perhaps this CI would be about $0.7$ inches wide, but we could be fairly sure that the interval contains the true mean height of senior women in the state. For practical purposes that might be sufficiently close to the 'correct' answer. Perhaps some probability model such as $\mathsf{Norm}(\mu = 65, \sigma=3.5)$ is approximately correct.
If so, then our two samples of size $400$ might (at random) have been $65.2$ and $65.0.$
round(rnorm(2, 65, 3.5/20) ,1)
[1] 65.2 65.0
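The "about $0.7$ inches wide" claim is easy to check by hand. Here is a minimal Python sketch of the same arithmetic (the normal model and $n = 400$ are taken from the answer above; Python is used only because the computation is trivial):

```python
import math

# Model from the example: heights ~ Norm(mu = 65, sigma = 3.5), sample of n = 400.
mu, sigma, n = 65, 3.5, 400

se = sigma / math.sqrt(n)   # standard error of the sample mean: 3.5/20 = 0.175
width = 2 * 1.96 * se       # width of a 95% CI for the mean

print(se, width)
```

The width $2 \times 1.96 \times 3.5/\sqrt{400} \approx 0.69$ inches agrees with the "about $0.7$" figure.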
(2) You have a small specimen of some radioactive material with a long half-life (several thousand years). You use an appropriate counter and count $746$ particles emitted in one particular minute. Using exactly the same experimental set-up several minutes later, the count is $721.$
Monitoring the specimen over several hours, you believe the number of particles emitted per minute in your experiment is distributed $\mathsf{Pois}(\lambda=750).$ As far as is known, the number of particles emitted into our counter per minute is truly random, and $\mathsf{Pois}(750)$ is a reasonable model. Both of our one-minute measurements could be exactly correct, but it would be wrong to expect them to be exactly the same. (But counts would not often be fewer than 697 or more than 804.)
set.seed(1226); rpois(2, 750)
[1] 746 721
qpois(c(.025,.975), 750)
[1] 697 804
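The `qpois` bounds can be reproduced without R. Below is a pure-Python sketch that finds the same quantiles (smallest $k$ with $P(X \le k) \ge p$), accumulating Poisson probabilities in log space so nothing overflows at $\lambda = 750$:

```python
import math

def poisson_quantile(p, lam):
    # Smallest k with P(X <= k) >= p, accumulating pmf values computed in
    # log space: pmf(k) = exp(-lam + k*log(lam) - log(k!)).
    cdf, k = 0.0, 0
    while True:
        cdf += math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
        if cdf >= p:
            return k
        k += 1

lo = poisson_quantile(0.025, 750)   # lower 2.5% point
hi = poisson_quantile(0.975, 750)   # upper 97.5% point
print(lo, hi)                       # matches qpois(c(.025, .975), 750): 697 804
```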

- +1: Thank you for this great answer, Bruce. One clarification. You state "Randomness might result from random sampling or from inherent randomness or instability." To which of these would you ascribe the random experiment of flipping a coin? – ColorStatistics Dec 26 '21 at 21:53
- Ah yes! Right to the nub of it. [Diaconis et al.](https://statweb.stanford.edu/~cgates/PERSI/papers/dyn_coin_07.pdf) have shown that results of coins tossed by humans are remarkably non-random--partly or largely explainable by the physics of tossing. Experiments with carefully calibrated mechanical coin-tossing machines have shown that essentially deterministic coin tossing is possible. Such physical analyses notwithstanding, if a coin tumbles many times before it is captured or 'hits ground', then results are largely unpredictable. So coin tosses to start ball games can be taken as "fair." – BruceET Dec 26 '21 at 22:39
-
Independently of the question of the existence of randomness, you simply cannot measure anything to arbitrarily high precision, because greater precision requires a measurement that disturbs the system by a greater amount. This is already true in classical mechanics and remains true in quantum mechanics.
But your real question is whether there is anything inherently random in the world. Although this is not a mathematical question, it suffices to say that to this day there is simply zero evidence for any truly random process. Quantum mechanics does not at all suggest, let alone imply, randomness in anything in the real world. Given this, it is reasonable to ask why so many real-world processes do appear to be probabilistic.
Well, the answer is that purely deterministic processes can amplify very small perturbations, so if there is an amplification process followed by a folding process then the result can appear chaotic even if it is fully determined by the initial state. Furthermore, symmetries in the folding process may strongly dictate the limiting distribution of the result as the amplification is increased.
For example, the logistic map has chaotic behaviour for certain parameters, for which just a slight perturbation in the input results in exponential divergence of the output. Essentially, the logistic map shows that repeated application of a continuous stretch-and-fold operation on a metric space can result in unpredictability in the limit. This has implications for the real world, in that almost all measurements are of quantities arising from innumerable iterates of some physical process (e.g. particle-particle interactions).
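This sensitivity is easy to demonstrate. The following minimal Python sketch (the value $r = 4$ and the starting points are illustrative choices) iterates two logistic-map trajectories that begin $10^{-10}$ apart:

```python
def logistic(x, r=4.0):
    # One step of the logistic map x -> r*x*(1-x); r = 4 lies in the chaotic regime.
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10  # two initial conditions differing by 1e-10
gaps = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    gaps.append(abs(x - y))

# Early on the gap is still tiny; after enough stretch-and-fold steps the two
# trajectories are effectively unrelated, even though both are deterministic.
print(gaps[4], max(gaps))
```

The initial difference grows roughly exponentially until it saturates at order one, after which the two fully deterministic sequences look as unrelated as two random draws.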

- Thank you for the answer. It looks like you disagree with this post about a quantum mechanics process that is truly random: https://philosophy.stackexchange.com/a/2443 – ColorStatistics Dec 26 '21 at 21:22
- @ColorStatistics: No, the author of that post makes it very clear that that process "does not appear to be (**locally**) deterministic" and "the decisive proof is only against **local** hidden variables". This is why there is so far no evidence against a deterministic universe, because QM is actually **non-locally deterministic**! – user21820 Dec 26 '21 at 21:30
- You're mistaken about quantum physics. Consider the https://en.wikipedia.org/wiki/Stern%E2%80%93Gerlach_experiment – Neil G Dec 26 '21 at 21:33
- Neither is there evidence against a non-deterministic universe. One thing that quantum mechanics implies/suggests is that our current description of physical laws does *not* allow a deterministic description. The best description of our *experience/observation* of nature is currently describing this nature as non-deterministic. It will be like that as long as we do not find a testable scientific theory of hidden variables. Such a theory would go beyond our current scope of observation and is at the moment only speculation. It is not for nothing that it is called a '*hidden* variables' theory. – Sextus Empiricus Dec 27 '21 at 12:08
- @ColorStatistics: My correct comments got removed. Other commenters are mistaken about QM. The best description we have so far involves **non-local** wave-functions that evolve **globally**. There is absolutely **zero** evidence of any such thing as "wave-function collapse"; it is indistinguishable from a purely fictional construct. This is totally obvious from the fact that the [Galton board](https://en.wikipedia.org/wiki/File:Planche_de_Galton.jpg) can be simulated by deterministic classical mechanics, showing that the apparent randomness there simply is an **illusion**. – user21820 Dec 30 '21 at 14:16
Yes, there are "inherently" random processes in the universe. As far as I know, it is impossible to determine in advance the outcome of a wave-function collapse. For example, which slit a particle will go through in the double-slit experiment.
This question should really be asked on physics.stackexchange.

- Hidden-variables theories are difficult to consider. One way in which they have not been strictly ruled out by [empirical tests of Bell's theorem](https://physics.aps.org/featured-article-pdf/10.1103/PhysRevLett.115.250401) is to allow [superdeterminism](https://en.wikipedia.org/wiki/Superdeterminism), which is introduced and discussed in [*Rethinking Superdeterminism*](https://www.frontiersin.org/articles/10.3389/fphy.2020.00139/full) and [*Superdeterminism: A Guide for the Perplexed*](https://arxiv.org/abs/2010.01324). – DifferentialPleiometry Dec 26 '21 at 22:13
- For an introduction to superdeterminism at the popular science level, [watch this video](https://www.youtube.com/watch?v=ytyjgIyegDI). – DifferentialPleiometry Dec 26 '21 at 22:18
- @2.7182818284590452353602874713 I find it enjoyable to consider superdeterminism alongside [the strong free will theorem](https://www.ams.org/notices/200902/rtx090200226p.pdf). Toodles! – Alexis Dec 27 '21 at 04:40
We do not know, and may never know. (And possibly we *cannot* know.*)
Statistics is not about the nature of reality. Statistics describes observations, and these observations happen to have a random appearance due to unknown, variable factors involved in the model descriptions.
The question of whether reality is intrinsically non-deterministic is more philosophy than statistics.
*One may argue that the question about the deterministic nature of the world cannot actually be answered by a consciousness within this reality, and may not even have any meaning at all. Suppose physical processes do occur with some random nature, so that the future state of the world cannot be determined from the present state. Even then, something makes the current state evolve into the future state, and in that sense it is determined; it is only not determined by any entity that can be known by the objects in the observable world. Whether or not something is deterministic is a matter of perspective.
