
The principle of indifference states that,

In the absence of any relevant evidence, agents should distribute their credence (or 'degrees of belief') equally among all the possible outcomes under consideration.

This means that two individuals with different amounts of "evidence" can come up with two different probabilities - that is, probabilities can be subjective. I can't think of a case where subjective probabilities might be useful in predicting the future. For example, say I have a coin that's biased towards heads, but I do not know that it's biased, so I assign both heads and tails a probability of $\frac{1}{2}$. But this probability distribution will be useless in predicting the future as in the long run, I will end up with more heads than tails since the coin's biased towards heads. I was wondering: what are some examples of when subjective probabilities might be useful?

sl2outnow
    Prior implies a model based approach, so subjective probability can be used to test correctness of one's "degree of belief". In your example, one could figure out that initial prior may not converge. – msuzen Aug 17 '21 at 05:48

2 Answers


To take your coin flipping example,

But this probability distribution will be useless in predicting the future as in the long run, I will end up with more heads than tails since the coin's biased towards heads.

This is missing the key point:

This means that two individuals with different amounts of "evidence" can come up with two different probabilities

Each time you observe the outcome of a coin flip you have more evidence about the bias of the coin than the "past you" that started from an uninformative prior, so of course the "current you" will have a different probability. For subjective Bayesianism, you can use Bayes' rule to update your prior, giving the new prior for the next observation.
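To make this concrete, here is a minimal sketch (not from the original answer) of conjugate Beta-Bernoulli updating, where each posterior becomes the prior for the next flip; the flip sequence is hypothetical:

```python
# Conjugate Beta-Bernoulli updating: after each observed flip, the
# posterior Beta distribution becomes the prior for the next flip.

def update(alpha, beta, heads):
    """Update a Beta(alpha, beta) prior with one observed coin flip."""
    return (alpha + 1, beta) if heads else (alpha, beta + 1)

# Start from a uniform (uninformative) prior: Beta(1, 1).
alpha, beta = 1, 1
for heads in [True, True, False, True, True]:  # hypothetical observations
    alpha, beta = update(alpha, beta, heads)

# The posterior mean is the predictive probability of heads on the next flip.
print(alpha / (alpha + beta))  # 5/7 after observing 4 heads and 1 tail
```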

Now, before you observe the first coin flip, it is not the case that you are completely ignorant of the bias of the coin. It is easy to make a two-headed coin or a two-tailed coin, but it is very difficult to make a coin that is biased, yet still has both heads and tails, while being symmetrical enough not to be obviously biased. So a subjectivist Bayesian could construct a prior with spikes at 0 and 1 for the probability of a head and the rest of the probability distributed fairly close to 0.5. In practice, this prior will give better predictions than a uniform prior, because it incorporates more information about the problem. Of course, as the number of observed flips grows large, the results from both initial priors will converge.
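As an illustrative sketch of such a spiked prior (the mixture weights and Beta parameters below are assumptions, not from the answer), the predictive probability under a three-component mixture can be computed in closed form:

```python
import math

def log_beta(a, b):
    """Log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

# Hypothetical spiked prior: small point masses at p = 0 and p = 1
# (trick coins), with the remaining weight on a Beta(50, 50) density
# concentrated near 0.5 (ordinary, roughly fair coins).
W0, W1, WB = 0.01, 0.01, 0.98
A, B = 50.0, 50.0

def predictive_heads(n_heads, n_tails):
    """Predictive P(heads on the next flip) after observing some flips."""
    # Marginal likelihood of the data under each mixture component.
    like0 = 1.0 if n_heads == 0 else 0.0   # p = 0 spike
    like1 = 1.0 if n_tails == 0 else 0.0   # p = 1 spike
    likeb = math.exp(log_beta(A + n_heads, B + n_tails) - log_beta(A, B))
    z = W0 * like0 + W1 * like1 + WB * likeb
    # Posterior component weights times each component's predictive P(heads).
    return (W1 * like1 * 1.0
            + WB * likeb * (A + n_heads) / (A + B + n_heads + n_tails)) / z

# After a single head, the p = 0 spike is ruled out and the p = 1
# spike gains weight, but most mass still sits near 0.5.
print(predictive_heads(1, 0))  # ≈ 0.515
```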

Another example where subjective probability is "useful" is betting on horse races (betting was originally a strong motivation for research on probability). A particular horse race only occurs once, so it has no long-run frequency, and we can't apply frequentist probabilities to its outcome. If we thought each horse was equally likely to win a priori, the bookies would make a very large profit. Instead, punters use their expertise to judge the subjective probability that a particular horse will win (based on its physiology, past record, the conditions, etc.). The more expertise a punter has, the more likely they are to win their bets. The bookies, of course, are likely to be very expert, and also have the evidence from the punters' bets, which is why the bookies will make a profit.
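A sketch of how a punter's subjective probability turns into a betting decision (all figures hypothetical): a bet has positive expected value exactly when your subjective probability exceeds the probability implied by the bookmaker's odds.

```python
# Decide whether a bet is worthwhile by comparing a subjective win
# probability with the probability implied by the bookmaker's odds.
# All figures are hypothetical.
decimal_odds = 4.0               # a win pays 4x the stake
implied_prob = 1 / decimal_odds  # bookie's implied probability: 0.25
my_prob = 0.30                   # punter's subjective probability

stake = 10.0
expected_profit = my_prob * decimal_odds * stake - stake
print(expected_profit)  # 0.30 * 40 - 10 = 2.0, so positive expected value
```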

Dikran Marsupial
    Thanks for the great answer. So when we use the principle of indifference to assign a probability distribution, can we think of this uniform probability distribution as the "starting point" - what you called the uninformative prior in your answer? And for the horse race example, you're basically saying that the past evidence that the expert has observed is what gives them an edge over assuming a uniform distribution...is that correct? – sl2outnow Aug 17 '21 at 09:38
  • yes, something like that. The point about the horse racing was also that the assessment of the evidence can be more "gut feeling" than any real probabilistic analysis of data, Bayes rule still gives us a way of updating our subjective beliefs. – Dikran Marsupial Aug 17 '21 at 09:45
    @sl2outnow Also see https://stats.stackexchange.com/a/442587/77222 for more details on how you may go about this kind of Bayesian modelling. – Jarle Tufto Aug 17 '21 at 10:19
    Read _Probability is Logic_ by ET Jaynes who convincingly portrays probability as information. Or IJ Good who gave an example where one poker player knew that a certain card was scuffed and the other players did not know this. The one player assigns different probabilities than the others. – Frank Harrell Aug 17 '21 at 12:23
  • I'd certainly second the recommendation to read Jaynes's book (I'm reading "Good Thinking" at the moment and finding it a bit hard going, though it is probably more the style than the content). – Dikran Marsupial Aug 17 '21 at 12:43
  • Thank you for the replies. This might be somewhat different from my original question, but: does the subjective probability of an event have no implications for the long-term frequency of that event? If it doesn't, what is a subjective probability of, say, 0.6 actually telling us, if it's not the long-term frequency? – sl2outnow Aug 18 '21 at 03:12
  • @sl2outnow it is better to think of it as reflecting your personal state of knowledge about the event. If your state of knowledge is good, then your predictions are accurate. The important thing is that subjective Bayesianism provides a method for explicitly and unambiguously stating your prior knowledge, which is still present in frequentist statistics, but often "swept under the carpet" (c.f. IJ Good) – Dikran Marsupial Aug 18 '21 at 06:41
    Got it. Thank you – sl2outnow Aug 18 '21 at 09:54
    Presumably the horses in a given race were capable of running other races and so they have a track record of performance. This could be used to form long-run probability statements about the performance of a particular horse and therefore the outcome of a given race (or estimated long-run probability statements with a margin of error). – Geoffrey Johnson Aug 18 '21 at 14:14
  • Yes, indeed, however the races are not in the same conditions or against the same class of opponents and the horse has developed/aged, which means that it isn't really a long run frequency, so I think a Bayesian approach is less problematic? – Dikran Marsupial Aug 18 '21 at 15:21
  • Panel data? That's not problematic. It comes down to what you want to measure: the horse or the experimenter's opinion. – Geoffrey Johnson Aug 21 '21 at 10:54
  • For betting, it all comes down to experimenters opinion (subjective Bayes) because a frequentist cannot attach a non-trivial probability to a particular horse winning a particular race, which is what a rational probabilistic gambler must do. – Dikran Marsupial Aug 21 '21 at 11:40
  • A frequentist can most certainly estimate the probability of a particular horse winning a race, and can use interval estimates to quantify the uncertainty of around this estimation. The frequentist can also use predictive p-values and prediction intervals to predict a future experimental result. Horses winning races is no different from coins landing heads, cards being drawn, etc. If a Bayesian can inform a prior through a likelihood a frequentist can construct point and interval estimates and perform a meta-analysis. – Geoffrey Johnson Aug 21 '21 at 22:44
  • "A frequentist can most certainly estimate the probability of a particular horse winning a race" I note you have omitted the second "particular" there, which makes all the difference. If a frequentist says that the probability of a head next time you flip a coin is 0.5, they are violating the frequentist definition of a probability. What you would actually be doing is taking a subjectivist Bayesian step and using a frequentist long run frequency as justifying an equivalent Bayesian probability. This implicit moving between frameworks is what makes frequentist statistics difficult. – Dikran Marsupial Aug 22 '21 at 13:52
  • Even if I added a second "particular" we could still talk about frequentist probability. It would be a long-run statement, and while it would not apply to any particular event, this long-run behaviour would certainly give us confidence in the outcome of the next race. This does not make it Bayesian, though a Bayesian could certainly apply his paradigm as well. – Geoffrey Johnson Aug 23 '21 at 18:05
  • No, it wouldn't, that is the point. A particular race is only run once, it has no long run frequency, so it has no frequentist probability. "It would be a long-run statement, and while it would not apply to any particular event" you have just taken the second "particular" out again. There can only be Bayesian probabilities associated with the outcome of a particular race (if it is a choice between Bayesian and frequentist). – Dikran Marsupial Aug 24 '21 at 06:39
  • This is similar to the reason why a frequentist can't assign a probability that a parameter is in a *particular* confidence interval. https://stats.stackexchange.com/questions/26450/why-does-a-95-confidence-interval-ci-not-imply-a-95-chance-of-containing-the/26457#26457 Of course confidence intervals are often interpreted that way, and while it is usually benign, it can lead to error. – Dikran Marsupial Aug 24 '21 at 06:43

A subjective probability is useful for the experimenter to quantify his or her feelings. To the Bayesian, probability is about updating personal experience with the coin; probability measures the experimenter, though it can appear as if it measures the parameter. To the frequentist, probability statements must be falsifiable, and so they concern only the long-run behavior of the coin and the experiment. Here are some threads that discuss these differences in approach and philosophy: (1) (2) (3)

Geoffrey Johnson
    Re "There is no concern about being verifiable ...": That reads like a straw man argument. [Gelman *et al.*](http://www.stat.columbia.edu/~gelman/book/) strongly disagree. Or see S. James Press, *Subjective and Objective Bayesian Statistics,* 2nd Ed. – whuber Aug 18 '21 at 15:00
  • A posterior probability is not a verifiable statement about the actual parameter, the hypothesis, nor the experiment even if the unknown true parameter under investigation was indeed sampled from the prior distribution. Can you elaborate on "that reads like a straw man argument"? – Geoffrey Johnson Aug 18 '21 at 15:09
    By attributing to "Bayesians" an opinion that few (if any) have, you are erecting a contrafactual proposition whose analysis does little to help understand the subject of this thread. – whuber Aug 18 '21 at 15:13
  • Hi whuber, the original post showed concern for subjective belief being used as evidence when predicting future events. It is not an opinion nor a contrafactual proposition for me or anyone else to say that belief is unfalsifiable. A strawman argument, however arbitrary, is falsifiable. I have amended my answer to state firstly how subjective probability is useful. Perhaps this is what you were looking for. – Geoffrey Johnson Aug 18 '21 at 15:37
    You seem to conflate statements of the form "X is such-and-such" and "I believe X is such-and-such." – whuber Aug 18 '21 at 16:58
  • I am not following the such and such. I am suggesting the p-value itself, or any long-run probability, is falsifiable. A belief probability is not falsifiable because no one can claim to know the experimenter's belief better than the experimenter. No matter what the belief, it is always correct if it reflects the experimenter. No one can say a belief is right or wrong. Beliefs are not facts. – Geoffrey Johnson Aug 18 '21 at 18:01
    That's right--but you continually write answers, like this one, that appear to conflate beliefs with facts! In particular, a subjective probability is *not* generally used to quantify "feelings." Your statement "probability measures the experimenter" could, if we replaced "probability" by a sensation to create a familiar analogy, be rendered "color measures the experimenter." Yes, perhaps the scientist sees the blue glow of Cherenkov radiation: but when *other people* and *other instruments* agree it's a blue glow, then denying the objectivity of "blue" amounts to solipsism. – whuber Aug 18 '21 at 18:17
  • You are conflating "color measures the experimenter" with "the experimenter measures color." I agree that if the experimenter measures a color, and other people and other instruments measure the same color, then we have mounting evidence for the color of Cherenkov radiation. "Color measures the experimenter" would amount to us saying "the experimenter is blue," but that does not help us with Cherenkov radiation. This is analogous to stating a posterior probability, even an objective one. It does not apply to the actual parameter, nor the experiment. – Geoffrey Johnson Aug 18 '21 at 18:50
  • I'm sorry, I cannot make sense of your "color measures the experimenter" phrase. When you get to "the experimenter is blue" it becomes clear that you are responding only to some caricature of what I wrote. That's unconstructive, so I will bow out of this thread. – whuber Aug 18 '21 at 19:20
  • @whuber, perhaps you have a deeper insight than I do. If so, I could sincerely use your help answering [this question](https://stats.stackexchange.com/q/539351/307000). – Geoffrey Johnson Aug 18 '21 at 20:58