I am currently thinking a lot about the definition of probability, and one thing I am really not content with is the frequentist definition of probability as the long-run relative frequency of "the same" random experiment, by which I mean the execution of the experiment under similar conditions. It is virtually never specified what exactly counts as similar conditions. So I wonder: how would an adherent of the frequentist notion of probability best describe what is meant by the repetition of an experiment under similar conditions?
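To make concrete what the frequentist picture operationally appeals to, here is a minimal sketch of my own (assuming a coin with a fixed single-case probability p = 0.5, which is of course exactly the assumption at issue):

```python
import numpy as np

rng = np.random.default_rng(0)

p = 0.5                     # assumed constant single-case probability of heads
n = 100_000                 # number of "repetitions of the same experiment"
tosses = rng.random(n) < p  # True = heads

# Running relative frequency of heads after each toss
running_freq = np.cumsum(tosses) / np.arange(1, n + 1)

for k in (10, 100, 1_000, 10_000, 100_000):
    print(f"after {k:>6} tosses: relative frequency = {running_freq[k - 1]:.4f}")
```

The relative frequency settles near p, but only because the simulation stipulates that every trial is governed by the same p; nothing in the observed frequencies themselves tells us that the conditions were "similar".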
EDIT: I am currently reading https://arxiv.org/abs/quant-ph/0408058, which picks up the very issue I am concerned with:
"It is clearly essential, in any serious experiment, to standardize the tossing procedure in such a way as to ensure that the probability of heads is constant. This raises the question: how can we be sure that we have standardized properly? And, more fundamentally: what does it mean to say that the probability is constant?"
and further:
"Frequentists are impressed by the fact that we infer probabilities from frequencies observed in finite ensembles. What they overlook is the fact that we do not infer probabilities from just any ensemble, but only from certain very carefully selected ensembles in which the probabilities are, we suppose, constant (or, at any rate, varying in a specified manner). This means that statistical reasoning makes an essential appeal to the concept of a single-case probability: for you cannot say that the probability is the same on every trial if you do not accept that the probability is defined on every trial. The only question is whether the single-case probabilities are to be construed as objective realities ("propensities"), or whether they should be construed in an epistemic sense."
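To illustrate the point of the quote with a made-up example: the two ensembles below, one with a constant single-case probability and one whose per-trial probability alternates between 0.3 and 0.7 (both values are hypothetical, chosen only for illustration), yield essentially the same long-run relative frequency, so the frequency alone cannot tell us whether the probability was the same on every trial:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Ensemble A: the single-case probability is constant on every trial
p_const = np.full(n, 0.5)

# Ensemble B: the single-case probability varies from trial to trial
# (alternating 0.3 and 0.7 -- hypothetical values, averaging to 0.5)
p_vary = np.where(np.arange(n) % 2 == 0, 0.3, 0.7)

freq_a = np.mean(rng.random(n) < p_const)
freq_b = np.mean(rng.random(n) < p_vary)

print(f"constant p:      long-run frequency = {freq_a:.4f}")
print(f"trial-varying p: long-run frequency = {freq_b:.4f}")
# Both come out near 0.5: the limiting frequency cannot distinguish
# "the same probability on every trial" from probabilities that merely
# average out to the same value.
```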