In the Bayesian theory of probability, probability is our expression of knowledge about a thing, not a property of that thing. However, I constantly see people treat $p$ as a parameter that needs to be estimated: they set up a prior for $p$, usually in the form of a beta distribution, and then update it as "realizations" of this variable come in.
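To make the practice I am describing concrete, here is a minimal sketch of that updating scheme: a $\mathrm{Beta}(a, b)$ prior on $p$, updated after each Bernoulli observation via conjugacy. The uniform prior $\mathrm{Beta}(1, 1)$ and the sequence of flips are illustrative assumptions of mine, not anything from a particular source.

```python
# Beta-Binomial conjugate updating: the prior Beta(a, b) becomes
# Beta(a + successes, b + failures) after observing the data.
a, b = 1.0, 1.0                     # Beta(1, 1): uniform prior on p
observations = [1, 0, 1, 1, 0, 1]   # hypothetical Bernoulli outcomes

for x in observations:
    if x == 1:
        a += 1   # a success increments the first shape parameter
    else:
        b += 1   # a failure increments the second

posterior_mean = a / (a + b)   # E[p | data] under the Beta posterior
print(a, b, posterior_mean)    # Beta(5, 3), posterior mean 0.625
```

This is exactly the procedure that puzzles me: it treats $p$ as an unknown quantity to be pinned down by data, as if it were a physical property of the coin.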
Even the great Bayesian Jaynes sometimes gives the impression that he is "estimating the probabilities" or looking for the $p$ which best "fits" the data:
> Now we wish to take into account only the hypotheses belonging to the ‘Bernoulli class’ $B_m$ in which there are $m$ possible results at each trial and the probabilities of the $A_k$ on successive repetitions of the experiment are considered independent and stationary;

*Probability Theory*, E. T. Jaynes, page 297
This confuses me, because $p$ cannot be a probability, since it is a property of the random variable rather than of our knowledge about it, and it cannot be a frequency, since the variable represents a single event.