I've been reading about Bayesians versus frequentists, including articles in this forum (like this one). Central to the debate is, of course, the issue of "priors": the Bayesian critique is that frequentists do not consider priors. This xkcd meme takes the critique to the extreme to make the point clear. A frequentist, however, can easily counter such a critique by saying that priors can be incorporated into the model, i.e., let's build a better model (or, equivalently, that no frequentist would use such a neutrino detector machine in the first place).
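For concreteness, here is a sketch of the usual Bayesian reading of that comic, assuming (as in the comic) that the detector lies only when both dice land six, i.e. with probability $1/36$, and writing $\varepsilon$ for the prior probability that the sun has exploded:

$$P(\text{exploded} \mid \text{"yes"}) \;=\; \frac{P(\text{"yes"} \mid \text{exploded})\,P(\text{exploded})}{P(\text{"yes"})} \;=\; \frac{\tfrac{35}{36}\,\varepsilon}{\tfrac{35}{36}\,\varepsilon + \tfrac{1}{36}(1-\varepsilon)} \;\approx\; 35\,\varepsilon \quad \text{for tiny } \varepsilon.$$

So with any realistic prior the posterior stays negligible, even though the frequentist test yields $p = 1/36 < 0.05$; the frequentist rejoinder is precisely that the lying mechanism (and, in effect, the prior) belongs in the model.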
I want to focus on another critique that Bayesians level at frequentists. Some Bayesians say that a frequentist analysis only makes sense with experimental data (like coin tossing). Observational data (like GDP or unemployment) is clearly not drawn from a potentially repeatable experiment, and thus does not conform to the frequentist paradigm.
The problem is that Bayesian analysis is also performed on observational data. As such, the critique above applies to Bayesianism as well.
To put it differently, we know frequentists assume there exists a data-generating distribution $P(x \mid \theta)$, with $\theta$ fixed but unknown, and that the data are a draw from it. But so do Bayesians, to the extent that their goal is to update a prior $P(\theta)$ into a posterior $P(\theta \mid x)$ via that very same likelihood. That is, ontologically speaking, both assume observational data come from an underlying probabilistic data-generating process, whose distribution they try to estimate. GDP is a random variable and has a distribution, even if we only ever see one realization. So both frequentists and Bayesians are exposed to this critique of observational data.
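In symbols (this is just the standard decomposition of Bayes' rule, not anything specific to the articles above), the two paradigms share the likelihood term:

$$\underbrace{P(\theta \mid x)}_{\text{posterior}} \;=\; \frac{\overbrace{P(x \mid \theta)}^{\text{likelihood}}\;\;\overbrace{P(\theta)}^{\text{prior}}}{P(x)}.$$

It is the likelihood $P(x \mid \theta)$ that encodes the assumption of a probabilistic data-generating process; the prior and posterior are statements about $\theta$, not about where the data came from. So if the repeatability critique targets the likelihood, it seems to target both camps.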
Is this the case? Is Bayesian analysis on observational data subject to the same critique as frequentist analysis? Am I misunderstanding the nature of the paradigm difference?