From the frequentist point of view, is it possible to incorporate prior information about a parameter into a probability model? Let me illustrate with an example: suppose you have a six-sided die whose faces are each black or white, but you do not know how many of each; you are then told that a fair coin was flipped for each face of the die, and if it landed heads, that face was painted black, otherwise white. We are then interested in $X$, the number of throws among $n$ throws of the die on which the upper face is black.
From the Bayesian point of view, it is clear that we can use a binomial distribution to build the prior for the probability of success $\theta$ (a black face being on top): $\theta = k/6$, where $k$, the number of black faces, follows a $\text{Binomial}(6, 1/2)$ distribution. We then compute the likelihood of $\theta$ for some observed data $x$, then the posterior, and we are done. But from a frequentist point of view, how can we incorporate the information that we have about $\theta$? Do we just note that $\theta = 3/6$ is the most probable value and use the probability model for $X$ evaluated at that value of $\theta$, or is incorporating prior knowledge incompatible with frequentist assumptions? Does this mean that in cases like this a frequentist is limited to using a point estimate for $\theta$? Do prior information and the posterior distribution have frequentist equivalents?
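To make the Bayesian side of my question concrete, here is a minimal sketch of the computation I have in mind, with hypothetical data ($x = 7$ black outcomes in $n = 10$ throws; these numbers are just for illustration):

```python
from math import comb

# Hypothetical observed data: x black outcomes in n throws of the die.
n, x = 10, 7

# Prior on theta: each of the 6 faces is painted black independently with
# probability 1/2, so the number of black faces k ~ Binomial(6, 1/2) and
# theta = k/6 takes one of seven values.
prior = {k / 6: comb(6, k) / 2**6 for k in range(7)}

# Binomial likelihood of each candidate theta for the observed data.
def likelihood(theta):
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

# Posterior: prior times likelihood, normalized over the seven candidates.
unnorm = {t: p * likelihood(t) for t, p in prior.items()}
z = sum(unnorm.values())
posterior = {t: w / z for t, w in unnorm.items()}
```

With these numbers, the posterior mode moves from the prior mode $\theta = 3/6$ toward the data, landing on $\theta = 4/6$. My question is what, if anything, plays the role of `prior` and `posterior` in a frequentist analysis.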
Edit: I had already seen from the suggested duplicate post that you can incorporate prior information with Bayes' rule, but it is still not entirely clear to me what this means: is this just a way of saying that you need to use a point estimate for the parameter, chosen according to what you learnt from the prior information?