Adding priors does not solve the identifiability problem
This is a case where the parameters in your model are non-identifiable. As you point out, the contributions from the individual non-identifiable parameters in these ratios cannot be distinguished using the data.
When you use Bayesian analysis with a non-identifiable model, specifying a prior for all the individual non-identifiable parameters will still give you a valid posterior, but that posterior is strongly affected by the prior. The posterior for the non-identifiable parameters converges to a fixed asymptotic distribution that also depends heavily on the prior, so it lacks posterior consistency.
The fact that you get a valid posterior, and that this posterior converges to a fixed asymptotic distribution, often gives the misleading impression that Bayesian analysis renders the identifiability problem benign. However, it is crucial to note that the posterior in these cases is affected by the prior in ways that do not vanish as you accumulate more and more data. The identifiability problem is not rendered benign merely by using Bayesian analysis with priors.
Posterior depends heavily on the prior: To see exactly what I mean, define the minimal sufficient parameters $\phi_1 \equiv \beta_1 / \beta_0$ and $\phi_2 \equiv \beta_2 / \beta_0$; these are the parameters that are identified in the present model. Using the standard change-of-variables rule for densities, the posterior distribution for the three non-identifiable parameters of interest can be written as:
$$\begin{equation} \begin{aligned}
\pi(\beta_0, \beta_1, \beta_2 | \mathbf{x}, \mathbf{y})
&= \frac{1}{\beta_0^2} \cdot \pi(\beta_0, \phi_1, \phi_2 | \mathbf{x}, \mathbf{y}) \\[6pt]
&= \frac{1}{\beta_0^2} \cdot p(\beta_0 | \phi_1, \phi_2) \cdot \pi(\phi_1, \phi_2 | \mathbf{x}, \mathbf{y}). \\[6pt]
\end{aligned} \end{equation}$$
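If you want to double-check the Jacobian factor $1/\beta_0^2$ appearing here, a quick symbolic sketch is below (using sympy purely as a convenience for the differentiation; this is my illustrative choice, not part of the model):

```python
# A quick symbolic check of the change-of-variables factor: the map
# (beta_0, beta_1, beta_2) -> (beta_0, phi_1, phi_2), with
# phi_1 = beta_1/beta_0 and phi_2 = beta_2/beta_0, has Jacobian
# determinant 1/beta_0^2, which is the factor in the equation above.
import sympy as sp

b0, b1, b2 = sp.symbols('beta_0 beta_1 beta_2', positive=True)
new_params = sp.Matrix([b0, b1 / b0, b2 / b0])   # (beta_0, phi_1, phi_2)
old_params = sp.Matrix([b0, b1, b2])

J = new_params.jacobian(old_params)
print(sp.simplify(J.det()))                      # beta_0**(-2)
```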
Now, the posterior $\pi(\phi_1, \phi_2 | \mathbf{x}, \mathbf{y})$ for the minimal sufficient parameters (which are identifiable) is determined by the prior assumptions and the data in the usual way. However, the density $p(\beta_0 | \phi_1, \phi_2)$ is determined purely by the prior: it does not change as you accumulate data, since it is just an aspect of the assumed prior on the three non-identifiable parameters. Hence, the posterior for the non-identifiable parameters is determined in large measure by a component that is purely a function of the prior.
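To make this concrete, here is a small Monte Carlo sketch under an arbitrary illustrative prior choice of my own (independent $\text{Exp}(1)$ priors on the three coefficients, an assumption for this sketch, not something implied by your model). Under that choice, the change of variables above gives $p(\beta_0 | \phi_1, \phi_2)$ in closed form as a $\text{Ga}(3, 1 + \phi_1 + \phi_2)$ density:

```python
# A hedged sketch: under (assumed) independent Exp(1) priors on the betas,
# the change of variables above gives
#   pi(beta_0, phi_1, phi_2) = beta_0^2 * exp(-beta_0 * (1 + phi_1 + phi_2)),
# so p(beta_0 | phi_1, phi_2) is Gamma(shape=3, rate=1 + phi_1 + phi_2).
# We check this by sampling from the prior and conditioning on a small box.
import numpy as np

rng = np.random.default_rng(1)
b = rng.exponential(1.0, size=(2_000_000, 3))   # prior draws of (b0, b1, b2)
phi = b[:, 1:] / b[:, :1]                       # implied (phi_1, phi_2)

target, eps = np.array([1.5, 0.5]), 0.05        # condition near (1.5, 0.5)
b0_cond = b[np.all(np.abs(phi - target) < eps, axis=1), 0]

rate = 1.0 + target.sum()                       # Gamma(3, rate) prediction
print(b0_cond.mean(), 3.0 / rate)               # both approximately 1.0
print(b0_cond.std(), np.sqrt(3.0) / rate)       # both approximately 0.577
```

The point of the sketch is that everything in it is a pure prior computation; the data never appears, yet this conditional density is exactly the piece that the posterior of the non-identifiable parameters inherits.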
Posterior converges for identifiable parameters, not non-identifiable parameters: Bayesian asymptotic theory tells us that, under broad conditions, the posterior distribution of identifiable parameters converges towards a point mass on the true values. (More specifically, there are a number of convergence results that show asymptotic convergence to a normal distribution with a mean that approaches the true identifiable parameter values and a variance that shrinks to zero.) In the context of regression there are some additional convergence conditions on the explanatory variables, but again, that convergence result holds broadly.
Under appropriate conditions, as $n \rightarrow \infty$ the density $\pi(\phi_1, \phi_2 | \mathbf{x}, \mathbf{y})$ converges to a point-mass distribution on the true values $(\phi_1^*, \phi_2^*)$. In this limit, the posterior distribution for the non-identifiable parameters converges to a limiting distribution that is determined by the prior (and is not a point mass):
$$\begin{equation} \begin{aligned}
\pi(\beta_0, \beta_1, \beta_2 | \mathbf{x}, \mathbf{y})
&\rightarrow
\pi(\beta_0, \beta_1, \beta_2 | \mathbf{x}_\infty, \mathbf{y}_\infty) \\[6pt]
&\propto \frac{1}{\beta_0^2} \cdot p(\beta_0 | \phi_1^* = \beta_1 / \beta_0, \phi_2^* = \beta_2 / \beta_0). \\[6pt]
\end{aligned} \end{equation}$$
We can see that this asymptotic density is affected by the data only through the true values of the minimal sufficient parameters. It is still heavily affected by the form of the density $p(\beta_0 | \phi_1, \phi_2)$, which is a function of the prior alone. Although the posterior for the identifiable parameters has converged to a point mass on the true values, the posterior for the non-identifiable parameters $\beta_0, \beta_1, \beta_2$ retains uncertainty even in this limit: all of its mass lies on the set where $\beta_1 = \phi_1^* \beta_0$ and $\beta_2 = \phi_2^* \beta_0$, and along that set the distribution of $\beta_0$ is entirely determined by the prior, conditional on holding the identifiable parameters fixed at their true values.
In his answer, Björn gives an excellent example of this phenomenon in the simple case of IID data from a normal distribution whose mean is a ratio of two non-identifiable parameters. As his example shows, with a large amount of data the posterior converges for the identifiable mean, but the corresponding posterior for the non-identifiable parameters remains highly variable (and almost entirely dependent on the prior).
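Here is a minimal grid-posterior sketch in the spirit of that example (the priors, grid, sample size and true values are illustrative choices of my own, not Björn's exact setup): the posterior for the identifiable mean $\mu = \beta_1 / \beta_0$ comes out tightly concentrated, while the posterior for $\beta_0$ stays diffuse.

```python
# A sketch in the spirit of Björn's example: IID normal data with mean
# mu = beta_1 / beta_0, where beta_0 and beta_1 are separately
# non-identifiable. Priors, grid, and true values are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=1000)         # true mu = beta_1/beta_0 = 2

# Grid over the two non-identifiable parameters (restricted to positives).
B0, B1 = np.meshgrid(np.linspace(0.1, 5, 400),
                     np.linspace(0.1, 10, 400), indexing='ij')
MU = B1 / B0

log_prior = -0.5 * (B0**2 + (B1 / 3.0)**2)  # independent half-normal-style priors
n, ybar = len(y), y.mean()
log_lik = -0.5 * n * (MU - ybar)**2         # likelihood depends on betas only via mu

log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum()

for name, P in [("mu", MU), ("beta_0", B0)]:
    m = (P * post).sum()
    sd = np.sqrt(((P - m)**2 * post).sum())
    print(f"{name}: posterior mean {m:.3f}, posterior sd {sd:.3f}")
```

Under these (assumed) settings, the printed posterior standard deviation for $\mu$ is on the order of $1/\sqrt{n}$, while the one for $\beta_0$ stays on the order of its prior scale; increasing $n$ shrinks the former but not the latter.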
Conclusion: In Bayesian analysis you can assign a prior to a set of non-identifiable parameters and you get a valid posterior. However, despite the fact that we get a valid posterior, and asymptotic convergence of the posterior to a limiting distribution, all of those results are heavily affected by the prior, even with an infinite amount of data. In other words, don't let that fool you into thinking that you have "solved" the identifiability problem.