Updated Response: You still haven't given a full specification for model #2, but I can guess at what you mean -- correct me if I'm wrong. The trouble is that the statements $Y = \beta_1 X_1$ and $Y = \beta_0 + \beta_1 X_1$ are not probabilistic.
[ Aside: In a purely mathematical sense, you're defining a set of linear equations. If you have only one datapoint, you can solve $Y = \beta_1 X_1$ for the unique value of $\beta_1$ that satisfies the equation. If you have more than one value of $Y$ and $X_1$, the system is overconstrained and, except in degenerate cases, has no exact solution. ]
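A quick numerical illustration of that aside (a sketch in NumPy; the data values are made up):

```python
import numpy as np

# One datapoint: Y = beta1 * X1 pins down beta1 exactly.
x1, y = 2.0, 6.0
beta1 = y / x1                      # the unique solution, here 3.0

# Several datapoints: the same equation is overconstrained.
# lstsq returns the least-squares compromise plus a nonzero residual,
# i.e. no single beta1 satisfies every equation exactly.
X1 = np.array([[1.0], [2.0], [3.0]])
Y = np.array([2.1, 3.9, 6.2])
beta1_ls, residual, rank, _ = np.linalg.lstsq(X1, Y, rcond=None)
print(beta1_ls, residual)
```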
My hunch is that you mean the following: $$Y \sim N(\beta_1 X_1, \sigma^2)$$ for some value of $\sigma^2$, which is either assumed known or estimated along with $\beta_1$. This places your random effects model within the framework of a standard linear mixed effects model.
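If it helps, here is a minimal sketch of that likelihood in PyMC; the data are simulated and the priors are placeholders of my own choosing, not something taken from your question:

```python
import numpy as np
import pymc as pm

# Simulated stand-ins for your Y and X1.
rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
y = 2.5 * x1 + rng.normal(scale=1.0, size=100)

with pm.Model():
    beta1 = pm.Normal("beta1", mu=0.0, sigma=10.0)           # placeholder prior
    sigma = pm.HalfNormal("sigma", sigma=5.0)                 # estimate sigma as well
    pm.Normal("y", mu=beta1 * x1, sigma=sigma, observed=y)    # Y ~ N(beta1 * X1, sigma^2)
    idata = pm.sample()
```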
If we then assume that $Y$ is distributed as I have stated, then:
Answer: No, they are not equal. In the first model, $\beta_0$ is essentially an intercept (imagine multiplying $\beta_0$ by a variable $X_0$ that is always equal to 1). In the second model, $\beta_0$ represents a random offset from $\beta_1$. Note that this model (#2) wouldn't make sense to someone who doesn't do Bayes: it is equivalent to running a linear regression with two perfectly multi-collinear predictors. Because you've made distributional assumptions within a Bayesian model you can still estimate it, but I'm not sure you should.
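You can see the multi-collinearity directly: written as a regression, model #2 has $\beta_0$ and $\beta_1$ multiplying the same column of the design matrix, which is therefore rank deficient (a sketch with made-up data):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=50)

# Model #1: an intercept column plus x1 -> full rank, both coefficients identified.
X_model1 = np.column_stack([np.ones_like(x1), x1])
print(np.linalg.matrix_rank(X_model1))   # 2

# Model #2: beta0 and beta1 both multiply x1 -> two identical columns,
# so the data alone cannot separate the two coefficients; only a prior can.
X_model2 = np.column_stack([x1, x1])
print(np.linalg.matrix_rank(X_model2))   # 1
```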
Note: You can run the first model without the added distributional assumption (which I introduced above mainly for model #2), but I've never seen that sort of specification. I believe it would be equivalent to stating that $(Y - \beta_1 X_1) \sim Gamma(6,3)$ -- in other words, an error term distributed as a gamma random variable. My hunch is that you instead want a random intercept model with normally distributed errors. If that is the case, use model #1, not model #2. And yes, $\beta_0$ and $\beta_1$ will be highly correlated in the posterior (and their MCMC chains will mix slowly) until you center the levels of your categorical variable -- make sure they sum to zero across all individuals, which you can do by subtracting the mean of $X_1$ from each observation of $X_1$.
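Centering is easy to check numerically: the off-diagonal element of $(X^\top X)^{-1}$, which drives the correlation between the intercept and slope estimates (and the analogous posterior correlation), vanishes once $X_1$ has mean zero. A sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(loc=5.0, size=200)        # predictor with a nonzero mean

def coef_correlation(x):
    """Correlation between intercept and slope estimates implied by (X'X)^-1."""
    X = np.column_stack([np.ones_like(x), x])
    cov = np.linalg.inv(X.T @ X)           # proportional to the coefficient covariance
    return cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

print(coef_correlation(x1))                # strongly negative for uncentered x1
print(coef_correlation(x1 - x1.mean()))    # essentially zero after centering
```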