There are some things that need clarification here.
Is $Y = \beta_1X + \beta_2R + \epsilon$ a structural equation? That is, do you believe the structural relationship between the variables you listed and the outcome is truly linear?
If this is the case, that is, if you truly believe the regression represents the structural equation of the model, then the answer is trivial --- if $R\perp \epsilon$ then you can identify $\beta_2$ regardless of the relationship between $X$ and $\epsilon$ (since you randomized $R$, we also have that $R \perp X$, assuming $X$ is not a collider or mediator --- more on that below).
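To see why, here is a minimal derivation using only the two independence conditions just mentioned ($R \perp \epsilon$ and $R \perp X$):

$$
\operatorname{Cov}(Y, R) = \beta_1 \operatorname{Cov}(X, R) + \beta_2 \operatorname{Var}(R) + \operatorname{Cov}(\epsilon, R) = \beta_2 \operatorname{Var}(R),
$$

so $\beta_2 = \operatorname{Cov}(Y, R)/\operatorname{Var}(R)$ is identified from the joint distribution of $(Y, R)$ alone, no matter how $X$ and $\epsilon$ are related.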
However, if $Y = \beta_1X + \beta_2R + \epsilon$ is not a structural equation then things are more nuanced.
First you have to define what it is that you want to estimate, since $\beta_2$ is not a structural parameter per se. Usually you want to estimate the average treatment effect (ATE).
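In potential-outcomes notation, that estimand is $\text{ATE} = E[Y_i(1) - Y_i(0)]$, the expected difference between a unit's outcome under treatment and under control.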
The first thing to keep in mind is that, since you performed an experiment, you can obtain the ATE by a simple difference in means, with no need to perform a regression.
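For concreteness, here is a minimal sketch of that unadjusted estimate on hypothetical simulated data (the data frame `df`, the 0/1 treatment indicator `r`, and the covariate `x` are all made up for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical simulated experiment: r is randomized, y is the outcome.
rng = np.random.default_rng(0)
n = 1000
r = rng.integers(0, 2, size=n)           # randomized 0/1 treatment
x = rng.normal(size=n)                   # a pre-treatment covariate
y = 1.0 + 2.0 * r + 0.5 * x + rng.normal(size=n)
df = pd.DataFrame({"y": y, "r": r, "x": x})

# Unadjusted ATE estimate: simple difference in means between the two arms.
ate_hat = df.loc[df.r == 1, "y"].mean() - df.loc[df.r == 0, "y"].mean()
print(ate_hat)  # should be close to the true effect of 2.0
```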
Sometimes, though, you want to control for other factors outside the experiment in order to reduce the variance of your estimate. When doing regression with experiments, you can still get a consistent estimate of the ATE, even if the true relationship is not linear.
But you have to keep some things in mind. As Freedman (2008) has shown, using a finite-sample potential outcomes model:

- Regression estimates are biased (though the bias gets small with large samples);
- The effect on asymptotic precision is not unambiguous: it may improve precision or make it worse, depending mainly on the balance between treatment and control (if it's not balanced, it depends on other things which are hard or impossible to measure);
- The usual (homoskedastic) estimated standard errors can overstate precision.
However, as Lin (2013) points out, with sufficiently large samples these problems can be fixed: OLS adjustment cannot hurt asymptotic precision when a full set of treatment-covariate interactions is included, and asymptotically valid confidence intervals can be obtained using heteroskedasticity-consistent standard error estimators.
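As an illustration (not your actual data), here is a sketch of the Lin-style adjustment in Python with `statsmodels`, on hypothetical simulated data where the outcome depends nonlinearly on a covariate; the covariate is centered at its sample mean so the coefficient on the treatment indicator can be read as the ATE estimate:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical experiment: r randomized, y depends nonlinearly on a covariate x.
rng = np.random.default_rng(0)
n = 1000
r = rng.integers(0, 2, size=n)
x = rng.normal(size=n)
y = 1.0 + 2.0 * r + np.exp(0.5 * x) + rng.normal(size=n)   # true ATE = 2.0
df = pd.DataFrame({"y": y, "r": r, "x": x})

# Center the covariate at its sample mean so the coefficient on r is the ATE estimate.
df["x_c"] = df["x"] - df["x"].mean()

# Unadjusted benchmark: regression on r alone reproduces the difference in means.
unadjusted = smf.ols("y ~ r", data=df).fit(cov_type="HC2")

# Lin (2013)-style adjustment: centered covariate plus the full treatment-covariate
# interaction, with heteroskedasticity-consistent (HC2) standard errors.
adjusted = smf.ols("y ~ r + x_c + r:x_c", data=df).fit(cov_type="HC2")

print(unadjusted.params["r"], unadjusted.bse["r"])
print(adjusted.params["r"], adjusted.bse["r"])
```

Comparing the two reported standard errors shows whatever precision gain the adjustment buys; the unadjusted fit is the same difference-in-means benchmark as above.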
Another big problem is that once you start including covariates and trying different specifications of your model, you are doing specification searches. As soon as the researcher tries different sets of covariates looking for a “preferred” specification, the nominal Type I error rate no longer holds. So if you are using frequentist statistics (like p-values) to make judgments about your data, you have to keep this in mind.
In short, you can perform multiple regression adjustment in your experiment, but: (i) make sure you have enough samples and include the full set of treatment-covariate interactions; (ii) use the appropriate (heteroskedasticity-consistent) standard errors; (iii) always show your reader the unadjusted simple difference in means, which is the more trustworthy, "hands above the table" estimate.
One final note: even if your $R$ is randomized, you should be careful about which variables you control for. You should not control for colliders, and you should not control for mediators if you are interested in the total effect, for instance.
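To illustrate the mediator point with a toy simulation (all names and numbers here are made up): $R$ affects $Y$ only through a mediator $M$, so the unadjusted regression recovers the total effect, while conditioning on $M$ destroys it.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000
r = rng.integers(0, 2, size=n).astype(float)   # randomized treatment
m = 1.0 * r + rng.normal(size=n)               # mediator, affected by treatment
y = 2.0 * m + rng.normal(size=n)               # outcome: total effect of r is 2.0

# Regressing y on r alone recovers the total effect (about 2.0)...
print(sm.OLS(y, sm.add_constant(r)).fit().params[1])

# ...but "controlling for" the mediator m pushes the coefficient on r toward 0,
# since only the direct effect not running through m remains.
print(sm.OLS(y, sm.add_constant(np.column_stack([r, m]))).fit().params[1])
```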