With the interaction terms included in the model, a test of $\beta_1=\beta_2=0$ would be misleading in any event.
Yes: when you specify an interaction term of the form `x1*R1`, R automatically expands it to include terms for both `x1` and `R1` individually. It is seldom a good idea to include an interaction term without the corresponding individual-predictor terms. See the discussion on this page.
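For reference (`y`, `x1`, and `R1` are placeholder names, not from your question), the formula shorthand expands like this:

```r
y ~ x1 * R1          # shorthand; R expands it to:
y ~ x1 + R1 + x1:R1  # both main effects plus the interaction
# x1:R1 on its own would request only the interaction term, without the main effects
```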
Even if you did use a syntactic trick to convince R to do what you want, the interpretation of $\beta_1$ and $\beta_2$ would depend on how all of the variables interacting with $x_1$ and $x_1^2$ are coded. With the interaction terms in the model, $\beta_1$, for example, would be the rate of change of the outcome with respect to $x_1$ when all of its interacting continuous predictors have values of 0 and all of its interacting categorical predictors are at their reference levels. If the interactions matter, the value of $\beta_1$ would change if, say, sex were one of the interacting predictors and you swapped which sex is the reference level, or if you decided to center an interacting continuous predictor. See this page among many others on this site.
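To make that concrete with a hypothetical model (the covariate $z$ here is illustrative, not from your question), suppose $x_1$ and $x_1^2$ each interact with a single continuous predictor $z$:

$$E[y] = \beta_0 + \beta_1 x_1 + \beta_2 x_1^2 + \beta_3 z + \beta_4 x_1 z + \beta_5 x_1^2 z .$$

Differentiating with respect to $x_1$ gives $\partial E[y]/\partial x_1 = \beta_1 + 2\beta_2 x_1 + \beta_4 z$, so $\beta_1$ is the slope at $x_1=0$ only when $z=0$. If you center $z$ by replacing it with $z-c$, the coefficient on $x_1$ becomes $\beta_1 + \beta_4 c$ (and the coefficient on $x_1^2$ becomes $\beta_2 + \beta_5 c$): the "main effect" coefficients shift with the coding of the interacting predictor.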
So a test of $\beta_1=\beta_2=0$ doesn't tell you much directly about $x_1$ and $x_1^2$ if they are involved in interactions. What you usually want to test is whether a predictor, along with some or all of its interactions, matters to the model.
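As a hedged illustration, if $x_1$ and $x_1^2$ each interact with a single covariate $z$, the terms involving $x_1$ are $\beta_1 x_1$, $\beta_2 x_1^2$, $\beta_4 x_1 z$, and $\beta_5 x_1^2 z$, and the joint null of interest is

$$H_0:\; \beta_1 = \beta_2 = \beta_4 = \beta_5 = 0 ,$$

which you can test with an F-test comparing the full model against the reduced model containing only $z$ (in R, `anova()` on the two nested `lm()` fits).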