This is essentially the same as the case where the outcome lies between 0 and 1 (any bounded outcome can be rescaled to that interval), and that case is typically handled with a generalized linear model (GLM) such as logistic regression. There are lots of excellent primers on logistic regression (and other GLMs) online, and there is also a well-known book by Agresti on the topic.
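For illustration, here is a minimal sketch in Python using statsmodels, on simulated data (all variable names are made up for the example). With a continuous outcome in [0, 1], fitting a binomial-family GLM with the default logit link amounts to the so-called "fractional logit" approach:

```
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
# Simulate a bounded outcome whose mean follows a logistic curve.
mu = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
y = np.clip(mu + rng.normal(scale=0.05, size=n), 0, 1)

X = sm.add_constant(x)
# Binomial family with the default logit link; with a continuous
# outcome in [0, 1] this is a "fractional logit" quasi-likelihood fit.
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.summary())
```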
Beta regression is a viable but more complicated alternative; note that it requires the outcome to lie strictly inside the interval, so exact 0s and 1s have to be nudged off the boundary or modeled separately. Chances are logistic regression will work fine for your application, and it is typically easier to fit in most statistical software.
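If you do want beta regression in Python, recent versions of statsmodels ship a BetaModel class (the exact import path below is my best recollection, so double-check it against your installed version); simulated data again:

```
import numpy as np
import statsmodels.api as sm
from statsmodels.othermod.betareg import BetaModel  # recent statsmodels

rng = np.random.default_rng(1)
x = rng.normal(size=500)
mu = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
# Keep the outcome strictly inside (0, 1), as beta regression requires.
y = np.clip(mu + rng.normal(scale=0.05, size=500), 1e-4, 1 - 1e-4)

X = sm.add_constant(x)
fit = BetaModel(y, X).fit()  # logit link for the mean by default
print(fit.summary())
```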
Why not use ordinary least squares regression? Actually, people do, sometimes under the name "linear probability model" (LPM). The most obvious reason LPMs are "bad" is that there is no easy way to constrain the predictions to lie within the bounds: you can get predictions above 1 (or 100%, or whatever the upper bound is) and below 0 (or some other lower bound). For the same reason, predictions near the upper bound tend to be systematically too high, and predictions near the lower bound tend to be systematically too low. The math underlying linear regression explicitly assumes the errors have mean zero everywhere, so systematic biases like these violate the model. There typically isn't a good reason to fit an LPM instead of logistic regression.
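Here is a small illustration of that failure mode on simulated binary data: the LPM's predictions escape [0, 1] at extreme predictor values, while logistic regression's never do.

```
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(size=1000)
p = 1 / (1 + np.exp(-2.0 * x))
y = rng.binomial(1, p)  # binary outcome

X = sm.add_constant(x)
lpm = sm.OLS(y, X).fit()            # linear probability model
logit = sm.Logit(y, X).fit(disp=0)  # logistic regression

x_new = sm.add_constant(np.linspace(-4, 4, 9))
print(lpm.predict(x_new))    # some values fall below 0 or above 1
print(logit.predict(x_new))  # always strictly within (0, 1)
```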
As an aside, it turns out that any OLS regression model, including the LPM, can be written as a special kind of GLM: one with a Gaussian family and an identity link. In that framing, the LPM and logistic regression differ only in the assumed distribution and link function.
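Concretely, a Gaussian-family GLM with an identity link reproduces OLS exactly, which is the sense in which OLS (and hence the LPM) sits inside the GLM framework; a quick sanity check on simulated data:

```
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 1.0 + 0.5 * x + rng.normal(scale=0.1, size=200)
X = sm.add_constant(x)

ols = sm.OLS(y, X).fit()
glm = sm.GLM(y, X, family=sm.families.Gaussian()).fit()  # identity link by default
print(ols.params)  # the two coefficient vectors match
print(glm.params)  # up to numerical precision
```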