Another common problem with OLS for a bounded response is heteroscedasticity: as the expected value approaches the boundary, the variance generally shrinks. (This holds for error distributions such as the beta and binomial, although it need not hold if, for example, the distribution simultaneously becomes much more skewed.) Heteroscedasticity generally has more serious consequences (inefficiency, undercoverage of confidence intervals, etc.) than violations of normality.
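To make the mean-variance link concrete, here is a minimal sketch (the numbers are invented for illustration, not taken from your data) of how the variance of a binomial proportion shrinks as its mean approaches either boundary:

```python
import numpy as np

# Variance of a binomial proportion y/n is p*(1 - p)/n: it is largest at
# p = 0.5 and shrinks toward zero as the mean approaches 0 or 1.
n = 50
p = np.linspace(0.05, 0.95, 7)
for pi, v in zip(p, p * (1 - p) / n):
    print(f"mean = {pi:.2f}   var(y/n) = {v:.4f}")
```

The same qualitative pattern holds for a beta-distributed response, so OLS residuals on such data are heteroscedastic essentially by construction.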
(An answer to the previous post comments that "this isn't heteroscedasticity, it's truncation". They're right that the ultimate problem is truncation, but the truncation does cause heteroscedasticity in your example: see the decreasing trend line in your scale-location plot.)
This may be too obvious to state, but naive model predictions at extreme values of the predictor variables will generally be biased, because a linear fit eventually predicts values outside the allowable range. You can see some evidence of this bias in the residuals-vs-fitted plot in your original post, specifically the non-constant trend line.
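A quick simulated illustration of that point (the data-generating process below is made up, not taken from your post): a straight-line fit to proportions that flatten out near 0 and 1 will happily predict outside [0, 1] at extreme predictor values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: proportions generated from a logistic curve in x.
x = rng.uniform(-3, 3, 200)
p_true = 1 / (1 + np.exp(-2 * x))
y = rng.binomial(50, p_true) / 50          # observed proportions, all in [0, 1]

# Naive linear (OLS) fit of the proportion on x.
slope, intercept = np.polyfit(x, y, 1)

# Predictions at extreme predictor values fall outside the allowable range,
# which is the source of the bias described above.
for x_new in (-4, 4):
    print(x_new, intercept + slope * x_new)
```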
More generally, if you do web searches for "linear probability model vs. logistic regression" you'll find lots of discussion, e.g. here. Econometricians generally prefer the linear probability model (LPM), while statisticians prefer logistic regression. If you search for "linear probability model" on CrossValidated, you'll mostly find statisticians telling you that logistic regression is better (e.g. here). (This is a big rabbit hole which I have chosen not to go down, so I can't lay out the arguments in favour of the LPM for you.)
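If it helps to see what the two competing models actually are, here is a bare-bones sketch (the data and the choice of statsmodels are my own assumptions, purely for illustration): the LPM is just OLS on the 0/1 outcome, while logistic regression models the log-odds.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical binary outcome for illustration only.
x = rng.normal(size=500)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.5 * x))))
X = sm.add_constant(x)

lpm = sm.OLS(y, X).fit()        # linear probability model: OLS on the 0/1 outcome
logit = sm.Logit(y, X).fit()    # logistic regression

# The LPM coefficient is a constant marginal effect on the probability scale;
# the logit coefficient is on the log-odds scale, so the two are not directly comparable.
print(lpm.params)
print(logit.params)
```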