I am working with a dataset of 493 observations and 30 predictors. My goal is to fit a model that makes accurate predictions.
It seems to me that the ratio $\frac{n}{p} = \frac{493}{30} \approx 16$ is relatively small for fitting a regression model (correct me if I'm wrong about this); therefore, I am trying to fit a tree-based model (bagging, random forest, or boosting) to the dataset.
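For concreteness, this is roughly what I have in mind (a minimal sketch assuming Python/scikit-learn and a continuous response; `X` and `y` below are random stand-ins for my actual data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Placeholder data with my dimensions (493 observations, 30 predictors).
rng = np.random.default_rng(0)
X = rng.normal(size=(493, 30))
y = rng.normal(size=493)

# Random forest with out-of-bag scoring as a quick internal check.
rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print(f"OOB R^2: {rf.oob_score_:.3f}")

# Cross-validation to see how much performance varies across folds,
# which is the kind of instability I'm worried about.
scores = cross_val_score(rf, X, y, cv=5, scoring="r2")
print(f"5-fold CV R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```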
My question is: do tree-based models also suffer from the stability issues that a low $\frac{\text{number of observations}}{\text{number of predictors}}$ ratio causes in regression models? Is this ratio an important factor in tree-based methods (assuming it matters at all)? Why or why not?
Any suggestions for relevant reading/literature would also be appreciated.