I'm working on a dataset with a response variable in [0,1] and n = 61 observations, and I'm trying to fit a model with betareg(). After posting my previous question, I started to question my analysis and decided to move my workflow into a broader framework.
My new process is to build all models with all possible combinations of 7 explanatory variables (EVs) for both the mean and precision submodels, and then to exclude models with more than 6 predictors based on the "10 to 1" rule of thumb. Since I have 2 categorical EVs and one categorical × continuous interaction, this leaves considerably fewer models to compare.
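To make the procedure concrete, here is a minimal sketch of the exhaustive enumeration. It assumes hypothetical EV names `x1`…`x7` and a data frame `dat` with response `y` in (0,1) — none of these names come from my actual data:

```r
library(betareg)

evs <- paste0("x", 1:7)

## All subsets of the EVs, including the empty (intercept-only) set.
subsets <- c(list(character(0)),
             unlist(lapply(seq_along(evs),
                           function(k) combn(evs, k, simplify = FALSE)),
                    recursive = FALSE))

fits <- list()
for (m in subsets) {      # candidate mean submodel
  for (p in subsets) {    # candidate precision submodel
    ## "10 to 1" cap: at most 6 predictors in total for n = 61.
    if (length(m) + length(p) > 6) next
    rhs_m <- if (length(m)) paste(m, collapse = " + ") else "1"
    rhs_p <- if (length(p)) paste(p, collapse = " + ") else "1"
    f <- as.formula(paste("y ~", rhs_m, "|", rhs_p))
    ## try() so one non-converging model doesn't stop the loop.
    fits[[paste(rhs_m, "|", rhs_p)]] <- try(betareg(f, data = dat),
                                            silent = TRUE)
  }
}
```

The `mean | precision` two-part formula is betareg's standard interface; the interaction term would enter as e.g. `group * x1` in place of a single EV name.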
Then I rank the models by lowest AIC, BIC, and ICOMP(IFIM) (see Dünder and Cengiz, 2020), and I look both at each criterion separately and at the overall rank (the sum of the three ranks). I'm still trying to figure out which criterion to go with, since each of them represents a different approach.
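The ranking step could look like the sketch below, assuming a named list `fits` of fitted betareg models (a hypothetical name). ICOMP(IFIM) has no built-in R function as far as I know, so only AIC and BIC are shown:

```r
## Keep only models that actually converged to a betareg fit.
ok <- Filter(function(f) inherits(f, "betareg"), fits)

tab <- data.frame(model = names(ok),
                  AIC   = sapply(ok, AIC),
                  BIC   = sapply(ok, BIC))

## Per-criterion ranks and their sum as an overall rank.
tab$rankAIC <- rank(tab$AIC)
tab$rankBIC <- rank(tab$BIC)
tab$overall <- tab$rankAIC + tab$rankBIC

tab[order(tab$overall), ]
```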
However, my concern is that by removing the models with more than 6 predictors (counting predictors in both the mean and precision submodels), I may be missing out on a lot. In a post dealing with an LME model, one of the answers suggests a power analysis. How can I do this for beta regression?
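Since no closed-form power formula exists for beta regression, one option I'm considering is simulation-based power: generate data from an assumed model, refit, and count rejections. A sketch under entirely made-up assumptions (a single continuous EV `x`, effect `beta1` on the logit of the mean, precision `phi`):

```r
library(betareg)

power_sim <- function(n = 61, beta0 = 0, beta1 = 0.5, phi = 10,
                      nsim = 500, alpha = 0.05) {
  hits <- replicate(nsim, {
    x  <- rnorm(n)
    mu <- plogis(beta0 + beta1 * x)          # mean on the logit link
    y  <- rbeta(n, mu * phi, (1 - mu) * phi) # mean-precision parameterisation
    fit <- betareg(y ~ x)
    ## Power = proportion of simulations where the x effect is significant.
    summary(fit)$coefficients$mean["x", "Pr(>|z|)"] < alpha
  })
  mean(hits)
}
```

The assumed effect size and precision would have to come from subject-matter knowledge or pilot estimates, which is exactly the hard part of a power analysis.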
I would also like to know whether my model selection process is valid. According to this post, stepwise regression is strongly discouraged. Although I'm not exactly doing stepwise selection, I am selecting models based on a criterion. I don't yet fully understand why stepwise is considered so problematic, or what could be done instead, but I would like to know in order to proceed in a reasonable manner. Since almost every paper I see compares AIC or other measures of model fit, is it wrong to construct all possible models and then automatically choose the "best" one?