I recently read: Is there any theoretical problem with averaging regression coefficients to build a model? and was intrigued, as it brings a basic machine learning concept to good old-fashioned OLS. The consensus from the accepted answer was that since OLS estimates are BLUE (best linear unbiased estimators), the advantages one would stand to gain by averaging are few, and the averaged estimates would likely even underperform the ordinary OLS estimates. Also of note was that the influence of outliers would be slightly less severe with the averaged approach. For this question, assume no outliers.
After thinking about this, I remained curious about the case of non-linear models. I originally intended to post this as a comment, but I think it's too lengthy and the community would be better served if it were its own question. I wasn't sure whether to go with the average or the median, as there might be some evidence to suggest taking the median of OLS coefficients is warranted, so I will leave both the average and the median in scope.
Question: Assuming no misspecification/functional form error, would averaging or taking the median of non-linear regression coefficients be theoretically undesirable? One thing I'm particularly interested in: unlike the OLS case, we are now optimizing with methods like maximum likelihood. Some particular regression models I had in mind are:
- ARMA/ARCH/GARCH
- LOGIT/PROBIT
- FAVAR
Answers need not address all the models above per se; they are listed just for reference.
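To make the question concrete, here is a minimal sketch of what I mean by averaging or taking the median of non-linear coefficients, using a logit model as an example. The data, the Newton-Raphson fitter, and the bootstrap-refit scheme are my own illustrative choices (not from any particular library), and averaging over bootstrap resamples is just one way the "many fits" could arise:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logit(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson (maximum likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        grad = X.T @ (y - p)                   # score vector
        hess = X.T @ (X * (p * (1.0 - p))[:, None])  # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

# Simulate logit data with known coefficients (intercept 0.5, slope -1.0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([0.5, -1.0])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

# Single maximum-likelihood fit on the full sample
beta_full = fit_logit(X, y)

# Coefficients from many refits on bootstrap resamples
B = 200
boots = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boots[b] = fit_logit(X[idx], y[idx])

# The two aggregates the question is about
beta_avg = boots.mean(axis=0)     # average of coefficients
beta_med = np.median(boots, axis=0)  # median of coefficients
```

The question, then, is whether `beta_avg` or `beta_med` has any theoretical advantage (or disadvantage) over `beta_full` when the estimator is non-linear in this way.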