Most of the recent famous methods coming out of machine learning are supervised learning methods like Decision Trees, Random Forests, Deep Learning, and SVMs.
The more traditional supervised learning methods, like linear and logistic regression (with or without regularization), have a long history of analysis of their nuances (e.g., assumptions for reliable use such as normality, confidence intervals, hypothesis tests, optimal estimators).
Though the traditional statistical models and the more modern ML ones come out of different disciplines (statistics is tied theoretically to mathematics departments and practically to agronomy, medicine, social science, and econometrics; machine learning comes out of computer science, with applications in vision, NLP, and AI), they serve the same ends.
Yet the ML models, wildly successful as they appear, seem to have very little theoretical support.
In contrast, linear regression offers a p-value analysis of each variable, an F-test for the entire fit, and the classic five assumptions. I've never seen such an analysis of the more complicated ML models.
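To make the contrast concrete, here's a minimal sketch (my own illustration, using statsmodels and scikit-learn on simulated data) of the inferential output linear regression gives essentially for free, next to what a random forest offers:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

# Simulated data (hypothetical): y depends on x1 and x2, but not x3
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=200)

# Linear regression: per-coefficient p-values and an overall F-test
ols = sm.OLS(y, sm.add_constant(X)).fit()
print(ols.pvalues)               # hypothesis test for each coefficient
print(ols.fvalue, ols.f_pvalue)  # F-test for the fit as a whole
print(ols.conf_int())            # confidence intervals per coefficient

# Random forest: no analogous p-values, F-test, or confidence
# intervals -- only heuristic feature importances
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(rf.feature_importances_)
```

The OLS fit comes with hypothesis tests and interval estimates grounded in its assumptions; the random forest's `feature_importances_` has no comparable sampling theory behind it.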
There doesn't seem to be any treatment of machine learning models with the rigor of analysis applied to statistical models (see http://www.fharrell.com/post/stat-ml/).
Has there been any attempt to apply classic statistical analysis techniques to assess the newer ML regression models?