I am interested in clarifying the distinction between the two terms in this question's title, the problem being that they can be, and have been, used interchangeably in the literature.
Links to similar CV questions give a clear meaning to relative importance as a within-model metric summarizing the magnitude or impact of a feature; this is distinct from, but related to, the statistical significance of that parameter. Also of note is that relative importance can be readily estimated within the context of statistical models, but much less so, if at all, for many black-box machine learning algorithms (neural networks, random forests, xgboost, etc.).
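To make "within-model metric" concrete, here is a minimal sketch of one common relative-importance summary for a linear model: the standardized coefficient |β_j|·sd(x_j)/sd(y). The data and coefficients below are simulated for illustration only; other importance measures (e.g., variance decompositions) exist and would give different numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# x1 has the larger true coefficient, so it should dominate
y = 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

# OLS fit via least squares
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Standardized coefficients: |beta_j| * sd(x_j) / sd(y).
# This summarizes the magnitude of each feature's impact,
# which is a separate question from its statistical significance.
std_coefs = np.abs(beta[1:]) * np.array([x1.std(), x2.std()]) / y.std()
print(std_coefs)
```

Note that this kind of summary falls directly out of the fitted model's parameters, which is exactly what is unavailable for most black-box learners.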
Effect size can be used, on the one hand, for "the magnitude of the differences between groups" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3444174/); and, on the other, for determining the required sample size as a function of the magnitude of a desired effect (https://stats.stackexchange.com/questions/18028/desired-effect-size-vs-expected-effect-size). If there are additional uses for effect size, I'm interested in learning about them.
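A minimal numerical sketch of both uses, assuming Cohen's d as the effect-size measure (one choice among many) and the standard two-sample normal approximation for the sample-size calculation; the group data here are simulated with a true standardized difference of 0.5.

```python
import numpy as np

rng = np.random.default_rng(1)
treatment = rng.normal(loc=1.0, scale=1.0, size=200)
control = rng.normal(loc=0.5, scale=1.0, size=200)

def cohens_d(a, b):
    """First use: standardized mean difference between two groups."""
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
                 / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

print(cohens_d(treatment, control))  # should land near the true value of 0.5

# Second use: required n per group for a two-sample test, via the
# normal approximation n = 2 * ((z_{alpha/2} + z_{beta}) / d)^2.
z_alpha = 1.959964  # two-sided alpha = 0.05
z_beta = 0.841621   # power = 0.80
for d in (0.2, 0.5, 0.8):
    print(d, round(2 * ((z_alpha + z_beta) / d) ** 2))
```

The second loop shows the planning role directly: halving the effect size you want to detect roughly quadruples the required sample size.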
OK, so both concepts evaluate the magnitude of something, but effect size has a wider meaning and more uses than relative importance.
What is a precise definition of effect size?