Do confidence intervals (CIs) also convey information about effect size, in addition to statistical significance - enough to warrant not reporting the two separately?
My understanding is that CIs replace the need for p-values, in the sense that if a statistic is significantly different from 0 at, say, the 0.05 level, then the 95% CI for that statistic will not contain 0.
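To check that I'm stating this duality correctly, here is a minimal sketch of what I mean (Python with numpy/scipy; the data are made up): for a two-sided one-sample t-test, significance at the 0.05 level should coincide exactly with the 95% CI for the mean excluding 0.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.5, scale=1.0, size=30)  # made-up sample

# Two-sided one-sample t-test of H0: mean = 0
t_stat, p_value = stats.ttest_1samp(x, popmean=0)

# 95% CI for the mean, built from the same t distribution
se = stats.sem(x)
t_crit = stats.t.ppf(0.975, df=len(x) - 1)
ci_low, ci_high = x.mean() - t_crit * se, x.mean() + t_crit * se

# The two statements below should always agree:
# p < 0.05  <=>  the 95% CI excludes 0
print(f"p = {p_value:.4f}, 95% CI = ({ci_low:.3f}, {ci_high:.3f})")
print("significant at 0.05:", p_value < 0.05,
      "| CI excludes 0:", not (ci_low <= 0 <= ci_high))
```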
But my question is: if the CI is provided and can be taken as an interval estimate of the effect size, does this eliminate the need to also report the effect size itself, insofar as (correct me if I'm wrong) the latter is merely the point estimate corresponding to the former?
And isn't this just a case of estimation statistics seeking to replace null hypothesis significance testing [*]?
To illustrate this in the context of specific analyses:

1) For an ANOVA in which partial eta-squared is chosen as the measure of effect size, would the corresponding CI have to be given for that effect-size statistic itself, or for the reported effect (e.g. a main or interaction effect)?

2) For a correlation in which Pearson's r is chosen as the measure of effect size, and CIs are provided for r, is there any point in still reporting the point estimate of r itself, as long as it can be assumed that the interval estimate (the CI) is symmetric about that point? (A sketch of what I mean follows below.)
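For question 2, here is a sketch of what I mean by reporting r together with a CI (again Python with numpy/scipy and made-up data), using the usual Fisher z-transformation for the interval:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=50)              # made-up data
y = 0.4 * x + rng.normal(size=50)

# Point estimate of the correlation and its p-value
r, p = stats.pearsonr(x, y)

# 95% CI for r via the Fisher z-transformation:
# z = arctanh(r) is approximately normal with SE = 1/sqrt(n - 3)
z = np.arctanh(r)
se = 1.0 / np.sqrt(len(x) - 3)
r_low, r_high = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)

print(f"r = {r:.3f}, p = {p:.4f}, 95% CI = ({r_low:.3f}, {r_high:.3f})")
# Note: after transforming back, this interval is generally not exactly
# symmetric about r on the r scale.
```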
I might be mixing up and confusing several key concepts here, and I would appreciate the patience of any answers that try to clarify them.
[*] Cumming, Geoff (2012). Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. New York: Routledge.