Let's say we have two means with confidence intervals calculated, a±b and c±d, such that (a+b) < (c-d), i.e. the two confidence intervals are non-overlapping. How would I calculate a confidence interval for the difference between a and c?
I suppose the "proper approach" is the framework below, where the sigmas represent the standard deviations in each sample.
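Writing that out in the form I think is standard (assuming independent samples of sizes n_1 and n_2, and a critical value z*):

$$(\bar{x}_1 - \bar{x}_2) \pm z^{*}\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}$$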
That said, intuitively, if we already have the confidence intervals, it looks like we could use a "naive approach" and say the range of values for the delta between the samples is ((c-d)-(a+b), (c+d)-(a-b)), i.e. the smallest possible difference and the largest possible difference based on the confidence intervals. A quick numeric sketch of what I mean is below.
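Here is a hypothetical example with made-up numbers, assuming b and d are normal-theory margins of error from independent samples at the same confidence level (so under the framework above they would combine in quadrature):

```python
import math

# Hypothetical numbers just to make the comparison concrete.
a, b = 10.0, 2.0   # first mean and its margin of error (a ± b)
c, d = 20.0, 3.0   # second mean and its margin of error (c ± d)

# "Naive approach": smallest and largest possible difference
# implied by the two interval endpoints.
naive = ((c - d) - (a + b), (c + d) - (a - b))

# "Proper approach": if b = z*·sigma_1/sqrt(n_1) and d = z*·sigma_2/sqrt(n_2),
# the margins of error combine in quadrature around the point estimate c - a.
half_width = math.hypot(b, d)          # sqrt(b**2 + d**2)
proper = ((c - a) - half_width, (c - a) + half_width)

print("naive :", naive)    # (5.0, 15.0)  -> half-width b + d = 5.0
print("proper:", proper)   # (~6.39, ~13.61) -> half-width sqrt(b^2 + d^2) ≈ 3.61
```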
Does the "naive approach" have valid statistical properties such that we could call it a confidence interval for the delta?