Context: My problem relates to estimating effect sizes, such as Cohen's d, when looking at a subset of the population defined by a cut-off threshold. This effect size is the difference between two population means divided by the (assumed equal) population standard deviation.
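In the usual notation, writing $\mu_A$ and $\mu_B$ for the two group means and $\sigma$ for their common standard deviation, this is $d = (\mu_A - \mu_B)/\sigma$.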
Suppose we have a sample from a population in which a variable $Y$, with "true" values $Y_{i0}$, is measured with error at two time points, $t_1$ and $t_2$, giving measurements $Y_{i1} = Y_{i0} + \epsilon_{i1}$ and $Y_{i2} = Y_{i0} + \epsilon_{i2}$. At time $t_1$ we define a subset $J$ of the population by "$i \in J$ if $Y_{i1} > a$" for some fixed $a$. The objective is to estimate the variance of the subset at $t_2$, $V[Y_{j2} \mid j \in J]$ (or, equivalently, the variance of $Y$ in the subset measured at any time other than $t_1$). We cannot simply use the subset's estimated variance at $t_1$, because the variance at $t_2$ will be larger.
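A sketch of why, assuming the errors are independent of $Y_{i0}$ and of each other, with equal error variance at the two time points (as in the normal simulation below): $\epsilon_{i2}$ is independent of the selection event, so
$$V[Y_{i2} \mid i \in J] = V[Y_{i0} \mid i \in J] + V[\epsilon_{i2}],$$
whereas the selection truncates $Y_{i1}$ itself, which both shrinks $V[\epsilon_{i1} \mid i \in J]$ and induces a negative covariance between $Y_{i0}$ and $\epsilon_{i1}$ within $J$, so
$$V[Y_{i1} \mid i \in J] = V[Y_{i0} \mid i \in J] + V[\epsilon_{i1} \mid i \in J] + 2\operatorname{Cov}[Y_{i0}, \epsilon_{i1} \mid i \in J] < V[Y_{i0} \mid i \in J] + V[\epsilon_{i2}] = V[Y_{i2} \mid i \in J].$$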
Here is example code showing that the standard deviation of the subset at $t_2$ is greater than its standard deviation at $t_1$:
set.seed(1)
N <- 1000
Y0 <- rnorm(N, mean = 0, sd = 1)        # true values
Y1 <- Y0 + rnorm(N, mean = 0, sd = 0.5) # measurement at t1 (with error)
Y2 <- Y0 + rnorm(N, mean = 0, sd = 0.5) # measurement at t2 (with error)
indx <- Y1 > 1                          # subset J, selected on the t1 measurement
sd(Y1[indx])
# [1] 0.6007802
sd(Y2[indx])
# [1] 0.8145581
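As a sketch to make the mechanism visible (the names e1 and e2 below are mine, not in the original code), one can keep the error terms explicit: within the selected subset, Y0 and e1 are negatively correlated while Y0 and e2 are not, and the sample variance of each sum decomposes accordingly.
set.seed(1)
N <- 1000
Y0 <- rnorm(N, mean = 0, sd = 1)   # true values
e1 <- rnorm(N, mean = 0, sd = 0.5) # error at t1
e2 <- rnorm(N, mean = 0, sd = 0.5) # error at t2
Y1 <- Y0 + e1
Y2 <- Y0 + e2
indx <- Y1 > 1                     # subset J, selected on the t1 measurement
cov(Y0[indx], e1[indx])            # negative: selecting on Y1 links Y0 and e1
cov(Y0[indx], e2[indx])            # near zero: e2 is independent of the selection
# The two subset variances decompose accordingly:
var(Y0[indx]) + var(e1[indx]) + 2 * cov(Y0[indx], e1[indx]) # equals var(Y1[indx])
var(Y0[indx]) + var(e2[indx]) + 2 * cov(Y0[indx], e2[indx]) # equals var(Y2[indx])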
Does this phenomenon, the variance of a thresholded subset increasing upon re-measurement, have a name? Can anyone share references that would help me understand it, either in general or in the specific context of effect sizes?