Consider a sequence of $n$ independent Bernoulli trials with biases $p_1,p_2,\dots,p_n\in[0,1]$, respectively, and let the random variable $X$ be the sum of these trials. On Wikipedia, the distribution of $X$ is called the Poisson binomial distribution. We define the sample mean and sample variance of our list of Bernoulli biases as $$ \bar{p}=\frac{1}{n}\sum_{i=1}^n p_i $$ and $$ \sigma_p^2 =\frac{1}{n}\sum_{i=1}^n(p_i-\bar{p})^2 =\frac{1}{n}\sum_{i=1}^n p_i^2 - \bar{p}^2. $$
Since the trials are independent, it is easy to compute that $$ \mathbb{E}[X] = \sum_{i=1}^n p_i = n\bar{p} $$ and \begin{align*} \mathbb{Var}[X] &= \sum_{i=1}^n p_i(1-p_i) \\ &= n\bar{p} - n(\sigma_p^2+\bar{p}^2) \\ &= n\bar{p}(1-\bar{p}) - n\sigma_p^2. \end{align*}
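As a quick numerical sanity check of the identity above, here is a minimal Python sketch (the function names are my own). It computes $\mathbb{Var}[X]=\sum_i p_i(1-p_i)$ directly and compares it with the form $n\bar{p}(1-\bar{p}) - n\sigma_p^2$, for two bias lists sharing the same mean but with different spread:

```python
def pbd_variance(ps):
    """Exact Poisson binomial variance via independence: sum of p_i(1 - p_i)."""
    return sum(p * (1 - p) for p in ps)

def pbd_variance_from_stats(ps):
    """Same variance via the sample statistics: n*pbar*(1-pbar) - n*sigma_p^2."""
    n = len(ps)
    pbar = sum(ps) / n
    sigma2 = sum((p - pbar) ** 2 for p in ps) / n
    return n * pbar * (1 - pbar) - n * sigma2

# Two bias lists with the same mean pbar = 0.5 but different sigma_p^2:
uniform = [0.5, 0.5, 0.5, 0.5]   # sigma_p^2 = 0    -> Var[X] = 4*0.25 = 1.0
spread  = [0.1, 0.9, 0.1, 0.9]   # sigma_p^2 = 0.16 -> Var[X] = 1.0 - 4*0.16 = 0.36

print(pbd_variance(uniform), pbd_variance_from_stats(uniform))  # 1.0  1.0
print(pbd_variance(spread), pbd_variance_from_stats(spread))    # 0.36 0.36
```

Both formulas agree, and the more spread-out bias list indeed yields the smaller variance of $X$.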
The expression for $\mathbb{E}[X]$ is not surprising. Also, when $\sigma_p^2=0$ we must have $\bar{p}=p_1=\cdots=p_n$, so $X$ is binomially distributed, which matches the formula for $\mathbb{Var}[X]$ computed above.
My confusion is this: why does the variance of $X$ go down as the sample variance $\sigma_p^2$ goes up (with $\bar{p}$ and $n$ fixed)? I find this very counter-intuitive, and would appreciate an explanation. I would have expected that a greater spread in the biases would produce a broader distribution of possible sums, not a narrower one.