Suppose a random variable $X$ has a distribution supported on $[0,1]$, i.e. ${\rm Prob}\{ X\in[0,1]\}=1$. I want to maximize its variance subject to the constraint that $\mathbb{E}[X]=\mu\in[0,1]$.
My gut feeling is that the maximizer is the two-point distribution with ${\rm Prob}[X=0]=1-\mu, {\rm Prob}[X=1]=\mu$, but a formal proof of that would seem to involve calculus of variations... and, to put it mildly, I am rusty on this.
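For what it's worth, the only partial argument I can see is the elementary bound below (assuming only $X\in[0,1]$ almost surely); I am not sure whether it already settles the question without any calculus of variations:
$$
X\in[0,1] \text{ a.s.} \;\Rightarrow\; X^2\le X \text{ a.s.}
\;\Rightarrow\;
{\rm Var}(X)=\mathbb{E}[X^2]-\mu^2 \le \mathbb{E}[X]-\mu^2 = \mu(1-\mu),
$$
with equality if and only if $X^2=X$ almost surely, i.e. $X\in\{0,1\}$ almost surely.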
I also think that this problem may have come up in design of experiments: if $X$ is the design variable for an experiment that needs to produce estimates of the regression line $Y=a+bX+{\rm error}$ that are as precise as possible, then the covariance matrix of these estimates is $\sigma^2(X'X)^{-1}$, and my recollection from my DOX course is that the optimal design is the two-point one with support on the extremes of the range.
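As a quick numerical sanity check of that recollection (a minimal sketch, with the number of runs and the comparison design chosen arbitrarily for illustration):

```python
import numpy as np

def slope_variance(x, sigma2=1.0):
    """Variance of the OLS slope estimate for design points x,
    i.e. the (1,1) entry of sigma^2 (X'X)^{-1} with X = [1, x]."""
    X = np.column_stack([np.ones_like(x), x])
    return sigma2 * np.linalg.inv(X.T @ X)[1, 1]

n = 10  # number of runs (arbitrary choice)

# Two-point design: half the runs at 0, half at 1
two_point = np.array([0.0] * (n // 2) + [1.0] * (n // 2))

# Equally spaced design over [0, 1]
equispaced = np.linspace(0.0, 1.0, n)

print(slope_variance(two_point))   # 0.4
print(slope_variance(equispaced))  # ~0.98, i.e. a less precise slope estimate
```

The two-point design at the extremes gives the smaller slope variance, consistent with the intuition that it also maximizes the spread of the design points.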