This is a concern that will very rarely, if ever, be realized. For a moderately sized neural network whose hidden layers each have $1000$ units, if the dropout probability is set to $p=0.5$ (the high end of what's typically used), then the probability of all $1000$ units in a layer being zeroed is $0.5^{1000} = 9.3\times10^{-302}$, a mind-bogglingly tiny value. Even for a very small network with only $50$ units in its hidden layer, the probability of all units being zeroed is $0.5^{50}=8.9\times10^{-16}$, or less than one in a thousand trillion.
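If you want to sanity-check those numbers yourself, a couple of lines of plain Python will do it; with dropout probability $p$ and $n$ independent units, the probability of every unit being dropped is just $p^n$:

```python
# Probability that every one of n units is dropped, assuming each unit
# is zeroed independently with probability p.
p = 0.5

for n in (1000, 50):
    print(f"n = {n:4d}: P(all units dropped) = {p**n:.3g}")

# n = 1000: P(all units dropped) = 9.33e-302
# n =   50: P(all units dropped) = 8.88e-16
```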
So in short, this isn't something you need to worry about in most real-world situations, and in the rare instances where it does happen, you could simply rerun the dropout step to obtain a new dropout mask, as in the sketch below.
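Here is a minimal NumPy sketch of what "rerun the dropout step" could look like; `sample_dropout_mask` is a hypothetical helper, not part of any framework:

```python
import numpy as np

def sample_dropout_mask(n_units, p_drop, rng):
    """Sample a dropout mask, resampling in the (vanishingly unlikely)
    event that every unit was dropped. Hypothetical helper for
    illustration only."""
    while True:
        mask = (rng.random(n_units) >= p_drop).astype(np.float32)
        if mask.any():  # at least one unit survived
            return mask

rng = np.random.default_rng(0)
mask = sample_dropout_mask(50, 0.5, rng)
```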
UPDATE:
Digging through the source code for TensorFlow, I found the implementation of dropout here. TensorFlow doesn't even bother accounting for the special case where all of the units are zero. If this happens to occur, then the output from that layer will simply be zero. The units don't "disappear" when dropped; they just take on the value zero, which from the perspective of the other layers in the network is perfectly fine. They can perform their subsequent operations on a vector of zeros just as well as on a vector of non-zero values.
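To make that concrete, here is a minimal NumPy sketch of the usual "inverted dropout" formulation (my own illustration, not TensorFlow's actual code), showing that dropped units are just zeros and that an all-zero layer output is still a perfectly valid input to the next layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate, rng):
    """Inverted dropout: zero each element with probability `rate` and
    rescale the survivors by 1 / (1 - rate). Dropped units don't vanish;
    they simply become zeros in the output."""
    keep = (rng.random(x.shape) >= rate).astype(x.dtype)
    return x * keep / (1.0 - rate)

h = rng.normal(size=(1, 5)).astype(np.float32)
print(dropout(h, rate=0.5, rng=rng))   # some entries zeroed, rest rescaled

# Even if every unit were dropped, the next layer just multiplies a
# vector of zeros by its weights and gets zeros (plus the bias) -- no error.
W = rng.normal(size=(5, 3)).astype(np.float32)
b = np.zeros(3, dtype=np.float32)
print(np.zeros((1, 5), dtype=np.float32) @ W + b)
```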