Yes, the dropped neurons are treated as zero during backpropagation as well – their gradients are zeroed out. Otherwise dropout wouldn't do anything! Remember that during training, forward propagation mainly serves to set up the backward pass, which is where the weights are actually updated (it's also used for tracking training error and the like).
In general, it's important to account for anything that you're doing in the forward step in the backward step as well – otherwise you're computing a gradient of a different function than you're evaluating.
In Caffe, for example, it's implemented as follows (as can be verified from the source):
- In forward propagation, inputs are set to zero with probability $p$, and otherwise scaled up by $\frac{1}{1-p}$.
- In backward propagation, gradients for the same dropped units are zeroed out; the other gradients are scaled up by the same $\frac{1}{1-p}$.
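To make this concrete, here is a minimal NumPy sketch of the same (inverted) dropout scheme – not Caffe's actual code; the function names and the example values are just for illustration. The key point is that the backward pass reuses exactly the mask sampled in the forward pass:

    import numpy as np

    def dropout_forward(x, p, rng):
        """Inverted dropout: zero each unit with probability p,
        scale the kept units by 1/(1-p)."""
        # mask is 1/(1-p) where the unit is kept, 0 where it is dropped
        mask = (rng.random(x.shape) >= p) / (1.0 - p)
        return x * mask, mask

    def dropout_backward(grad_out, mask):
        """Backward pass reuses the same mask: dropped units get zero
        gradient, kept units are scaled by the same 1/(1-p)."""
        return grad_out * mask

    rng = np.random.default_rng(0)
    x = np.ones((2, 4))
    out, mask = dropout_forward(x, p=0.5, rng=rng)        # forward with dropout
    grad_in = dropout_backward(np.ones_like(out), mask)   # gradients use the same mask

Because the kept units are already scaled by $\frac{1}{1-p}$ during training, no rescaling is needed at test time; dropout is simply switched off.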