The other thing to think about is the underlying assumptions we make when we talk about means, standard deviations, and correlations.
If we are talking about a data sample, one common assumption is that the data are (at least approximately) normally distributed, or can be transformed so that they are (e.g. via a log transform). If you observe a standard deviation of zero, there are two scenarios: either the true standard deviation is nonzero but very small, and every observation in your dataset happens to coincide with the mean (this can occur, for example, when you measure data at a coarse level of precision); or the model is misspecified.
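As a quick illustration of the first scenario, here is a minimal sketch; the mean of 5.0, the tiny true spread, and the one-decimal rounding are made-up values chosen only to make the effect visible:

```python
import numpy as np

rng = np.random.default_rng(0)

# True process: nonzero but tiny standard deviation around a mean of 5.0.
true_values = rng.normal(loc=5.0, scale=0.001, size=100)

# Recording the measurements at coarse precision (one decimal place)
# collapses every observation onto the mean.
measured = np.round(true_values, 1)

print(true_values.std())  # small but nonzero, roughly 0.001
print(measured.std())     # exactly 0.0 -- the sample sd vanishes
```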
In this second scenario, the standard deviation is a meaningless measure, and the correlation, which divides by it, is undefined.
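Concretely, Pearson's correlation divides the covariance by the product of the two standard deviations,

$$\rho_{X,Y} = \frac{\operatorname{Cov}(X, Y)}{\sigma_X \, \sigma_Y},$$

so if either $\sigma_X$ or $\sigma_Y$ is zero, the covariance is also zero, the ratio is $0/0$, and the correlation is simply undefined.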
More generally, for the correlation to be a well-defined concept, both underlying distributions must have finite second moments (so the covariance and variances exist) and non-zero standard deviations (so the division is defined).
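To see why the finite-second-moment condition matters, here is a small sketch using standard Cauchy draws, which have no finite second moment; the sample size and seeds are arbitrary:

```python
import numpy as np

# Standard Cauchy variables have no finite second moment, so their
# population correlation does not exist.  np.corrcoef still returns a
# number for any finite sample, but across independent large samples
# that number never settles down around a single value.
for seed in range(5):
    rng = np.random.default_rng(seed)
    x = rng.standard_cauchy(100_000)
    y = x + rng.standard_cauchy(100_000)  # y clearly depends on x
    print(seed, np.corrcoef(x, y)[0, 1])
```

Each run is dominated by whichever draw happens to be the most extreme outlier, so the reported "correlation" swings between values near 0 and values near 1 rather than converging as the sample grows.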