I understand that you have already done some multiple imputation, but there are some particular days for which there are no data at all. You might first consider being more aggressive in the imputation if you believe that values from those days are MAR, with no relation to the values that would otherwise have been reported. For example, use values from nearby days to help impute, rather than imputing day by day, which I infer from your question was your approach.
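As one illustration of borrowing strength from nearby days, here is a minimal sketch using linear interpolation over the day index. The data are hypothetical, and a single deterministic fill like this understates uncertainty compared with proper multiple imputation; it is only meant to show the "use close-by days" idea.

```python
import numpy as np

# Hypothetical daily record for one indicator, with whole days missing (NaN)
values = np.array([1.2, 1.4, np.nan, np.nan, 1.9, 2.0, np.nan, 1.7])
days = np.arange(len(values))

observed = ~np.isnan(values)

# Linear interpolation from neighbouring observed days -- one simple way
# to borrow information across time instead of treating each day alone
filled = values.copy()
filled[~observed] = np.interp(days[~observed], days[observed], values[observed])
```

A full multiple-imputation approach would instead draw several plausible values per missing day, reflecting the uncertainty in the fill.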
If you already have done as much imputation as is reasonable, then you need to consider how you are going to use your results and the implications of the missing data for your use. I know of no particular reference on this, but the principles are pretty clear.
This "entropy" calculation is used to place different weights among indicators of water quality. The values entering the calculation aren't strictly probabilities, but they are non-negative values that sum to 1 so that an entropy-like calculation is possible. The idea is that indicators whose values seldom change much relative to their criteria of being out-of-acceptable ranges have high entropy by this calculation, little information, and thus should be weighted less than other indicators.
The entropy calculated for any indicator will of course differ from the "true" value if you have missing data. You have to use your knowledge of the subject matter to judge whether the difference is big enough to matter. If the data are MAR, it seems you will still identify the low-entropy/high-information indicators (those you presumably care most about) in any event, although the relative weights may differ depending on which values are missing.
You also could calculate the entropy for the complete data that you have, then simulate missing data by repeatedly removing different days at random and recalculating. That should give some idea of how much missing days can matter for this calculation.
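That sensitivity check might look like the following sketch, using hypothetical complete data for a single indicator and an illustrative choice of 10 dropped days:

```python
import numpy as np

rng = np.random.default_rng(42)

def scaled_entropy(x):
    """Shannon-style entropy of one indicator's values, scaled by ln(n)."""
    x = np.asarray(x, dtype=float)
    p = x / x.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(len(x))

# Hypothetical complete record: 60 days of one indicator
complete = rng.uniform(0.5, 2.0, size=60)
full_value = scaled_entropy(complete)

# Repeatedly drop 10 days at random and recompute the entropy
n_drop, reps = 10, 1000
sims = []
for _ in range(reps):
    keep = rng.choice(len(complete), size=len(complete) - n_drop, replace=False)
    sims.append(scaled_entropy(complete[keep]))
sims = np.array(sims)

# The spread of the recalculated values shows how sensitive the
# entropy is to which days happen to be missing
spread = sims.std()
```

Comparing `spread` (and the shift of `sims.mean()` from `full_value`) against differences between indicators tells you whether the missing days could plausibly change the weighting.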
Finally, note that the plug-in estimator of Shannon entropy is biased, which raises some questions about this use of an entropy-like calculation for weighting indicators. Again, the important question is whether these issues are large enough to make a difference in your application.
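A small simulation, with arbitrary illustrative numbers, shows the direction of that bias: for a uniform distribution over K categories, the plug-in estimate from a modest sample falls below the true entropy on average, roughly by (K - 1)/(2n) (the amount the Miller-Madow correction adds back).

```python
import numpy as np

rng = np.random.default_rng(0)

K, n, reps = 8, 30, 2000
true_entropy = np.log(K)  # entropy of a uniform distribution over K categories

estimates = []
for _ in range(reps):
    # Sample n observations from the uniform distribution and plug in
    # the empirical proportions
    counts = np.bincount(rng.integers(0, K, size=n), minlength=K)
    p = counts / n
    p = p[p > 0]
    estimates.append(-(p * np.log(p)).sum())
estimates = np.array(estimates)

bias = estimates.mean() - true_entropy  # negative: plug-in underestimates
```

Whether a bias of this size matters for the weighting depends on how it compares with the entropy differences between your indicators.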