A simplistic way of looking at it is the naive probabilistic calculation: if $P(R|A) = 0.6$ and $P(R|B) = 0.6$, then $P(R|A \cap B) = 1 - 0.4^2 = 0.84$.
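Spelled out, this calculation silently assumes the two predictors' errors are independent:

$$P(\neg R|A \cap B) = P(\neg R|A) \cdot P(\neg R|B) = 0.4 \cdot 0.4 = 0.16, \qquad P(R|A \cap B) = 1 - 0.16 = 0.84.$$

Nothing in the problem justifies that independence assumption, and that is exactly what the rest of this answer picks apart.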
But it is not that simple. We can't treat the predictors as random-variable-producing machines that just happen to correlate with the real outcome. The 60% is only the average frequency of rain across all occasions when the predictor was positive; it does not mean the probability of rain tomorrow is 60% every single time A is positive. A may capture some kinds of rain events far better than others (say, whenever it rained the previous day, A is always positive, and in those cases it rained the next day in 90% of cases as well).
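Here is a tiny simulation of that effect. The numbers are made up for illustration (a 50/50 split of A's positives between "after rain" and ordinary days, with 90% and 30% hit rates respectively); the point is only that a 60% average can hide very different conditional accuracies:

```python
import random

random.seed(0)

# Hypothetical numbers, not from the problem: half of A's positive forecasts
# come the day after rain, where A is right 90% of the time; the other half
# come on ordinary days, where it is right only 30% of the time.
N = 100_000
correct = 0
for _ in range(N):
    after_rain = random.random() < 0.5      # which kind of positive forecast this is
    accuracy = 0.9 if after_rain else 0.3   # A's hit rate differs by context
    correct += random.random() < accuracy   # did it actually rain?

print(f"P(rain | A positive) ≈ {correct / N:.2f}")   # ~0.60 overall
```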
If the two predictors are identical, adding B merely reproduces A's output, and the confidence stays at 60%.
On the other hand, if B filters out exactly the events that A predicted wrongly, the combination yields 100% confidence. Imagine B predicted no rain on every occasion when A wrongly predicted rain: the 40% of A's positives that turned out dry never receive a positive from B, so whenever both say rain, it rains.
Yet it is equally possible that whenever A and B predict rain at the same time, there is never any rain at all! Imagine B is positive precisely on the 40% of A's positives that turned out dry. We really have no information about how often the two fire together, nor about their error rates on the "no rain" events.
Moreover, it is quite possible that A and B never give a positive prediction at the same time. Imagine A is tuned only for summer and always negative in other seasons, while B is tuned for spring. In that case $P(R|A \cap B)$ is undefined, since the conditioning event never occurs.
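To make the range of outcomes concrete, here is a toy tally of the four extreme cases above. The counts are assumed for the sketch: 100 days on which A was positive, 60 rainy and 40 dry, matching $P(R|A) = 0.6$; B's own 60% can always be arranged on the days when A was negative, which is not modelled here.

```python
# All that matters for P(R | A and B) is which of A's 100 positive days
# B also flags: (rainy, dry) counts the overlap in each scenario.
cases = {
    "identical":     (60, 40),  # B flags all 100 days A flagged
    "complementary": (60, 0),   # B flags only the 60 rainy ones
    "adversarial":   (0, 40),   # B flags only the 40 dry ones
    "disjoint":      (0, 0),    # B never overlaps with A
}
for name, (rainy, dry) in cases.items():
    total = rainy + dry
    result = f"{rainy / total:.2f}" if total else "undefined (never co-occurs)"
    print(f"{name:14s} P(R | A and B) = {result}")
```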
All in all, in reality the combined prediction could range anywhere from 0% to 100%, or even be undefined.