> Scipy's implementation of the test returns a value from 0 to 0.5,
If that's actually true, then it's not giving you a p-value.
And indeed, I've just gone and looked. It isn't giving a proper p-value -- though it says it is. It is instead doing something almost guaranteed to make some people do their tests wrong:
> The reported p-value is for a one-sided hypothesis, to get the two-sided p-value multiply the returned p-value by 2.
Which one-sided test is it doing? Well, if you get the two-sided p-value by doubling it, it's choosing the direction of the test based on the data. That means you can do neither one-sided nor two-sided tests using that number. It's not actually a p-value for any specific test*, even though it claims to be.
* not for a two-sided alternative, nor a one-sided greater-than alternative, nor a one-sided less-than alternative. So it's not a p-value for any of the possible Mann-Whitney tests.
[This is, frankly, appalling. IMO that's really bad software design; it looks almost guaranteed to lead novice users to make mistakes.]
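To make the trap concrete, here's a minimal sketch (my own example, not from the question), assuming the older scipy behaviour described above, where `mannwhitneyu` returned the smaller one-sided p-value:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, size=30)   # sample from N(0, 1)
y = rng.normal(loc=0.5, size=30)   # sample from N(0.5, 1)

# Under the old behaviour described above, the returned p-value was
# one-sided, in whichever direction the data happened to favour --
# so by itself it wasn't a valid p-value for any pre-specified test.
u_stat, p_returned = mannwhitneyu(x, y)

# The docstring's suggested fix: double it (capping at 1) to get a
# two-sided p-value.
p_two_sided = min(1.0, 2 * p_returned)

# More recent scipy versions accept an explicit `alternative`
# argument, which removes the ambiguity: state the test you want
# up front rather than letting the data pick a direction.
u_stat, p = mannwhitneyu(x, y, alternative='two-sided')
print(p_two_sided, p)
```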
> and if I understand correctly, 0.5 means that there is a 100% chance that the two datasets came from the same source. Is this correct?
Well, let's double it, so we're clear we're talking about a two-tailed p-value of 1.0.
No, that's not what it would mean at all. A p-value is the probability of observing a test statistic at least as unusual as the one you got, if the null hypothesis were true.

If it's high, that doesn't mean the null is exactly true. It just means you couldn't have seen a statistic more consistent with the null -- and such a statistic can also be produced in situations where the null is false.
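If it helps, here's a small simulation (my own illustration, with made-up parameters) showing that large p-values turn up regularly even when the null is false; the two samples below come from genuinely different populations:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# The null ("same distribution") is false by construction: the two
# populations differ in location by half a standard deviation.
n_sims, big_p = 1000, 0
for _ in range(n_sims):
    x = rng.normal(loc=0.0, size=5)
    y = rng.normal(loc=0.5, size=5)
    _, p = mannwhitneyu(x, y, alternative='two-sided')
    if p > 0.9:
        big_p += 1

# A noticeable fraction of runs give p near 1, yet the null is false
# in every one of them -- a high p-value is not evidence that the two
# datasets "came from the same source".
print(f"{big_p} of {n_sims} simulations gave p > 0.9")
```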
You may like to search our site for questions on interpreting p-values.
e.g. these may help:

- Interpretation of p-value in hypothesis testing
- is this interpretation of the p-value legit?
- Interpreting p-values