What analysis is Turing talking about?
The script was written by Graham Moore based on a book by Andrew Hodges.
We have a dramatic (and possibly fictional) conversation: one Turing may actually have had, may have had something vaguely like, or may never have had at all, but which serves a dramatic purpose, conveying something that would otherwise be harder or less watchable to convey. Guessing what was in the scriptwriter's mind is difficult without asking him; we might make guesses, but in the end they may be little more than that.
The following is clear enough:
- If they act on every signal they intercept, it will very quickly become obvious that the explanation is that the code has been broken, at which point all future value from being able to break it is lost.
- If they never act on the knowledge, they will of course learn a lot, and are unlikely to be discovered, but they get no operational benefit from it.
- If they make careful use of the information, acting only when it matters most, they may be able to reap a great deal of reward.
- In short, the more they use the information, the more "signal" they send that they can decode the messages.
There is, then, a clear statistical problem for the people using Enigma: yet another mission has been lost, perhaps because the enemy was well prepared; has our unbreakable cipher been broken? If it hasn't been broken, changing the code will be a difficult, expensive, and even somewhat risky process (a completely new, hastily constructed system may be vulnerable in unexpected ways compared to a tried-and-true one), so you really want to avoid changing it if you don't need to. On the other hand, if it has been broken, you really do need a new code.
So the Enigma-user will be trying to pick up a weak, noisy signal from the Allies' actions. We might frame it as a hypothesis test (do they "know" more than can be explained by chance?), or as a decision-theory problem (minimize the expected loss across the two actions, change-the-code and keep-the-code, given the data on Allied actions); both framings are sketched below.
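As a concrete illustration, here is a minimal sketch of both framings in Python. All the numbers (operation counts, anticipation rates, losses, and the prior) are invented purely for illustration; nothing here comes from the film or from history.

```python
from math import comb

def binom_pmf(n: int, k: int, p: float) -> float:
    """Probability of exactly k successes in n trials with success rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# --- Hypothesis test: could chance alone explain the Allies' successes? ---
# Hypothetical: of 40 operations, 12 were anticipated by the enemy, where
# ordinary reconnaissance should anticipate only ~10% of them.
n, k, p_chance = 40, 12, 0.10
p_value = sum(binom_pmf(n, j, p_chance) for j in range(k, n + 1))
print(f"P(>= {k} anticipated | cipher secure) = {p_value:.5f}")  # tiny -> suspicious

# --- Decision theory: minimise expected loss over change-the-code vs keep ---
p_if_broken = 0.50           # hypothetical anticipation rate if the cipher is read
prior_broken = 0.05          # hypothetical prior belief that it has been broken
like_broken = binom_pmf(n, k, p_if_broken)
like_secure = binom_pmf(n, k, p_chance)
post_broken = (like_broken * prior_broken) / (
    like_broken * prior_broken + like_secure * (1 - prior_broken))

loss_change = 10.0           # cost of fielding a hasty new cipher system
loss_keep_if_broken = 200.0  # cost of the enemy continuing to read your traffic
exp_loss_keep = post_broken * loss_keep_if_broken
print(f"P(broken | data) = {post_broken:.3f}")
print("change the code" if loss_change < exp_loss_keep else "keep the code")
```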
In modern terms, game theory* would suggest there's an optimal mixed strategy for the Allies: for any given level of benefit from using a particular piece of information, there's a corresponding probability with which you should use it (see the sketch after the footnote). Realistically, there are some situations in which you may be able to fake a plausible mechanism for knowing that doesn't involve code-breaking, but if you start doing that, you also risk the deception itself being discovered; even a few more people knowing about such a thing runs a great risk, the same 'the more people that know' risk mentioned in the exchange above.
* a field in which Turing may or may not have had any formal background, but some of the ideas may well have been clear to him anyway
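One way to make that "corresponding probability" concrete (this framing is mine, not something from the film or the answer above): give each type of decrypted message a benefit and a "signal cost" (how suspicious acting on it looks), and maximize expected benefit subject to a budget on expected signal. That linear program is a fractional knapsack, and its solution is exactly a rule of the form "act on message type i with probability p_i". All message types and numbers below are hypothetical.

```python
def action_probabilities(messages, signal_budget):
    """messages: list of (name, benefit, signal_cost) tuples.
    Returns {name: probability of acting on that message type}."""
    probs = {name: 0.0 for name, _, _ in messages}
    remaining = signal_budget
    # Spend the signal budget on the best benefit-per-unit-signal types first.
    for name, benefit, cost in sorted(messages, key=lambda m: m[1] / m[2], reverse=True):
        p = min(1.0, remaining / cost)
        probs[name] = p
        remaining -= p * cost
        if remaining <= 0:
            break
    return probs

# Hypothetical message types: (label, operational benefit, detection signal).
intel = [
    ("convoy route", 10.0, 3.0),
    ("supply schedule", 4.0, 1.0),
    ("weather report", 1.0, 0.5),
]
print(action_probabilities(intel, signal_budget=2.0))
# -> act always on supply schedules, about 1/3 of the time on convoy routes,
#    never on weather reports
```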
So Turing is right: given that they can break the messages (at least fairly frequently), they'll have an important flow of very specific information, which will be of enormous benefit. But how should that information best be used so as to avoid* conveying that the code can be broken? A probabilistic (statistical) approach is actually a very smart strategy, because it allows you to make the signal as hard as possible to detect for a given level of use of the information.
Indeed, the more you "pick and choose" instead, the easier you make it to detect that you know, since whatever basis you use to choose can be exploited by someone occasionally feeding just the right false information into the communications system in the hope of catching you (if particular kinds of information seem to get acted on, the enemy can focus on those ... and quickly confirm that they are). By contrast, a carefully tailored probabilistic approach makes every kind of response to such a tactic fairly noisy, so the enemy would need to observe for much longer to be confident the code was broken; the calculation after the footnote puts rough numbers on this.
* 'avoid' in the sense that they can get as much total benefit from the information as possible
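To put rough numbers on "a lot longer" (again, all values invented for illustration): if the Allies act on planted bait messages with true probability p1, while ordinary reconnaissance would explain an action rate of p0, then each bait yields on average KL(p1 || p0) nats of evidence that the code is broken, and reaching, say, 5 nats (roughly 150-to-1 odds) takes about 5 / KL(p1 || p0) baits.

```python
from math import log

def kl_bernoulli(p1: float, p0: float) -> float:
    """Average evidence per bait, in nats, for 'code broken' vs 'chance alone'."""
    return p1 * log(p1 / p0) + (1 - p1) * log((1 - p1) / (1 - p0))

p0 = 0.10                      # action rate explainable by ordinary reconnaissance
for p1 in (0.90, 0.30, 0.15):  # greedy, moderate, and cautious use of decrypts
    n = 5 / kl_bernoulli(p1, p0)
    print(f"acting on baits at rate {p1:.2f}: ~{n:.0f} baits to confirm the break")
```

Acting greedily gives the game away within a handful of baits; holding the action rate close to the chance-explainable level stretches the enemy's required observation time by roughly two orders of magnitude, which is exactly the trade-off the probabilistic strategy is tuning.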