So one day, after a tasty dinner full of bananas, an idea comes to your mind (you are person A): "What if eating bananas can cure cancer?" Being a scientist at heart, you conduct a double-blind study, and your data show that bananas cure cancer with probability at least 99.99%. "Wow," you think, "I am onto something important here!" You pack your stuff and hastily head to the central organization for cancer research in your country. "People!" you exclaim, entering the building, "I have found the cure for cancer!"
They direct you to the head of the organization (person B). You show him your data, and he agrees that your conclusion, based on your data, is correct. "But, you see," says the guy, "this year at least 100,000 people like you came to us reporting findings of curing cancer with various kinds of fruit. Even if none of those fruits does anything, we expect at least some of the studies to show a cure with probability 99.99% purely by chance. Therefore I am unconvinced, and you should conduct further research." "So what?" you say. "I didn't conduct all those experiments; I conducted only one experiment, and it showed me that bananas cure cancer with probability 99.99%. Therefore I am bound to believe that bananas cure cancer with probability 99.99%. All the other studies used different fruits, and none of them used bananas, so they are irrelevant to this matter."
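Person B's objection is the standard multiple-comparisons point, and the arithmetic behind it is easy to make concrete. Here is a minimal sketch using the story's own figures (100,000 studies, a 99.99% threshold) and assuming the studies are independent:

```python
# Person B's base-rate reasoning, using the numbers from the story
n_studies = 100_000
false_positive_rate = 1 - 0.9999  # 0.01% chance a useless fruit clears the bar

# Expected number of studies that reach the 99.99% level by chance alone
expected_false_positives = n_studies * false_positive_rate
print(expected_false_positives)  # 10.0

# Probability that at least one study reaches the bar by chance
p_at_least_one = 1 - (1 - false_positive_rate) ** n_studies
print(round(p_at_least_one, 6))  # ~0.999955
```

Under these assumptions, B expects roughly ten fruit "cures" to show up every year even if no fruit cures anything, so a single 99.99% result somewhere is almost guaranteed and carries little evidential weight on its own.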
Assume that both person A and person B are perfectly rational and completely trust each other. It is generally held that two rational agents given the same knowledge must come to the same conclusions. In this case, though, it appears to me that there is nothing person B can tell person A that would lead person A to accept person B's position. What will happen, I think, is that person A will accept that person B should indeed hold the position that it is not clear whether bananas cure cancer, and person B will accept that person A is justified in believing that bananas cure cancer with probability 99.99%.
So this appears to be a paradox to me: on the one hand, rational agents given the same information must come to the same conclusion; on the other hand, we have here rational agents who share the same information yet come to different conclusions. Do you see any resolution to this?