If by pooling the data you mean simply collapsing the data sets together, that is totally wrong and is just an invitation for Simpson’s paradox. Unfortunately, what people often mean by pooling is pooling estimates or effect sizes, and that can be justified – so there is a real possibility of confusion.
The difficult judgement for something like this is deciding between two possibilities: either a defensible likelihood can be defined that adequately reflects the commonalities and differences in this set of experiments, or that is hopeless, but it is still plausible that all the experiments are assessing whether something has any effect at all (and possibly in the same direction).
The second possibility sets up the logical basis for combining the p-values: if there is no effect (the null is true), the p-values are an independent sample from Uniform(0,1), so in principle a null distribution for any function of them is defined, and a cut-off value will give the desired combined Type I error rate. If there was an effect (in any or all of the experiments, in the same or differing directions), the power of various choices of combination function will vary. For instance, if one suspects a common small effect in the same direction in all of the studies, a sum-type function will have more power. If one suspects only a few studies actually had an effect, a minimum-type function would be preferred.
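To make those two combination rules concrete, here is a minimal sketch (the four p-values are invented purely for illustration), computing a sum-type combination (Fisher's method) and a minimum-type combination (Tippett's method) directly from their null distributions:

```python
import numpy as np
from scipy import stats

# Four p-values, one per experiment (illustrative numbers only)
p = np.array([0.11, 0.06, 0.20, 0.09])
k = len(p)

# Sum-type combination (Fisher): under the null, -2 * sum(log p) ~ chi-square(2k).
# More powerful when a common small effect in the same direction is suspected.
fisher_stat = -2 * np.sum(np.log(p))
p_fisher = stats.chi2.sf(fisher_stat, df=2 * k)

# Minimum-type combination (Tippett): under the null, min(p) is the smallest of
# k uniforms, so the combined p-value is 1 - (1 - min(p))^k.
# More powerful when only a few studies are suspected to have a real effect.
p_tippett = 1 - (1 - p.min()) ** k
```

Note that with these numbers the sum-type rule rewards four modest p-values all leaning the same way, while the minimum-type rule effectively asks whether the single best study survives a multiplicity correction.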
For the first possibility, obtaining a defensible likelihood, let me simplify the outcome to simply present versus absent – proportions. The usual assumptions would give two parameters, Pc and Pt, in each experiment, so Pc1,Pc2,Pc3,Pc4 and Pt1,Pt2,Pt3,Pt4. It is always worthwhile to consider whether any of these should be taken as common, to assess this graphically, and possibly to decide: no, and do no pooling at all.
Now what might be considered common here? Surely not any of Pc1,Pc2,Pc3,Pc4, but perhaps Pt1/Pc1,Pt2/Pc2,Pt3/Pc3,Pt4/Pc4 all equal R, as common relative effects can be defensible (the parameters are now just 5 – R,Pc1,Pc2,Pc3,Pc4). More adventurously, you might instead think of these ratios differing haphazardly but coming from a common distribution. With that, the common thing is defined as the parameters of that distribution (a random effects or mixing distribution). To get a sense of how to actually do something like this, either frequentist or Bayesian, you may wish to read O’Rourke and Altman, Statistics in Medicine, 2005.
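As a rough sketch of both routes (the 2x2 counts below are invented for illustration), one can pool the log relative risks by inverse-variance weighting under a single common R, and then allow the ratios to come from a common distribution via the DerSimonian–Laird moment estimator of the between-study variance:

```python
import numpy as np

# Hypothetical counts per experiment: (events_t, n_t, events_c, n_c)
data = [(12, 100, 6, 100), (20, 150, 11, 150),
        (8, 80, 5, 80), (15, 120, 7, 120)]

log_rr, var_log_rr = [], []
for et, nt, ec, nc in data:
    log_rr.append(np.log((et / nt) / (ec / nc)))
    # Standard large-sample variance of the log relative risk
    var_log_rr.append(1 / et - 1 / nt + 1 / ec - 1 / nc)
log_rr, v = np.array(log_rr), np.array(var_log_rr)
k = len(data)

# Common-R model: inverse-variance weighted average of the log ratios
w = 1 / v
log_R_fixed = np.sum(w * log_rr) / np.sum(w)

# Random-effects model: DerSimonian-Laird moment estimate of the
# between-study variance tau^2, truncated at zero
Q = np.sum(w * (log_rr - log_R_fixed) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Re-weight with the between-study variance added in
w_re = 1 / (v + tau2)
log_R_random = np.sum(w_re * log_rr) / np.sum(w_re)
```

The design choice mirrors the text: the fixed-effect estimate treats R as literally common across the four experiments, while the random-effects estimate treats the four ratios as draws from a common distribution whose parameters (mean and tau^2) are the common thing.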
Now, you have survival data, so your less adventurous definition is still one of common relative effects, but the arbitrary control-group parameters are now hazard functions, which are more challenging to deal with than proportions.