As I understand it, you want to combine all the datasets and get the percentiles for the entire combined dataset (a single variable?), but it doesn't fit in memory.
I can't think of any way to "aggregate" percentiles computed separately on each dataset, so I would approach it as a technical problem.
Depending on the tech / software you're using, are there any tools to sort the variable in-place? Is it possible to increase the virtual memory?
If not, you may have to think of ways to manually hack your data into batches and preprocess it step by step, without loading everything into memory (I used to do this a lot for particle physics experiments with insanely huge datasets).
For example, do you have a lot of repeated values? Then you could use a kind of compression: scan the datasets one by one and store the counts for each value you encounter (rather than the values themselves).
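A minimal sketch of that counting idea, assuming the data sits in several plain-text files with one numeric value per line (the file names are just placeholders):

```python
from collections import Counter

def percentile_from_counts(counts, p):
    """p-th percentile from a value -> count mapping (rough nearest-rank method)."""
    total = sum(counts.values())
    target = max(1, round(p / 100.0 * total))  # rank we need to reach
    seen = 0
    for value in sorted(counts):
        seen += counts[value]
        if seen >= target:
            return value

counts = Counter()
for path in ["part1.txt", "part2.txt", "part3.txt"]:  # hypothetical input files
    with open(path) as f:
        for line in f:
            counts[float(line)] += 1  # only distinct values + counts stay in memory

print(percentile_from_counts(counts, 1), percentile_from_counts(counts, 5))
```

This only pays off if the number of distinct values is small enough for the counts dictionary to fit in memory, which is exactly the "lots of repeated values" case.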
If that's not the case, you could estimate some "breakpoints" in your data, then scan the datasets, splitting the values by range and writing them out into roughly "ordered" chunk files. Once you have the rough chunks, you can sort and refine each one until you have something ordered that fits in memory nicely, so you can read off P1 to P5... then move on to the next part of the data, and so on.
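Here is a rough sketch of that breakpoint/bucketing idea in Python, again assuming one value per line in placeholder files; the breakpoints themselves are an assumption you would estimate from a quick look at a sample of the data:

```python
import bisect

# Guessed breakpoints (e.g. taken from a small random sample of the data).
breakpoints = [0.0, 10.0, 100.0, 1000.0]
buckets = [open(f"bucket_{i}.txt", "w") for i in range(len(breakpoints) + 1)]

# Pass 1: route every value into the bucket file for its range.
n_total = 0
for path in ["part1.txt", "part2.txt", "part3.txt"]:  # hypothetical inputs
    with open(path) as f:
        for line in f:
            x = float(line)
            buckets[bisect.bisect_right(breakpoints, x)].write(f"{x}\n")
            n_total += 1
for b in buckets:
    b.close()

# Pass 2: P1..P5 live in the lowest buckets, so load buckets until we cover
# the first 5% of the ranks, sort that (much smaller) chunk in memory,
# and read the values off by rank.
target = int(0.05 * n_total) + 1
values = []
for i in range(len(breakpoints) + 1):
    with open(f"bucket_{i}.txt") as f:
        values.extend(float(line) for line in f)
    if len(values) >= target:
        break
values.sort()
print(values[int(0.01 * n_total)], values[int(0.05 * n_total)])
```

The key point is that the lowest buckets together hold the globally smallest values, so sorting just those is enough to get the low percentiles; higher percentiles would be handled bucket by bucket in the same way.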
Or, depending on how accurate your results need to be, you could coarsen the data by reducing e.g. the decimal precision (making it easier to count similar values), or apply some other sort of "binning" (like a histogram, though it would be a fine-tuning job to choose a binning that gives you reasonably accurate percentiles).
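For the binning route, a fixed-width histogram gives you approximate percentiles in a single pass; the bin range and bin count below are assumptions you would tune to your data:

```python
# Assumed histogram settings: values in [lo, hi), split into n_bins equal bins.
lo, hi, n_bins = 0.0, 1000.0, 10_000
width = (hi - lo) / n_bins
hist = [0] * n_bins
total = 0

for path in ["part1.txt", "part2.txt", "part3.txt"]:  # hypothetical inputs
    with open(path) as f:
        for line in f:
            x = float(line)
            i = min(n_bins - 1, max(0, int((x - lo) / width)))  # clamp outliers
            hist[i] += 1
            total += 1

def approx_percentile(p):
    """Lower edge of the bin containing the p-th percentile."""
    target, seen = p / 100.0 * total, 0
    for i, count in enumerate(hist):
        seen += count
        if seen >= target:
            return lo + i * width

print(approx_percentile(1), approx_percentile(5))
```

The answer is only accurate to within one bin width, so the finer the bins (up to what memory allows), the better the estimate.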
It's a bit vague, but that kind of thing (essentially, pre-processing steps to cut down your data volume while retaining the essential information) might help.