It depends on what your data looks like.
If you can see your data as a histogram (a list of (data_point, frequency) tuples), and you're mostly interested in high frequencies, the easiest option is probably to use a technique like this one to get an $(\varepsilon,\delta)$-differentially private version of your data. Note that if each user can contribute several data points, you have to limit the contribution of a single user (e.g. drop all records except the first $k$ of each user) and scale the noise by $k$, since a single user can now influence up to $k$ counts. If you're unhappy with a $\delta>0$, and you can describe the space of possible data points (which might be large but bounded), you can use the method described in this paper.
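To make the histogram case more concrete, here is a minimal Python sketch of the general idea: a thresholded noisy histogram with per-user contribution bounding. The function name, the choice of Laplace noise, and the threshold formula (the standard one for $k=1$; a careful analysis for $k>1$ needs an adjusted threshold and budget split) are illustrative assumptions, not the exact algorithm from the linked technique.

```
import math
import random
from collections import defaultdict

def private_histogram(records, epsilon, delta, k=1):
    """Thresholded noisy histogram over (user_id, data_point) records."""
    # 1. Contribution bounding: keep at most k records per user.
    per_user = defaultdict(list)
    for user_id, value in records:
        if len(per_user[user_id]) < k:
            per_user[user_id].append(value)

    # 2. Build the raw histogram from the bounded records.
    counts = defaultdict(int)
    for values in per_user.values():
        for value in values:
            counts[value] += 1

    # 3. Add Laplace noise with scale k/epsilon (after bounding, one user
    #    can change the counts by at most k in total).
    scale = k / epsilon
    def laplace(b):
        # The difference of two i.i.d. exponentials is Laplace-distributed.
        return random.expovariate(1 / b) - random.expovariate(1 / b)
    noisy = {v: c + laplace(scale) for v, c in counts.items()}

    # 4. Only release bins whose noisy count clears the threshold, so a bin
    #    contributed by a single user appears with probability <= delta.
    #    (This is the usual k=1 threshold; k>1 needs a tighter analysis.)
    threshold = 1 + scale * math.log(1 / (2 * delta))
    return {v: c for v, c in noisy.items() if c >= threshold}
```

The thresholding step is what buys you the $\delta$: bins that only one or a few users contributed to are very unlikely to survive into the output.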
If your data doesn't look like a histogram, you might want to use a first step of generalization. For example, if each data point is a precise GPS location, each is likely to be unique, but you can replace them by bounding boxes instead. If you want some kind of adaptive generalization, returning bounding boxes of different sizes depending on how many people fall into each, this paper proposes several techniques.
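As an illustration of the non-adaptive version of this idea, here is a small sketch that snaps each GPS point to a fixed grid cell; the function name and cell size are made up for the example, and adaptive approaches instead split cells more finely where many points fall. Once every point is replaced by its cell, the data looks like a histogram again and the technique above applies.

```
import math

def snap_to_grid(lat, lon, cell_size=0.01):
    """Generalize a GPS point to the grid cell (bounding box) containing it.

    cell_size is in degrees; 0.01 degrees is roughly 1 km at the equator.
    Returns the (south, west, north, east) corners of the cell.
    """
    south = math.floor(lat / cell_size) * cell_size
    west = math.floor(lon / cell_size) * cell_size
    return (round(south, 6), round(west, 6),
            round(south + cell_size, 6), round(west + cell_size, 6))
```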
Finally, there are also techniques for special kinds of data where you can use domain-specific knowledge to get more realistic and useful synthetic data. One example is location traces, with algorithms such as this one or this one.
If your use case doesn't fit any of the cases above, I'd encourage you to give a more detailed explanation of what kind of data you want to generate.