My question is related to the concept of differential privacy in deep learning. I have found many papers on training neural networks with differential privacy, but is it also possible to achieve differential privacy if you already have a non-private model and simply add noise to its output, e.g. Laplace noise?
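To make it concrete, here is roughly what I have in mind, just a sketch where the sensitivity and epsilon values are made-up placeholders (obtaining the sensitivity is exactly the part I don't know how to do):

```python
import numpy as np

def laplace_mechanism(output, sensitivity, epsilon, rng=None):
    """Add Laplace noise calibrated to sensitivity/epsilon to a model output."""
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon          # Laplace scale b = Δf / ε
    noise = rng.laplace(loc=0.0, scale=scale, size=np.shape(output))
    return np.asarray(output) + noise

# e.g. perturb the scores of an already trained, non-private classifier
scores = np.array([0.7, 0.2, 0.1])         # output of some trained model
private_scores = laplace_mechanism(scores, sensitivity=1.0, epsilon=0.5)
```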
Are there papers that take this approach, and is it a valid way to get differential privacy at all? And how would you calculate the sensitivity of a neural network?
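By sensitivity I mean, as far as I understand it, the global $\ell_1$-sensitivity that the Laplace mechanism requires,

$$\Delta f = \max_{D \sim D'} \lVert f(D) - f(D') \rVert_1,$$

where $D$ and $D'$ are datasets differing in a single record. I don't see how to bound this quantity for the function computed by a trained neural network.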