I use Python and the following definition of the KL divergence

    import numpy as np

    def kl_divergence(p, q):
        return np.sum(np.where(p != 0, p * np.log(p / q), 0))
to calculate the divergence between two normal distributions:

    from scipy.stats import norm

    x = np.linspace(norm.ppf(0.01, loc=0, scale=1), norm.ppf(0.99, loc=0, scale=1), 100)
    a = norm.pdf(x, 0, 2)
    b = norm.pdf(x, 2, 2)
    kl_divergence(a, b)
The result depends on the choice of x, and analytically it is wrong, because I used the KL divergence for discrete distributions on samples of continuous densities. I believe I could use these results for some practical purposes, but I need the real divergences. My question is: how can I implement the KL divergence in Python so that it yields the analytically correct divergence? Can this be done without integration, by somehow transforming the discrete results? If not, how can I integrate with NumPy and SciPy? I want to use it for the distributions that SciPy provides (normal, Laplace, ...).