An asymmetric measure of distance (or dissimilarity) between probability distributions. It can be interpreted as the expected value of the log-likelihood ratio under the alternative hypothesis.
Kullback–Leibler divergence is an asymmetric measure of distance (or dissimilarity) between probability distributions. If $F(\cdot)$ and $G(\cdot)$ are the two distribution functions, with $F(\cdot)$ absolutely continuous with respect to $G(\cdot)$ (i.e., the support of $F(\cdot)$ is a subset of the support of $G(\cdot)$), then the KL divergence is
$$ D(F,G) = \int \ln\left( \frac{ {\rm d} F}{{\rm d}G}\right) {\rm d} F $$
For continuous distributions, interpret ${\rm d}F$ and ${\rm d}G$ as the densities; for discrete distributions, as the point masses.
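For the discrete case, here is a minimal sketch in Python; the vectors `p` and `q` are hypothetical point masses chosen only for illustration, and `scipy.stats.entropy` with a second argument returns the same quantity:

```python
import numpy as np
from scipy.stats import entropy

# Hypothetical point masses for F and G on a common three-point support
p = np.array([0.5, 0.3, 0.2])  # plays the role of dF
q = np.array([0.4, 0.4, 0.2])  # plays the role of dG

# D(F,G) = sum_i p_i * ln(p_i / q_i); requires q_i > 0 wherever p_i > 0
d_fg = np.sum(p * np.log(p / q))

# scipy.stats.entropy(p, q) computes the same KL divergence (natural log)
print(d_fg, entropy(p, q))
```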
It is not a true distance, since in general $D(F,G) \neq D(G,F)$ and the triangle inequality fails, yet it provides an important measure of how far apart the two distributions are.
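Continuing with the same hypothetical point masses, swapping the arguments shows the asymmetry directly:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

d_fg = np.sum(p * np.log(p / q))  # D(F,G)
d_gf = np.sum(q * np.log(q / p))  # D(G,F)

# The two directions generally give different values, so D is not symmetric
print(d_fg, d_gf)
```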