
I am working on a problem where I have a dataset of $n$-dimensional examples drawn from several different, unknown distributions. Given a new sample, I want to find the $k$ examples in the dataset that come from the distribution(s) closest to that of the new sample. Which measure (Kullback–Leibler divergence or Hellinger distance) might be more suitable for this, and why?
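For concreteness, here is a minimal sketch of the two measures being compared, assuming the unknown distributions are approximated by discrete histograms (the estimation step itself, and the smoothing constant `eps`, are my assumptions, not part of the question):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) = sum_i p_i * log(p_i / q_i).

    Asymmetric and unbounded; blows up where q_i -> 0 while p_i > 0,
    so a small eps is added as a crude smoothing (an assumption here).
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def hellinger(p, q):
    """H(p, q) = (1/sqrt(2)) * || sqrt(p) - sqrt(q) ||_2.

    Symmetric, bounded in [0, 1], and a proper metric, which makes it
    convenient for nearest-neighbour-style queries.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))
```

With such estimates in hand, one could score each dataset example against the new sample and keep the $k$ smallest distances; the boundedness and symmetry of Hellinger versus the asymmetry of KL is exactly the trade-off the question asks about.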

  • This could help: https://stats.stackexchange.com/questions/296361/intuition-of-the-bhattacharya-coefficient-and-the-bhattacharya-distance/296604#296604 – kjetil b halvorsen Aug 31 '17 at 19:50
  • 1
    Does this answer your question? [Differences between Bhattacharyya distance and KL divergence](https://stats.stackexchange.com/questions/130432/differences-between-bhattacharyya-distance-and-kl-divergence) – Learning stats by example Sep 29 '20 at 01:54
