
There's some discussion of what the F-measure means. I understand that the $\beta$ parameter determines the weight of recall in the combined score. Specifically, one answer states that "for good models using the $F_{\beta}$ implies you consider false negatives $\beta^2$ times more costly than false positives." $\beta < 1$ lends more weight to precision, while $\beta > 1$ favors recall ($\beta \to 0$ considers only precision, $\beta \to +\infty$ only recall).

If you want to weight precision or recall higher than the other, how do you decide on $\beta$? I'm a bit unclear on the math behind the F-measure, so does $\beta = 0.5$ mean that precision is weighted 2x as much as recall?
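To make the weighting concrete, here is a minimal sketch of the standard $F_\beta$ formula (my own illustration, not tied to any particular library), which shows that recall carries $\beta^2$ times the weight of precision in the harmonic mean:

```python
def f_beta(precision, recall, beta):
    """F-beta score: weighted harmonic mean of precision and recall.

    Equivalent to 1 / (w_p / precision + w_r / recall) with weights
    w_p = 1 / (1 + beta**2) and w_r = beta**2 / (1 + beta**2),
    so recall carries beta**2 times the weight of precision.
    """
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With beta = 0.5, beta**2 = 0.25, so precision is weighted 4x recall.
# A precision-heavy classifier therefore scores better at beta = 0.5
# than at beta = 2:
print(f_beta(precision=0.9, recall=0.3, beta=0.5))  # leans toward precision
print(f_beta(precision=0.9, recall=0.3, beta=2.0))  # leans toward recall
```

So the answer to the question above is that $\beta = 0.5$ weights precision $1/\beta^2 = 4$ times as much as recall, not 2 times.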

Carl
skeller88
    From the $\beta^2$ factor, $\beta=0.5$ suggests that precision is weighted 4 times as much as recall, at least according to the one answer cited. – Carl Feb 29 '20 at 02:14

1 Answer


Don't use F scores at all. Every criticism of accuracy collected at Why is accuracy not the best measure for assessing classification models? applies completely equally to precision, recall and all F scores. Instead, use proper scoring rules.
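As one illustration of a proper scoring rule (my example, not taken from the answer), the Brier score evaluates the predicted probabilities themselves rather than thresholded class labels:

```python
def brier_score(probs, outcomes):
    """Brier score: mean squared error between predicted probabilities
    and binary 0/1 outcomes. Lower is better; it is a strictly proper
    scoring rule, minimized in expectation by the true probabilities.
    """
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

# Scoring probabilistic predictions directly, with no threshold involved:
print(brier_score([0.8, 0.2, 0.6], [1, 0, 1]))  # -> 0.08
```

Unlike F scores, this never forces a classification threshold on the model, which is the crux of the linked criticism of accuracy.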

Stephan Kolassa