In logistic regression, the binary cross-entropy (logistic loss) is defined as $$\ell (\boldsymbol{y}, \boldsymbol{\hat{y}}) = - \sum_{i=1}^n \left[ y_i \log \hat{y}_i + (1-y_i) \log (1-\hat{y}_i) \right].$$
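For concreteness, here is a minimal sketch of this loss in code, assuming NumPy arrays `y` (true labels in {0, 1}) and `y_hat` (predicted probabilities); the clipping constant `eps` is just an illustrative safeguard against `log(0)`:

```python
import numpy as np

def binary_cross_entropy(y, y_hat, eps=1e-15):
    # Clip predicted probabilities away from 0 and 1 to keep the logs finite.
    y_hat = np.clip(y_hat, eps, 1 - eps)
    # Sum over the n observations, matching the formula above
    # (some conventions report the mean instead of the sum).
    return -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
```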
I wonder why researchers do not report cross-entropy values computed on a test set in their papers. It could serve as a measure of the goodness of fit of the estimator.
I would like to report the cross-entropy, false-positive rate, false-negative rate, and F-score (the harmonic mean of precision and recall) computed on a test set, as in the sketch below.
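Here is roughly what I have in mind, sketched with scikit-learn on synthetic data (the dataset, split, and model settings are illustrative placeholders, not my actual setup):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss, f1_score, confusion_matrix

# Synthetic binary classification data, just for illustration.
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]   # predicted probabilities for class 1
pred = model.predict(X_test)                # hard 0/1 predictions

cross_entropy = log_loss(y_test, proba)     # note: log_loss returns the mean, not the sum
tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
fpr = fp / (fp + tn)                        # false-positive rate
fnr = fn / (fn + tp)                        # false-negative rate
f_score = f1_score(y_test, pred)            # harmonic mean of precision and recall
```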
Is there anything logically problematic with this approach?