How good is your classifier? Revisiting the role of evaluation metrics in machine learning

Published May 5, 2020, 21:47
With the increasing integration of machine learning into real systems, it is crucial that trained models are optimized to reflect real-world tradeoffs. Increasing interest in proper evaluation has led to a wide variety of metrics employed in practice, often specially designed by experts. However, modern training strategies have not kept up with the explosion of metrics, leaving practitioners to resort to heuristics.

To address this shortcoming, I will present a simple, yet consistent post-processing rule which improves the performance of trained binary, multilabel, and multioutput classifiers. Building on these results, I will propose a framework for metric elicitation, which addresses the broader question of how one might select an evaluation metric for real world problems so that it reflects true preferences.
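The abstract does not spell out the post-processing rule itself, but a common instance of metric-aware post-processing for a binary classifier is tuning the decision threshold on held-out data so that it maximizes the target metric (F1 here) rather than defaulting to 0.5. The sketch below is an illustrative assumption, not the method from the talk; `tune_threshold` and the synthetic validation data are hypothetical.

```python
import numpy as np

def tune_threshold(scores, labels):
    """Pick the decision threshold on validation scores that maximizes F1.

    scores: predicted probabilities from some trained classifier (hypothetical).
    labels: ground-truth binary labels for the same examples.
    """
    best_t, best_f1 = 0.5, -1.0
    for t in np.unique(scores):
        preds = (scores >= t).astype(int)
        tp = np.sum((preds == 1) & (labels == 1))
        fp = np.sum((preds == 1) & (labels == 0))
        fn = np.sum((preds == 0) & (labels == 1))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom > 0 else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

# Toy validation split: scores correlate with labels but are noisy,
# standing in for the output of a real probabilistic classifier.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
scores = np.clip(labels * 0.4 + rng.normal(0.3, 0.2, size=200), 0.0, 1.0)

t, f1 = tune_threshold(scores, labels)
```

For multilabel or multioutput problems the same idea applies per label (or jointly, for metrics that couple labels), which is where a consistent rule rather than per-label heuristics becomes valuable.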

See more at microsoft.com/en-us/research/v...