How good is your classifier? Revisiting the role of evaluation metrics in machine learning

Published May 5, 2020, 21:47
With the increasing integration of machine learning into real systems, it is crucial that trained models are optimized to reflect real-world tradeoffs. Growing interest in proper evaluation has led to a wide variety of metrics employed in practice, often specially designed by experts. However, modern training strategies have not kept up with this explosion of metrics, leaving practitioners to resort to heuristics.

To address this shortcoming, I will present a simple yet consistent post-processing rule that improves the performance of trained binary, multilabel, and multioutput classifiers. Building on these results, I will propose a framework for metric elicitation, which addresses the broader question of how one might select an evaluation metric for real-world problems so that it reflects true preferences.
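To illustrate the flavor of metric-aware post-processing (this is a generic sketch, not the speaker's exact method): a common rule of this kind keeps the trained probabilistic classifier fixed and tunes only its decision threshold on held-out data to maximize the target metric, rather than using the default 0.5 cutoff. The F1 metric, the toy scores, and the helper names below are all illustrative assumptions.

```python
# Illustrative sketch (not the talk's specific algorithm): post-process a
# trained scorer by choosing the decision threshold that maximizes a
# target metric (here F1) on held-out validation data.

def f1_at_threshold(scores, labels, t):
    """F1 score of the rule 'predict 1 iff score >= t'."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(scores, labels):
    """Search the observed scores as candidate thresholds for the best F1."""
    candidates = sorted(set(scores))
    return max(candidates, key=lambda t: f1_at_threshold(scores, labels, t))

# Toy imbalanced validation set: positives are rare, so the default 0.5
# threshold sacrifices recall and gives a lower F1 than a tuned one.
scores = [0.95, 0.7, 0.6, 0.45, 0.4, 0.35, 0.3, 0.2, 0.1, 0.05]
labels = [1,    1,   0,   1,    1,   0,    0,   0,   0,   0]

t = best_threshold(scores, labels)
print(t)                                        # tuned threshold
print(f1_at_threshold(scores, labels, t))       # F1 at tuned threshold
print(f1_at_threshold(scores, labels, 0.5))     # F1 at default threshold
```

The same idea extends beyond a single threshold: for multilabel or multioutput classifiers one can tune a per-label cutoff, which is why a consistent post-processing rule can help across all three settings mentioned above.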

See more at microsoft.com/en-us/research/v...