Explainable AI for Science and Medicine

Published May 21, 2019, 19:20
Understanding why a machine learning model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. Here I will present a unified approach to explain the output of any machine learning model. It connects game theory with local explanations, uniting many previous methods. I will then focus specifically on tree-based models, such as random forests and gradient boosted trees, for which we have developed the first polynomial-time algorithm to exactly compute classic attribution values from game theory. Based on these methods, we have created a new set of tools for understanding both global model structure and individual model predictions. These methods were motivated by specific problems we faced in medical machine learning, and they significantly improve doctor decision support during anesthesia. However, these explainable machine learning methods are not specific to medicine and are now used by researchers across many domains. The associated open source software (github.com/slundberg/shap) supports many modern machine learning frameworks and is widely used in industry (including at Microsoft).
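
As a rough illustration of the workflow the abstract describes, the sketch below explains a tree-based model using the shap package linked above. The xgboost model and the synthetic data are illustrative assumptions; only the shap calls (TreeExplainer, shap_values, summary_plot) reflect the library's documented interface.

```python
import numpy as np
import shap
import xgboost

# Toy regression data; a hypothetical stand-in for a real clinical dataset.
rng = np.random.RandomState(0)
X = rng.rand(500, 4)
y = X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(0, 0.1, 500)

model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# TreeExplainer implements the exact polynomial-time attribution algorithm
# for tree ensembles mentioned in the abstract.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local explanation: the per-feature attributions for one sample plus the
# base value sum to that sample's model output.
print(shap_values[0], explainer.expected_value)

# Global view: aggregate attributions across all samples.
shap.summary_plot(shap_values, X)
```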

See more at microsoft.com/en-us/research/v...