Description: |
Every day, more technologies and services are backed by complex machine-learned models that consume large amounts of data to provide a myriad of useful capabilities. While users are willing to provide personal data to enable these services, their trust in and engagement with these systems could be improved by offering insight into how the machine-learned decisions were made. Complex ML systems are highly effective, but many are black boxes that give no insight into how they arrive at their decisions. Moreover, those that do typically provide explanations at the model level rather than the instance level. In this work we present a method for deriving explanations for instance-level decisions in tree ensembles. As this family of models accounts for a large portion of industrial machine learning, this work opens up the possibility of transparent models at scale.