Some people aren't worth listening to: periodically retraining classifiers with feedback from a team of end users
Author: | Lockhart, Joshua; Assefa, Samuel; Balch, Tucker; Veloso, Manuela |
Publication Year: | 2020 |
Document Type: | Working Paper |
Description: | Document classification is ubiquitous in a business setting, but often the end users of a classifier are engaged in an ongoing feedback-retrain loop with the team that maintains it. We consider this feedback-retrain loop from a multi-agent point of view, treating the end users as autonomous agents that provide feedback on the labelled data produced by the classifier. This allows us to examine the effect on the classifier's performance of unreliable end users who provide incorrect feedback. We demonstrate a classifier that can learn which users tend to be unreliable, filtering their feedback out of the loop and thus improving performance in subsequent iterations. Comment: Presented at the 2019 ICML Workshop on AI in Finance: Applications and Infrastructure for Multi-Agent Learning. Long Beach, CA |
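The filtering mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' method: the agreement-with-majority reliability score and the fixed threshold are assumptions made for the example.

```python
from collections import Counter, defaultdict

def estimate_reliability(feedback):
    """Score each user by how often their label agrees with the
    per-document majority label across all users' feedback.

    feedback: list of (user_id, doc_id, label) tuples.
    Returns a dict mapping user_id -> agreement rate in [0, 1].
    """
    # Majority label per document, used as a proxy for the true label.
    votes = defaultdict(Counter)
    for user, doc, label in feedback:
        votes[doc][label] += 1
    majority = {doc: c.most_common(1)[0][0] for doc, c in votes.items()}

    agree, total = Counter(), Counter()
    for user, doc, label in feedback:
        total[user] += 1
        agree[user] += (label == majority[doc])
    return {u: agree[u] / total[u] for u in total}

def filter_feedback(feedback, threshold=0.5):
    """Drop feedback from users whose estimated reliability is below
    the threshold, before the next retraining iteration."""
    reliability = estimate_reliability(feedback)
    return [(u, d, y) for (u, d, y) in feedback if reliability[u] >= threshold]
```

In each iteration of the loop, the maintaining team would retrain the classifier only on the feedback that survives `filter_feedback`, so systematically incorrect users stop influencing subsequent models.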
Database: | arXiv |