Author:
Herbst, Alyssa; Huang, Bert
Year of Publication:
2019
Subject:

Document Type:
Working Paper
Description:
Annotating large unlabeled datasets can be a major bottleneck for machine learning applications. We introduce a scheme for inferring labels of unlabeled data at a fraction of the cost of labeling the entire dataset. Our scheme, bounded expectation of label assignment (BELA), greedily queries an oracle (or human labeler) and partitions a dataset to find data subsets that have mostly the same label. BELA can then infer labels by majority vote of the known labels in each subset. BELA determines whether to split or label from a subset by maximizing a lower bound on the expected number of correctly labeled examples. Our approach differs from existing hierarchical labeling schemes by using supervised models to partition the data, therefore avoiding reliance on unsupervised clustering methods that may not accurately group data by label. We design BELA with strategies to avoid bias that could be introduced through this adaptive partitioning. We evaluate BELA on three datasets and find that it outperforms existing strategies for adaptive labeling.
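The core inference step described above, taking a majority vote over the queried labels within each subset and propagating it to that subset's unlabeled examples, can be sketched as follows. This is an illustrative reading of the abstract, not code from the paper; the data layout and function name are assumptions.

```python
from collections import Counter

def majority_vote_infer(subsets):
    """Infer labels for unlabeled items by majority vote per subset.

    `subsets` is a hypothetical representation of a BELA-style partition:
    each entry holds `queried` pairs of (item, oracle label) and a list of
    `unlabeled` items that share the subset.
    """
    inferred = {}
    for subset in subsets:
        known = [label for _, label in subset["queried"]]
        if not known:
            # No oracle labels in this subset yet; nothing to infer.
            continue
        # Most common queried label becomes the label for the whole subset.
        majority = Counter(known).most_common(1)[0][0]
        for item in subset["unlabeled"]:
            inferred[item] = majority
    return inferred
```

The split-or-label decision itself (maximizing a lower bound on the expected number of correct labels) is not modeled here; this only shows how labels would be filled in once the partition is fixed.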
Database:
arXiv
External Link:
