Outlier Detection for Improved Data Quality and Diversity in Dialog Systems
Author: Jason Mars, Johann Hauswald, Parker Hill, Michael A. Laurenzano, Stefan Larson, Lingjia Tang, Jonathan K. Kummerfeld, Andrew Lee, Anish Mahendran
Language: English
Year of publication: 2019
Subject: FOS: Computer and information sciences; Computation and Language (cs.CL); data collection; data quality; outlier detection; anomaly detection; data mining; robustness; pattern recognition; dialog systems
Source: NAACL-HLT (1)
Description: In a corpus of data, outliers are either errors (counterproductive mistakes in the data) or unique, informative samples that improve model robustness. Identifying outliers can lead to better datasets by (1) removing noise and (2) guiding collection of additional data to fill gaps. However, the problem of detecting both outlier types has received relatively little attention in NLP, particularly for dialog systems. We introduce a simple and effective technique for detecting both erroneous and unique samples in a corpus of short texts using neural sentence embeddings combined with distance-based outlier detection. We also present a novel data collection pipeline built atop our detection technique to automatically and iteratively mine unique data samples while discarding erroneous samples. Experiments show that our outlier detection technique is effective at finding errors, while our data collection pipeline yields highly diverse corpora that in turn produce more robust intent classification and slot-filling models. Accepted as a long paper at NAACL 2019. (A minimal sketch of the distance-based ranking appears below the record.)
Database: OpenAIRE
External link:
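
The description mentions distance-based outlier detection over neural sentence embeddings but does not spell out the scoring rule. A minimal sketch, assuming a mean-embedding (centroid) distance score and a generic sentence encoder (both assumptions, not details taken from this record):

```python
import numpy as np

def rank_outliers(embeddings: np.ndarray) -> np.ndarray:
    """Rank samples from most to least outlying by distance to the corpus centroid.

    embeddings: (n_samples, dim) matrix of neural sentence embeddings,
    produced by any fixed-size sentence encoder (encoder choice is an assumption here).
    """
    centroid = embeddings.mean(axis=0)                          # mean embedding of the corpus
    distances = np.linalg.norm(embeddings - centroid, axis=1)   # Euclidean distance to the centroid
    return np.argsort(-distances)                               # indices of the furthest samples first

# Hypothetical usage: encode() stands in for whichever sentence encoder is used.
# order = rank_outliers(encode(utterances))
# The highest-ranked samples are then reviewed: discarded if erroneous, or kept
# (and used to guide further collection) if they are unique, diversity-adding examples.
```

The review step in the usage comment mirrors, in simplified form, the iterative loop the description sketches: detect candidate outliers, filter out errors, and target new data collection at the remaining gaps.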