Author:
Peskov, Denis; Barrow, Joe; Rodriguez, Pedro; Neubig, Graham; Boyd-Graber, Jordan
Year of publication:
2019
Subject:
Document type:
Working Paper
Description:
Natural language processing systems are often downstream of unreliable inputs: machine translation, optical character recognition, or speech recognition. For instance, virtual assistants can only answer your questions after understanding your speech. We investigate and mitigate the effects of noise from Automatic Speech Recognition (ASR) systems on two factoid Question Answering (QA) tasks. Integrating confidences into the model and forced decoding of unknown words are empirically shown to improve the accuracy of downstream neural QA systems. We create and train models on a synthetic corpus of over 500,000 noisy sentences and evaluate on two human corpora from Quizbowl and Jeopardy! competitions. A hedged code sketch of the confidence-integration idea follows this record.
Database:
arXiv
External link:
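The sketch below is not the authors' code; it is a minimal illustration, assuming PyTorch, of the two ideas named in the description: appending a per-word ASR confidence score to each word embedding, and mapping misrecognized or out-of-vocabulary words to an unknown-word placeholder. The class name ConfidenceAwareEncoder, the vocabulary size, and all dimensions are hypothetical choices made only for this example.

# Illustrative sketch (not the paper's implementation): expose per-word ASR
# confidence scores to a neural QA encoder by concatenating each confidence
# to its word embedding. Index 0 stands in for <unk>, i.e. words the ASR
# system could not decode reliably.

import torch
import torch.nn as nn


class ConfidenceAwareEncoder(nn.Module):
    """Encodes a noisy (ASR-transcribed) question together with per-word confidences."""

    def __init__(self, vocab_size: int = 10_000, embed_dim: int = 100, hidden_dim: int = 128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # +1 because one ASR confidence value is appended to each word embedding.
        self.rnn = nn.GRU(embed_dim + 1, hidden_dim, batch_first=True)

    def forward(self, token_ids: torch.Tensor, confidences: torch.Tensor) -> torch.Tensor:
        # token_ids:    (batch, seq_len) word indices from the ASR transcript
        # confidences:  (batch, seq_len) per-word ASR confidence scores in [0, 1]
        embedded = self.embedding(token_ids)                     # (batch, seq_len, embed_dim)
        conf_feature = confidences.unsqueeze(-1)                 # (batch, seq_len, 1)
        augmented = torch.cat([embedded, conf_feature], dim=-1)  # append confidence per word
        _, final_state = self.rnn(augmented)
        return final_state.squeeze(0)                            # (batch, hidden_dim) question vector


if __name__ == "__main__":
    encoder = ConfidenceAwareEncoder()
    tokens = torch.tensor([[12, 0, 87, 5]])           # 0 = <unk> for a misrecognized word
    confs = torch.tensor([[0.95, 0.31, 0.88, 0.99]])  # low confidence on the <unk> token
    question_vec = encoder(tokens, confs)
    print(question_vec.shape)  # torch.Size([1, 128])

Concatenation is only one simple way to inject the confidence signal; the score could just as well scale or gate the embedding. The sketch is meant to show where such a signal enters a downstream QA encoder, not to reproduce the models evaluated in the paper.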