Measuring Human Readability of Machine Generated Text: Three Case Studies in Speech Recognition and Machine Translation
Authors: | N. Granoien, M. Herzog, D.A. Reynolds, Douglas L. Jones, Clifford J. Weinstein, Edward Gibson, Wade Shen |
---|---|
Year: | 2006 |
Subject: |
Machine translation, Arabic, Computer science, Speech recognition, Psycholinguistics, Readability, Reading (process), Language proficiency, Artificial intelligence, Natural language processing |
Source: | ICASSP (5) |
DOI: | 10.1109/icassp.2005.1416477 |
Description: | We present highlights from three experiments that test the readability of current state-of-the-art system output from: (1) an automated English speech-to-text (STT) system; (2) a text-based Arabic-to-English machine translation (MT) system; and (3) an audio-based Arabic-to-English MT process. We measure readability in terms of reaction time and passage comprehension in each case, applying standard psycholinguistic testing procedures and a modified version of the standard defense language proficiency test for Arabic called the DLPT*. We learned that: (1) subjects are slowed down by about 25% when reading STT system output; (2) text-based MT systems enable an English speaker to pass Arabic Level 2 on the DLPT*; and (3) audio-based MT systems do not enable English speakers to pass Arabic Level 2. We intend for these generic measures of readability to predict performance of more application-specific tasks. |
Database: | OpenAIRE |
External link: |