Unsupervised language identification based on Latent Dirichlet Allocation
Author: | Wen Li, Robert A. J. Clark, Wei Zhang, Yongyuan Wang |
Year of publication: | 2016 |
Subject: |
Language identification, Latent Dirichlet allocation, Hierarchical Dirichlet process, Topic model, Text corpus, Generative model, Gibbs sampling, Machine learning, Natural language processing, Artificial intelligence |
Source: | Computer Speech & Language. 39:47-66 |
ISSN: | 0885-2308 |
DOI: | 10.1016/j.csl.2016.02.001 |
Description: |
Highlights: An unsupervised language identification approach based on Latent Dirichlet Allocation (LDA-LI) with high precision, recall and F-scores. Raw n-gram counts are used as features without any smoothing, pruning or interpolation. The approach purifies the main language from an unknown number of other languages with high precision, and identifies the measure most closely related to the minimum number of topics.
To automatically build, from scratch, the language processing component of a speech synthesis system in a new language, a purified text corpus is needed in which any words and phrases from other languages are clearly identified or excluded. When using found data, with no inherent linguistic knowledge of the language or languages contained in the data, identifying the pure data is a difficult problem. We propose an unsupervised language identification approach based on Latent Dirichlet Allocation in which raw n-gram counts are taken as features without any smoothing, pruning or interpolation. The Latent Dirichlet Allocation topic model is reformulated for the language identification task, and Collapsed Gibbs Sampling is used to train an unsupervised language identification model. To find the number of languages present, we compared four kinds of measure, as well as the Hierarchical Dirichlet process, on several configurations of the ECI/MCI benchmark.
Experiments on the ECI/MCI data and a Wikipedia-based Swahili corpus show that this LDA method, without any annotation, has precision, recall and F-scores comparable to state-of-the-art supervised language identification techniques (e.g. langid.py and guess_language). |
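The description outlines the method's core loop: character n-grams as features, the LDA topic model reinterpreted so that each topic corresponds to a language, and collapsed Gibbs sampling for inference. A minimal sketch of that idea is below; the toy corpus, hyperparameters, and function names are illustrative assumptions, not the paper's actual configuration or data.

```python
import random

def char_ngrams(text, n=3):
    """Tokenise a string into overlapping character n-grams."""
    s = text.replace(" ", "_")
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def lda_language_id(docs, n_topics=2, n=3, alpha=0.5, beta=0.1,
                    iters=500, seed=0):
    """Cluster documents by language with collapsed Gibbs sampling for LDA.

    Each topic is interpreted as a language; raw n-gram counts are the
    features, with no smoothing, pruning or interpolation.
    """
    rng = random.Random(seed)
    tokens = [char_ngrams(d, n) for d in docs]
    vocab = sorted({w for doc in tokens for w in doc})
    w2i = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)

    # Count tables maintained by the collapsed sampler.
    n_dk = [[0] * n_topics for _ in docs]        # topic counts per document
    n_kw = [[0] * V for _ in range(n_topics)]    # n-gram counts per topic
    n_k = [0] * n_topics                         # total tokens per topic
    z = []                                       # topic assignment per token
    for d, doc in enumerate(tokens):
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)          # random initialisation
            zd.append(k)
            n_dk[d][k] += 1
            n_kw[k][w2i[w]] += 1
            n_k[k] += 1
        z.append(zd)

    for _ in range(iters):
        for d, doc in enumerate(tokens):
            for i, w in enumerate(doc):
                k, wi = z[d][i], w2i[w]
                # Remove the current assignment from the counts.
                n_dk[d][k] -= 1; n_kw[k][wi] -= 1; n_k[k] -= 1
                # Full conditional: p(z=k | rest) ∝
                #   (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta)
                weights = [(n_dk[d][t] + alpha) * (n_kw[t][wi] + beta)
                           / (n_k[t] + V * beta) for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                n_dk[d][k] += 1; n_kw[k][wi] += 1; n_k[k] += 1

    # Label each document with its dominant topic (= language cluster).
    return [max(range(n_topics), key=lambda t: n_dk[d][t])
            for d in range(len(docs))]

# Toy usage: two Swahili-like and two English documents.
docs = [
    "jambo habari rafiki jambo habari",
    "habari jambo rafiki habari jambo",
    "hello friend morning hello friend",
    "friend hello morning friend hello",
]
labels = lda_language_id(docs)  # one cluster id per document
```

Because the sampler is fully collapsed, only the count tables are stored; the per-document language proportions and per-language n-gram distributions are integrated out, which is what makes raw, unsmoothed counts workable here.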
Database: | OpenAIRE |
External link: |