Training Hybrid Language Models by Marginalizing over Segmentations
Authors: Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, Armand Joulin
Year: 2019
Subject: Sequence; Computer science; Computational Linguistics and Natural Language Processing; Pattern recognition; Language model; Artificial intelligence; String (computer science); Character (mathematics)
Source: ACL (1)
DOI: 10.18653/v1/p19-1143
Description: In this paper, we study the problem of hybrid language modeling, that is, using models which can predict both characters and larger units such as character n-grams or words. With such models, multiple potential segmentations usually exist for a given string, for example one using words and one using characters only. Thus, the probability of a string is the sum of the probabilities of all its possible segmentations. Here, we show how to marginalize over the segmentations efficiently, in order to compute the true probability of a sequence. We apply our technique to three datasets, comprising seven languages, showing improvements over a strong character-level language model.
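The marginalization described above can be computed with a forward dynamic program over string prefixes, analogous to the forward algorithm for HMMs. The sketch below is an illustration only, not the paper's implementation: `segment_log_prob` is a hypothetical scorer standing in for the model's conditional log-probability of a unit given its prefix, and `max_len` caps the unit length.

```python
import math

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == float("-inf"):
        return b
    if b == float("-inf"):
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def marginal_log_prob(s, max_len, segment_log_prob):
    """Log-probability of string s summed over ALL segmentations into
    units of length 1..max_len, via dynamic programming in O(n * max_len)
    scorer calls. segment_log_prob(prefix, unit) is a hypothetical
    interface: a real model would condition on the full history."""
    n = len(s)
    # alpha[i] = log total probability of all segmentations of s[:i]
    alpha = [float("-inf")] * (n + 1)
    alpha[0] = 0.0  # empty prefix has probability 1
    for i in range(1, n + 1):
        for k in range(1, min(max_len, i) + 1):
            unit = s[i - k:i]
            cand = alpha[i - k] + segment_log_prob(s[:i - k], unit)
            alpha[i] = logaddexp(alpha[i], cand)
    return alpha[n]

# Toy scorer: every 1-character unit has probability 0.5,
# every 2-character unit has probability 0.25 (made up for illustration).
toy = lambda prefix, unit: math.log(0.5 if len(unit) == 1 else 0.25)

# For "ab": P([a][b]) + P([ab]) = 0.5*0.5 + 0.25 = 0.5
print(math.exp(marginal_log_prob("ab", 2, toy)))  # → 0.5
```

Because each `alpha[i]` reuses the totals for shorter prefixes, the exponentially many segmentations are summed without being enumerated, which is what makes training with the true (marginal) sequence probability tractable.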
Database: OpenAIRE
External link: