Showing 1 - 10 of 254 for search: '"Eiichiro Sumita"'
Published in:
IEEE Access, Vol 10, Pp 92467-92480 (2022)
The generation of music lyrics by artificial intelligence (AI) is frequently modeled as a language-targeted sequence-to-sequence generation task. Formally, if we convert the melody into a word sequence, we can consider the lyrics generation task to b
External link:
https://doaj.org/article/ef6b6b02aa924437889d74179c6c623f
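The record above frames lyrics generation as a sequence-to-sequence task once the melody is serialized into a word-like token sequence. A minimal sketch of that framing, assuming a hypothetical PITCH_/DUR_ token scheme (illustrative only, not the paper's actual encoding or model):

```python
# Illustrative sketch (not the paper's code): serialize a melody into a
# "word" sequence so lyrics generation can be treated as translation
# from melody tokens to lyric tokens.

def melody_to_tokens(notes):
    """Convert (pitch, duration_in_beats) note events into discrete tokens.

    The PITCH_/DUR_ token scheme here is a hypothetical example; the
    paper's actual melody encoding may differ.
    """
    tokens = []
    for pitch, duration in notes:
        tokens.append(f"PITCH_{pitch}")            # e.g. MIDI pitch number
        tokens.append(f"DUR_{int(duration * 4)}")  # quantize to 16th notes
    return tokens

# A short melodic phrase becomes the "source sentence".
melody = [(60, 1.0), (62, 0.5), (64, 0.5), (65, 2.0)]
source = melody_to_tokens(melody)
print(source)
# ['PITCH_60', 'DUR_4', 'PITCH_62', 'DUR_2', ...]
# Any encoder-decoder model trained on melody-token -> lyric-token pairs
# can then generate the lyrics from this sequence.
```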
Published in:
IEEE Access, Vol 9, Pp 141571-141578 (2021)
Cross-lingual transfer is an important technique for low-resource language processing. Currently, most research on syntactic parsing focuses on dependency structures. This work investigates cross-lingual parsing on another type of important synta
External link:
https://doaj.org/article/6d88a42897eb4f50b2e3462f4fb5173d
Published in:
IEEE Transactions on Artificial Intelligence. 3:518-525
Published in:
IEEE/ACM Transactions on Audio, Speech, and Language Processing. 30:330-339
Published in:
Journal of Natural Language Processing. 29:748-753
Published in:
Journal of Natural Language Processing. 29:1254-1271
Representation learning is the foundation of natural language processing (NLP). This work presents new methods to employ visual information as assistant signals for general NLP tasks. For each sentence, we first retrieve a flexible number of images ei
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::65bb971b0d08c37a6fc2646fa1359b59
http://arxiv.org/abs/2301.03344
Authors:
Hour Kaing, Sethserey Sam, Katsuhito Sudoh, Chenchen Ding, Eiichiro Sumita, Satoshi Nakamura, Masao Utiyama, Sopheap Seng
Published in:
ACM Transactions on Asian and Low-Resource Language Information Processing. 20:1-16
As a highly analytic language, Khmer has considerable ambiguities in tokenization and part-of-speech (POS) tagging. These issues are investigated in this study. Specifically, a 20,000-sentence Khmer corpus with manual tokenization and POS-tag
Published in:
Neurocomputing. 451:46-56
In self-attention networks (SANs), positional embeddings are used to model order dependencies between words in the input sentence and are added to word embeddings to obtain an input representation, which enables the SAN-based neural model to perform
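The abstract above describes the standard additive scheme: positional embeddings are summed with word embeddings to form the input representation of a self-attention network. A minimal sketch of that baseline scheme, using the well-known sinusoidal embeddings of the original Transformer (this is the common setup the paper builds on, not the modification it proposes):

```python
import numpy as np

def sinusoidal_positional_embeddings(seq_len, d_model):
    """Standard sinusoidal positional embeddings (Vaswani et al., 2017)."""
    positions = np.arange(seq_len)[:, None]   # (seq_len, 1)
    dims = np.arange(d_model)[None, :]        # (1, d_model)
    angle_rates = 1.0 / np.power(10000, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])     # even dimensions: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])     # odd dimensions: cosine
    return pe

# Toy word embeddings for a 5-token sentence; in a real SAN these come
# from a learned embedding table.
seq_len, d_model = 5, 16
word_embeddings = np.random.randn(seq_len, d_model)

# The input representation fed to the self-attention layers is the
# element-wise sum of word and positional embeddings.
input_representation = word_embeddings + sinusoidal_positional_embeddings(seq_len, d_model)
print(input_representation.shape)  # (5, 16)
```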
Published in:
2022 25th Conference of the Oriental COCOSDA International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques (O-COCOSDA).