Showing 1 - 9 of 9 for search: '"Taesun Whang"'
Published in:
Applied Sciences, Vol 11, Iss 7, p 3009 (2021)
Visual dialog is a challenging vision-language task in which a series of questions visually grounded by a given image is answered. To resolve the visual dialog task, a high-level understanding of various multimodal inputs (e.g., question, dialog history) … [illustrative sketch below]
External link:
https://doaj.org/article/6eff0a83663e4ca9b8c37d5bbf44b9bd
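The abstract above describes answering a sequence of image-grounded questions from several multimodal inputs (question, dialog history, image). Purely as an illustration, and not the paper's model or data schema, the following Python sketch shows one plausible way to represent a single visual dialog example and flatten its textual parts for a text encoder; every field name and value here is an assumption.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VisualDialogExample:
    image_id: str                         # reference to the grounding image (hypothetical id)
    caption: str                          # image caption, commonly used as the first context turn
    history: List[Tuple[str, str]] = field(default_factory=list)  # past (question, answer) pairs
    question: str = ""                    # current question to answer

    def flat_context(self) -> str:
        # Flatten caption + history + current question into one sequence,
        # the way a text-side encoder might consume the dialog.
        turns = [self.caption] + [f"{q} {a}" for q, a in self.history]
        return " [SEP] ".join(turns + [self.question])

example = VisualDialogExample(
    image_id="IMG_000001",
    caption="a man riding a bicycle on a city street",
    history=[("is it daytime?", "yes"), ("is he wearing a helmet?", "no")],
    question="what color is the bicycle?",
)
print(example.flat_context())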
Published in:
Applied Sciences, Vol 11, Iss 5, p 1974 (2021)
Language model pretraining is an effective method for improving the performance of downstream natural language processing tasks. Even though language modeling is unsupervised and thus collecting data for it is relatively less expensive, it is still a … [illustrative sketch below]
External link:
https://doaj.org/article/07915c1abb344684979ca753504ddd5d
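The abstract above concerns language model pretraining on unlabeled text. The sketch below is a generic masked-language-modeling example with the Hugging Face transformers library, shown only to illustrate the pretraining objective; it is not the paper's method, and the checkpoint name, example sentences, and masking probability are placeholder assumptions.

import torch
from transformers import BertTokenizerFast, BertForMaskedLM, DataCollatorForLanguageModeling

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")   # placeholder checkpoint
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

texts = [
    "language model pretraining improves downstream tasks.",
    "collecting unlabeled text is relatively inexpensive.",
]
encodings = [tokenizer(t, truncation=True, max_length=64) for t in texts]

# Randomly mask 15% of the tokens and build labels for the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
batch = collator(encodings)

outputs = model(**batch)      # loss is cross-entropy over the masked positions
outputs.loss.backward()       # backward pass for one illustrative training step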
Published in:
Journal of KIISE. 48:154-159
Published in:
Proceedings of the Second Workshop on Insights from Negative Results in NLP.
Published in:
Proceedings of the Third Workshop on New Frontiers in Summarization.
Author:
Dongyub Lee, Taesun Whang, Myeong Cheol Shin, Seungwoo Cho, Daniel Lee, Byeongil Ko, Jaechoon Jo, EungGyun Kim
Published in:
COLING
Text summarization refers to the process of generating a shorter form of text from a source document while preserving salient information. Many existing works on text summarization are evaluated using Recall-Oriented Understudy for Gisting Evaluation (ROUGE) …
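The COLING entry above notes that summarization systems are commonly evaluated with ROUGE. As a small, generic illustration (not the paper's evaluation setup), the following sketch scores a candidate summary against a reference using the rouge-score package; the two example texts are invented.

from rouge_score import rouge_scorer   # pip install rouge-score

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "the system generates a shorter summary that preserves the salient information"
prediction = "the system produces a short summary preserving salient information"

scores = scorer.score(reference, prediction)
for name, s in scores.items():
    print(f"{name}: precision={s.precision:.3f} recall={s.recall:.3f} f1={s.fmeasure:.3f}")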
Published in:
INTERSPEECH
We focus on multi-turn response selection in a retrieval-based dialog system. In this paper, we utilize the powerful pre-trained language model Bidirectional Encoder Representations from Transformers (BERT) for a multi-turn dialog system and propose … [illustrative sketch below]
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::9796b62e83eb6216f127f60356c3665e
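The INTERSPEECH entry above applies BERT to multi-turn response selection. The sketch below is a generic cross-encoder formulation (not the paper's proposed model): the flattened dialog context is paired with each candidate response and scored with a sequence-classification head. The checkpoint name, [SEP]-joined context, and candidate responses are placeholder assumptions; a real system would first fine-tune the head on labeled context-response pairs.

import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")   # placeholder checkpoint
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

context = ("hi, can you recommend a restaurant? [SEP] "
           "sure, what kind of food do you like? [SEP] something spicy")
candidates = [
    "there is a great thai place around the corner",
    "the weather should be sunny tomorrow",
]

with torch.no_grad():
    # Encode (context, candidate) pairs jointly, as a cross-encoder does.
    inputs = tokenizer([context] * len(candidates), candidates,
                       padding=True, truncation=True, return_tensors="pt")
    logits = model(**inputs).logits            # shape: (num_candidates, 2)
    scores = logits.softmax(dim=-1)[:, 1]      # probability of the "relevant" class

best = int(scores.argmax())
print(candidates[best], float(scores[best]))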
Published in:
Applied Sciences, Vol 11, Iss 5, p 1974 (2021)
Language model pretraining is an effective method for improving the performance of downstream natural language processing tasks. Even though language modeling is unsupervised and thus collecting data for it is relatively less expensive, it is still a …
Published in:
Scopus-Elsevier
Existing works on aspect-based sentiment analysis (ABSA) have adopted a unified approach, which allows interactive relations among subtasks. However, we observe that these methods tend to predict polarities based on the literal meaning of aspect … [illustrative sketch below]
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7b5fb8a656f70d901b1309f7ce83f88e
http://www.scopus.com/inward/record.url?eid=2-s2.0-85121107725&partnerID=MN8TOARS
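The Scopus entry above concerns aspect-based sentiment analysis, where a model jointly extracts aspect terms and predicts a polarity for each. The sketch below only illustrates that unified input/output structure with hand-written labels; it is not the paper's method, and the example sentence and labels are invented.

from dataclasses import dataclass
from typing import List

@dataclass
class AspectPrediction:
    aspect: str       # aspect term span found in the sentence
    polarity: str     # "positive", "negative", or "neutral"

def unified_absa(sentence: str) -> List[AspectPrediction]:
    # Stand-in for a joint extraction + classification model: a real unified
    # ABSA system would tag aspect spans and classify each one in a single
    # forward pass; here we simply look up hand-written gold labels.
    gold = {
        "the battery life is great but the screen scratches easily": [
            AspectPrediction("battery life", "positive"),
            AspectPrediction("screen", "negative"),
        ],
    }
    return gold.get(sentence, [])

print(unified_absa("the battery life is great but the screen scratches easily"))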