Showing 1 - 10 of 11,641 results for search: '"Seq2Seq"'
Author:
Putra, Gregorius Guntur Sunardi, D'Layla, Adifa Widyadhani Chanda, Wahono, Dimas, Sarno, Riyanarto, Haryono, Agus Tri
Sign language translation is one of the important issues in communication between deaf and hearing people, as it expresses words through hand, body, and mouth movements. American Sign Language is one of the sign languages used, one of which is the al…
External link:
http://arxiv.org/abs/2409.10874
Autoregressive Sequence-To-Sequence models are the foundation of many Deep Learning achievements in major research fields such as Vision and Natural Language Processing. Despite that, they still present significant limitations. For instance, when err…
External link:
http://arxiv.org/abs/2408.13959
Author:
Gao, Ge, Kim, Jongin, Paik, Sejin, Novozhilova, Ekaterina, Liu, Yi, Bonna, Sarah T., Betke, Margrit, Wijaya, Derry Tanti
Published in:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) 5944-5955
Predicting emotions elicited by news headlines can be challenging, as the task is largely influenced by the varying nature of people's interpretations and backgrounds. Previous works have explored classifying discrete emotions directly from news headl…
External link:
http://arxiv.org/abs/2407.10091
Academic article
This result cannot be displayed to unauthenticated users. Sign in to view this result.
Academic article
This result cannot be displayed to unauthenticated users. Sign in to view this result.
Traditional deep learning methods struggle to simultaneously segment, recognize, and forecast human activities from sensor data. This limits their usefulness in many fields such as healthcare and assisted living, where real-time understanding of ongo…
External link:
http://arxiv.org/abs/2403.08214
Author:
Liu, Yanming, Peng, Xinyue, Bo, Shi, Sang, Ningjing, Yan, Yafeng, Ke, Xiaolan, Zheng, Zhiting, Liu, Shaobo, Deng, Songhang, Cao, Jiannan, Dai, Le, Liu, Xingzu, Nong, Ruilin, Liu, Weihao
Large language models (LLMs) have shown strong performance on various tasks and question answering. However, LLMs require substantial memory storage on low-resource devices. More critically, the computational speed on these devices is also seve…
External link:
http://arxiv.org/abs/2403.07088
Author:
Zhou, Yuzhong, Lin, Zhengping, Wu, Zhengrong, Zhang, Zifeng
Published in:
Journal of Intelligent & Fuzzy Systems. 2024, Vol. 46 Issue 3, p6939-6950. 12p.
Author:
Zhou, Houquan, Liu, Yumeng, Li, Zhenghua, Zhang, Min, Zhang, Bo, Li, Chen, Zhang, Ji, Huang, Fei
The sequence-to-sequence (Seq2Seq) approach has recently been widely used in grammatical error correction (GEC) and shows promising performance. However, the Seq2Seq GEC approach still suffers from two issues. First, a Seq2Seq GEC model can only be t…
External link:
http://arxiv.org/abs/2310.14534
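To make the Seq2Seq GEC framing mentioned in this abstract concrete, here is a minimal sketch of correcting a sentence by rewriting it with a pretrained encoder-decoder model. The checkpoint name, the helper function, and the generation settings are illustrative assumptions, not the setup used in the paper.

```python
# Minimal sketch: grammatical error correction framed as Seq2Seq text rewriting.
# The checkpoint name below is a placeholder for any Seq2Seq model fine-tuned
# for GEC; it is not the model proposed in the cited paper.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "your-gec-checkpoint"  # placeholder checkpoint (assumption)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def correct(sentence: str) -> str:
    """Rewrite a possibly ungrammatical sentence into a corrected form."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    # Beam search over the decoder produces the corrected sequence token by token.
    outputs = model.generate(**inputs, num_beams=4, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(correct("She go to school every days ."))
```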
Existing works on coreference resolution suggest that task-specific models are necessary to achieve state-of-the-art performance. In this work, we present compelling evidence that such models are not necessary. We finetune a pretrained seq2seq transf…
External link:
http://arxiv.org/abs/2310.13774
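The abstract above describes finetuning a pretrained seq2seq transformer for coreference resolution. A minimal sketch of that idea is to linearize coreference clusters into a tagged target string and train with the standard seq2seq loss; the tagging scheme, base checkpoint, and example below are illustrative assumptions, not the paper's exact format.

```python
# Minimal sketch: coreference resolution cast as sequence-to-sequence generation.
# The target is the input text with mentions wrapped in cluster tags; this
# linearization and the "coref:" prefix are assumptions for illustration only.
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

source = "coref: Alice met Bob . She greeted him ."
# Hypothetical scheme: <cN> ... </cN> marks a mention belonging to cluster N.
# (In practice the tags would be registered as special tokens in the vocabulary.)
target = "<c1> Alice </c1> met <c2> Bob </c2> . <c1> She </c1> greeted <c2> him </c2> ."

enc = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# Standard seq2seq training step: cross-entropy between decoder output and target.
loss = model(**enc, labels=labels).loss
loss.backward()
print(float(loss))
```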