Showing 1 - 10 of 28 for search: '"Hu, Jia Cheng"'
Image Captioning is an important Language and Vision task that finds application in a variety of contexts, ranging from healthcare to autonomous vehicles. As many real-world applications rely on devices with limited resources, much effort in the field…
External link:
http://arxiv.org/abs/2408.13963
Autoregressive Sequence-To-Sequence models are the foundation of many Deep Learning achievements in major research fields such as Vision and Natural Language Processing. Despite that, they still present significant limitations. For instance, when errors…
External link:
http://arxiv.org/abs/2408.13959
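A minimal, hypothetical sketch of the kind of limitation the abstract above alludes to: in greedy autoregressive decoding, every generated token becomes context for the next step, so an early mistake cannot be revised. The toy score table, vocabulary, and helper below are illustrative stand-ins, not the model from the paper.

```python
# Sketch of greedy autoregressive decoding with a toy "model": a lookup table
# from contexts to next-token scores. Names and scores are made up for the example.

TOY_NEXT_TOKEN_SCORES = {
    # context (tuple of tokens) -> {candidate next token: score}
    ("<bos>",): {"a": 0.6, "the": 0.4},
    ("<bos>", "a"): {"dog": 0.7, "cat": 0.3},
    ("<bos>", "the"): {"cat": 0.9, "dog": 0.1},
    ("<bos>", "a", "dog"): {"runs": 0.8, "<eos>": 0.2},
    ("<bos>", "the", "cat"): {"sleeps": 0.8, "<eos>": 0.2},
}


def greedy_decode(start=("<bos>",), max_len=5):
    """Pick the highest-scoring next token and feed it back as context."""
    tokens = list(start)
    for _ in range(max_len):
        scores = TOY_NEXT_TOKEN_SCORES.get(tuple(tokens))
        if scores is None:
            break
        next_token = max(scores, key=scores.get)
        tokens.append(next_token)
        if next_token == "<eos>":
            break
    return tokens


if __name__ == "__main__":
    print(greedy_decode())                        # follows the '... a dog ...' branch
    # An early error is never revisited: committing to the wrong first token
    # forces every later step onto that branch.
    print(greedy_decode(start=("<bos>", "the")))  # follows the '... the cat ...' branch
```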
Although the Transformer is currently the best-performing architecture in the homogeneous configuration (self-attention only) in Neural Machine Translation, many State-of-the-Art models in Natural Language Processing are made of a combination of different…
External link:
http://arxiv.org/abs/2312.15872
The Image Captioning research field is currently compromised by the lack of transparency and awareness over the End-of-Sequence token (<Eos>) in the Self-Critical Sequence Training. If the <Eos> token is omitted, a model can boost its performance up to…
External link:
http://arxiv.org/abs/2305.12254
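As a rough illustration of the ambiguity the abstract above points to, the sketch below computes a Self-Critical Sequence Training style reward (sampled-caption score minus greedy-baseline score) with and without the end-of-sequence token included in the scored sequences. The metric is a toy token-overlap F1 rather than CIDEr, and the captions and token names are invented for the example, not taken from the paper.

```python
# Hedged sketch: how including or stripping <eos> before scoring changes an
# SCST-style reward. The metric below is a stand-in, not CIDEr.

def f1_overlap(candidate, reference):
    """Toy stand-in for a captioning metric such as CIDEr."""
    cand, ref = set(candidate), set(reference)
    if not cand or not ref:
        return 0.0
    tp = len(cand & ref)
    precision = tp / len(cand)
    recall = tp / len(ref)
    return 0.0 if tp == 0 else 2 * precision * recall / (precision + recall)


def scst_reward(sampled, greedy, reference, keep_eos=True):
    """SCST-style reward: metric(sampled) - metric(greedy baseline).

    When keep_eos is False, the <eos> token is stripped before scoring,
    which is the convention difference the abstract above asks to clarify.
    """
    def prepare(tokens):
        return tokens if keep_eos else [t for t in tokens if t != "<eos>"]

    return (f1_overlap(prepare(sampled), prepare(reference))
            - f1_overlap(prepare(greedy), prepare(reference)))


if __name__ == "__main__":
    reference = ["a", "dog", "runs", "on", "grass", "<eos>"]
    sampled = ["a", "dog", "runs", "<eos>"]
    greedy = ["a", "dog", "on", "grass", "<eos>"]
    for keep in (True, False):
        print(f"keep_eos={keep}: reward={scst_reward(sampled, greedy, reference, keep):.3f}")
```

Running it shows that the same pair of captions receives a different reward depending on whether <eos> is scored, which is why transparency about the chosen convention matters when comparing reported results.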
We introduce a method called the Expansion mechanism that processes the input unconstrained by the number of elements in the sequence. By doing so, the model can learn more effectively compared to traditional attention-based approaches. To support this…
External link:
http://arxiv.org/abs/2208.06551
Most recent state-of-the-art architectures rely on combinations and variations of three approaches: convolutional, recurrent, and self-attentive methods. Our work attempts to lay the basis for a new research direction for sequence modeling based upon…
External link:
http://arxiv.org/abs/2207.03327
Author:
ZENG, Yong, WANG, Cui, HE, Jian-feng, HUA, Zhong-bao, CHENG, Kai, WU, Xi-qing, SUN, Wei, WANG, Li, HU, Jia-cheng, TANG, Hong-hu *
Published in:
In Transactions of Nonferrous Metals Society of China December 2023 33(12):3812-3824
Author:
Chiu, Kuo Yuan, Govindan, Venkatesan, Lin, Ling-Chuan, Huang, Shin-Han, Hu, Jia-Cheng, Lee, Kun-Mu, Gavin Tsai, Hui-Hsu, Chang, Sheng-Hsiung, Wu, Chun-Guey
Published in:
In Dyes and Pigments February 2016 125:27-35
Expansion methods explore the possibility of performance bottlenecks in the input length in Deep Learning methods. In this work, we introduce the Block Static Expansion, which distributes and processes the input over a heterogeneous and arbitrarily big…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::3d557d469aff21b35ca458ac8a24b947
http://arxiv.org/abs/2208.06551
Academic article
Sign-in is required to view this result.