Showing 1 - 8 of 8 for search: '"Shang, Chenming"'
Author:
Zhang, Hengyuan, Shang, Chenming, Wang, Sizhe, Zhang, Dongdong, Sun, Renliang, Yu, Yiyao, Yang, Yujiu, Wei, Furu
Although fine-tuning Large Language Models (LLMs) with multilingual data can rapidly enhance their multilingual capabilities, the resulting models still exhibit a performance gap between the dominant language (e.g., English) and non-dominant ones due to the im…
External link:
http://arxiv.org/abs/2410.19453
Concept Bottleneck Models (CBMs) map the black-box visual representations extracted by deep neural networks onto a set of interpretable concepts and use those concepts to make predictions, enhancing the transparency of the decision-making process. Mult… (a minimal sketch of the CBM idea follows the link below)
External link:
http://arxiv.org/abs/2404.08978
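The abstract above describes the general Concept Bottleneck Model setup: visual features are first mapped to interpretable concept scores, and the label is predicted from those concepts alone. The following is a minimal sketch of that idea under assumed dimensions and layer choices; it is illustrative only and is not the architecture from this paper.

```python
# Minimal Concept Bottleneck Model sketch (assumed architecture, not the paper's design):
# visual features -> interpretable concept scores -> class prediction from concepts only.
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, feature_dim=512, num_concepts=64, num_classes=10):
        super().__init__()
        self.concept_layer = nn.Linear(feature_dim, num_concepts)  # features -> concepts
        self.classifier = nn.Linear(num_concepts, num_classes)     # concepts -> label

    def forward(self, features):
        concepts = torch.sigmoid(self.concept_layer(features))  # interpretable bottleneck
        logits = self.classifier(concepts)
        return logits, concepts

# Usage: `features` would normally come from a pretrained vision backbone.
model = ConceptBottleneckModel()
features = torch.randn(4, 512)       # placeholder batch of visual features
logits, concepts = model(features)
print(logits.shape, concepts.shape)  # torch.Size([4, 10]) torch.Size([4, 64])
```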
The multimodal deep neural networks represented by CLIP have generated rich downstream applications owing to their excellent performance, making an understanding of CLIP's decision-making process an essential research topic. Due to the complex…
External link:
http://arxiv.org/abs/2404.08964
Knowledge tracing (KT) plays a crucial role in predicting students' future performance by analyzing their historical learning processes. Deep neural networks (DNNs) have shown great potential in solving the KT problem. However, there still exist some… (a minimal sketch of the KT setup follows the link below)
External link:
http://arxiv.org/abs/2403.07322
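As a reference point for the knowledge-tracing task described above, here is a minimal deep-KT-style sketch: each past interaction is encoded over the question/correctness space and a recurrent network predicts per-question correctness probabilities at the next step. The dimensions and GRU choice are assumptions for illustration, not the model proposed in this paper.

```python
# Minimal deep knowledge tracing sketch (assumed DKT-style setup, not this paper's model):
# each interaction is (question id, correct/incorrect), one-hot over 2*num_questions,
# and a GRU predicts the probability of answering each question correctly next.
import torch
import torch.nn as nn

class SimpleDKT(nn.Module):
    def __init__(self, num_questions=100, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(2 * num_questions, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_questions)  # per-question mastery logits

    def forward(self, interactions):
        # interactions: (batch, seq_len, 2*num_questions) one-hot interaction history
        h, _ = self.rnn(interactions)
        return torch.sigmoid(self.out(h))  # predicted correctness probabilities

model = SimpleDKT()
history = torch.zeros(1, 5, 200)  # one student, 5 past interactions, 100 questions
probs = model(history)
print(probs.shape)  # torch.Size([1, 5, 100])
```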
Knowledge tracing (KT) aims to estimate a student's knowledge mastery based on their historical interactions. Recently, deep learning based KT (DLKT) approaches have achieved impressive performance on the KT task. These DLKT models heavily rely on…
External link:
http://arxiv.org/abs/2403.06725
The standard definition generation task requires automatically producing mono-lingual definitions (e.g., English definitions for English words), but ignores that the generated definitions may also contain words unfamiliar to language learners. I…
External link:
http://arxiv.org/abs/2306.06058
Author:
Zhang, Kai, Gu, Shuhang, Timofte, Radu, Shang, Taizhang, Dai, Qiuju, Zhu, Shengchen, Yang, Tong, Guo, Yandong, Jo, Younghyun, Yang, Sejong, Kim, Seon Joo, Zha, Lin, Jiang, Jiande, Gao, Xinbo, Lu, Wen, Liu, Jing, Yoon, Kwangjin, Jeon, Taegyun, Akita, Kazutoshi, Ooba, Takeru, Ukita, Norimichi, Luo, Zhipeng, Yao, Yuehan, Xu, Zhenyu, He, Dongliang, Wu, Wenhao, Ding, Yukang, Li, Chao, Li, Fu, Wen, Shilei, Li, Jianwei, Yang, Fuzhi, Yang, Huan, Fu, Jianlong, Kim, Byung-Hoon, Baek, JaeHyun, Ye, Jong Chul, Fan, Yuchen, Huang, Thomas S., Lee, Junyeop, Lee, Bokyeung, Min, Jungki, Kim, Gwantae, Lee, Kanghyu, Park, Jaihyun, Mykhailych, Mykola, Zhong, Haoyu, Shi, Yukai, Yang, Xiaojun, Yang, Zhijing, Lin, Liang, Zhao, Tongtong, Peng, Jinjia, Wang, Huibing, Jin, Zhi, Wu, Jiahao, Chen, Yifu, Shang, Chenming, Zhang, Huanrong, Min, Jeongki, S, Hrishikesh P, Puthussery, Densen, C V, Jiji
This paper reviews the NTIRE 2020 challenge on perceptual extreme super-resolution, with a focus on the proposed solutions and results. The challenge task was to super-resolve an input image with a magnification factor of 16 based on a set of prior examples of… (a minimal sketch of a ×16 upsampler follows the link below)
External link:
http://arxiv.org/abs/2005.01056
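To make the ×16 magnification factor concrete, here is a toy upsampling network that reaches ×16 by stacking four ×2 PixelShuffle stages. It is an assumed illustrative skeleton, not any challenge entry or the methods reviewed in the paper.

```python
# Toy x16 super-resolution skeleton (assumed example, not a challenge solution):
# four stacked PixelShuffle(2) stages give 2**4 = 16x spatial magnification.
import torch
import torch.nn as nn

def upsample_stage(channels):
    # conv expands channels by 4, PixelShuffle(2) trades them for 2x resolution
    return nn.Sequential(nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2))

class ToySRx16(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.up = nn.Sequential(*[upsample_stage(channels) for _ in range(4)])
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        return self.tail(self.up(self.head(x)))

lr = torch.randn(1, 3, 16, 16)  # low-resolution input
sr = ToySRx16()(lr)
print(sr.shape)                  # torch.Size([1, 3, 256, 256])
```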
Academic article
This result is available only to logged-in users.