Showing 1 - 10 of 91 results for search: '"Li, Yian"'
Author:
Zhong, Pengzhi, Guo, Xiaoyu, Huang, Defeng, Peng, Xiaojun, Li, Yian, Zhao, Qijun, Li, Shuiwang
In recent years, the field of visual tracking has made significant progress with the application of large-scale training datasets. These datasets have supported the development of sophisticated algorithms, enhancing the accuracy and stability of visual …
External link:
http://arxiv.org/abs/2408.11463
Counterfactual reasoning, a crucial manifestation of human intelligence, refers to making presuppositions based on established facts and extrapolating potential outcomes. Existing multimodal large language models (MLLMs) have exhibited impressive …
External link:
http://arxiv.org/abs/2404.12966
Anti-spoofing detection has become a necessity for face recognition systems due to the security threat posed by spoofing attacks. Despite great success against traditional attacks, most deep-learning-based methods perform poorly on 3D masks, which can hig…
External link:
http://arxiv.org/abs/2310.16569
Author:
Qi, Wanhao, Liu, Bin, Li, Yian, Liu, Zhu, Rui, Shiqiao, Feng, Shuaipeng, Lu, Junya, Wang, Siling, Zhao, Qinfu
Published in:
In Chemical Engineering Journal 1 May 2024 487
Despite well-developed cutting-edge representation learning for language, most language representation models usually focus on specific levels of linguistic units. This work introduces universal language representation learning, i.e., embeddings of d…
External link:
http://arxiv.org/abs/2105.14478
Although pre-trained contextualized language models such as BERT achieve significant performance on various downstream tasks, current language representation still focuses only on a linguistic objective at a specific granularity, which may not be applicab…
External link:
http://arxiv.org/abs/2012.14320
Author:
Ye, Mengwei, Zhang, Weikang, Xu, Hongwei, Xie, Peiyu, Song, Luming, Sun, Xiaohan, Li, Yian, Wang, Siling, Zhao, Qinfu
Published in:
In Journal of Colloid And Interface Science 15 January 2025 678 Part A:378-392
Despite well-developed cutting-edge representation learning for language, most language representation models usually focus on a specific level of linguistic unit, which causes great inconvenience when confronted with multiple layers of l…
External link:
http://arxiv.org/abs/2009.04656
Pre-trained contextualized language models such as BERT have shown great effectiveness in a wide range of downstream Natural Language Processing (NLP) tasks. However, the effective representations offered by the models target each token inside a s…
External link:
http://arxiv.org/abs/2004.13947
Academic article
This result cannot be displayed for unauthenticated users.
You must log in to view this result.