Showing 1 - 10 of 118 for search: '"Xia, Jiangnan"'
Published in:
IEEE Transactions on Knowledge and Data Engineering, 2023
POI recommendation is practically important to facilitate various Location-Based Social Network services, and has attracted rising research attention recently. Existing works generally assume the available POI check-ins reported by users are the ground-truth…
External link:
http://arxiv.org/abs/2311.00491
Author:
Zeng, Yan, Zhang, Hanbo, Zheng, Jiani, Xia, Jiangnan, Wei, Guoqiang, Wei, Yang, Zhang, Yuchen, Kong, Tao
Recent advancements in Large Language Models (LLMs) such as GPT4 have displayed exceptional multi-modal capabilities in following open-ended instructions given images. However, the performance of these models heavily relies on design choices such as…
External link:
http://arxiv.org/abs/2307.02469
Published in:
In Sensors and Actuators: B. Chemical 1 November 2024 418
Published in:
In Materials Science in Semiconductor Processing 1 June 2024 175
Academic article
This result is not available to unauthenticated users; sign in to view it.
Author:
Zhang, Rui, Xia, Jiangnan, Ahmed, Ihab, Gough, Andrew, Armstrong, Ian, Upadhyay, Abhishek, Fu, Yalei, Enemali, Godwin, Lengden, Michael, Johnstone, Walter, Wright, Paul, Ozanyan, Krikor, Pourkashanian, Mohamed, McCann, Hugh, Liu, Chang
Published in:
In Sensors and Actuators: B. Chemical 1 December 2023 396
Commonsense and background knowledge is required for a QA model to answer many nontrivial questions. Different from existing work on knowledge-aware QA, we focus on a more challenging task of leveraging external knowledge to generate answers in natural language…
External link:
http://arxiv.org/abs/1909.02745
This paper focuses on how to take advantage of external relational knowledge to improve machine reading comprehension (MRC) with multi-task learning. Most of the traditional methods in MRC assume that the knowledge used to get the correct answer generally…
External link:
http://arxiv.org/abs/1908.04530
Recently, the pre-trained language model BERT (and its robustly optimized version, RoBERTa) has attracted a lot of attention in natural language understanding (NLU), and achieved state-of-the-art accuracy in various NLU tasks, such as sentiment classification…
External link:
http://arxiv.org/abs/1908.04577
Author:
Qiu, Xincan, Liu, Yu, Xia, Jiangnan, Guo, Jing, Chen, Ping-An, Wei, Huan, Shi, Xiaosong, Chen, Chen, Zeng, Zebing, Chen, Huipeng, Jiang, Lang, Liao, Lei, Hu, Yuanyuan
Published in:
In Cell Reports Physical Science 18 January 2023 4(1)