Showing 1 - 10 of 891 for search: '"Li Shiyang"'
Published in:
Guoji Yanke Zazhi, Vol 24, Iss 11, Pp 1816-1820 (2024)
AIM: To investigate the early visual quality after 0.05 D interval spherical lens optometry-guided small incision lenticule extraction (SMILE) for the correction of different degrees of myopia. METHODS: Retrospective study. A total of 200 cases (200 eyes)…
External link:
https://doaj.org/article/c613c4b935054d4098dce2d58ff805c6
Author:
Wang, Kuan, Bukharin, Alexander, Jiang, Haoming, Yin, Qingyu, Wang, Zhengyang, Zhao, Tuo, Shang, Jingbo, Zhang, Chao, Yin, Bing, Li, Xian, Chen, Jianshu, Li, Shiyang
Instruction fine-tuning (IFT) elicits instruction-following capabilities and steers the behavior of large language models (LLMs) via supervised learning. However, existing models trained on open-source IFT datasets only have the ability to follow instructions… (an illustrative sketch follows this record)
External link:
http://arxiv.org/abs/2409.13733
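The record above frames IFT as supervised learning on instruction/response pairs. As a minimal sketch of that objective only (nothing below is from the paper: the toy model, the random token ids, and the fixed prompt length are placeholder assumptions), the loss is standard next-token cross-entropy with the instruction tokens masked out, so gradients come solely from the supervised responses:

```python
# Minimal sketch of supervised instruction fine-tuning (IFT).
import torch
import torch.nn.functional as F

VOCAB = 100

class ToyCausalLM(torch.nn.Module):
    """Placeholder causal LM: token ids in, next-token logits out."""
    def __init__(self, vocab=VOCAB, dim=32):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.head = torch.nn.Linear(dim, vocab)

    def forward(self, ids):
        return self.head(self.emb(ids))  # (batch, seq, vocab)

def ift_loss(model, input_ids, prompt_len):
    # Shift: position t predicts token t+1.
    logits = model(input_ids[:, :-1])
    labels = input_ids[:, 1:].clone()
    # Mask the instruction part so only response tokens carry loss.
    labels[:, : prompt_len - 1] = -100
    return F.cross_entropy(
        logits.reshape(-1, VOCAB), labels.reshape(-1), ignore_index=-100
    )

model = ToyCausalLM()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
batch = torch.randint(0, VOCAB, (4, 16))  # placeholder prompt+response ids
loss = ift_loss(model, batch, prompt_len=8)
loss.backward()
opt.step()
print(float(loss))
```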
Author:
Cheng, Kewei, Yang, Jingfeng, Jiang, Haoming, Wang, Zhengyang, Huang, Binxuan, Li, Ruirui, Li, Shiyang, Li, Zheng, Gao, Yifan, Li, Xian, Yin, Bing, Sun, Yizhou
Reasoning encompasses two typical types: deductive reasoning and inductive reasoning. Despite extensive research into the reasoning capabilities of Large Language Models (LLMs), most studies have failed to rigorously differentiate between inductive and deductive reasoning… (an illustrative sketch follows this record)
External link:
http://arxiv.org/abs/2408.00114
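The inductive/deductive distinction the abstract draws can be made concrete with a toy sketch (entirely illustrative; the paper evaluates LLMs, not hand-written rules): deduction applies a given rule to a new input, while induction must first recover the rule from observed examples before applying it.

```python
# Toy contrast between deductive and inductive reasoning.

def deductive(rule, x):
    """Deduction: an explicitly stated rule is applied to a new instance."""
    return rule(x)

def inductive(examples, candidate_rules, x):
    """Induction (toy stand-in): pick whichever candidate rule is
    consistent with all observed (input, output) pairs, then apply it."""
    for rule in candidate_rules:
        if all(rule(a) == b for a, b in examples):
            return rule(x)
    raise ValueError("no candidate rule fits the observations")

double = lambda n: 2 * n
square = lambda n: n * n

print(deductive(double, 7))                              # rule given: 14
print(inductive([(2, 4), (3, 9)], [double, square], 7))  # rule inferred: 49
```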
Author:
Wang, Yu, Gao, Yifan, Chen, Xiusi, Jiang, Haoming, Li, Shiyang, Yang, Jingfeng, Yin, Qingyu, Li, Zheng, Li, Xian, Yin, Bing, Shang, Jingbo, McAuley, Julian
Existing Large Language Models (LLMs) usually remain static after deployment, which might make it hard to inject new knowledge into the model. We aim to build models containing a considerable portion of self-updatable parameters, enabling the model to… (an illustrative sketch follows this record)
External link:
http://arxiv.org/abs/2402.04624
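As a hedged illustration of the general idea of self-updatable parameters (not the paper's architecture; the memory block, its size, and the read mechanism are all assumptions), one can freeze a backbone and let only a dedicated parameter block receive gradients when new knowledge arrives:

```python
# Generic sketch: frozen backbone plus an updatable memory parameter block.
import torch

class MemoryAugmentedNet(torch.nn.Module):
    def __init__(self, dim=32, memory_slots=16):
        super().__init__()
        self.backbone = torch.nn.Linear(dim, dim)  # frozen after pretraining
        self.memory = torch.nn.Parameter(torch.zeros(memory_slots, dim))

    def forward(self, x):
        h = self.backbone(x)
        attn = torch.softmax(h @ self.memory.T, dim=-1)  # read from memory
        return h + attn @ self.memory

net = MemoryAugmentedNet()
for p in net.backbone.parameters():
    p.requires_grad_(False)                    # keep the backbone static

# Inject "new knowledge" by fitting only the memory parameters.
opt = torch.optim.SGD([net.memory], lr=0.1)
x, target = torch.randn(8, 32), torch.randn(8, 32)
loss = torch.nn.functional.mse_loss(net(x), target)
loss.backward()
opt.step()
```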
Author:
Bukharin, Alexander, Li, Shiyang, Wang, Zhengyang, Yang, Jingfeng, Yin, Bing, Li, Xian, Zhang, Chao, Zhao, Tuo, Jiang, Haoming
Recent works have shown that by curating high-quality and diverse instruction tuning datasets, we can significantly improve instruction-following capabilities. However, creating such datasets is difficult, and most works rely on manual curation or proprietary… (an illustrative sketch follows this record)
External link:
http://arxiv.org/abs/2311.14736
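One common way to make diversity-oriented curation concrete (purely illustrative; the paper's actual selection method may differ) is greedy farthest-point selection over instruction embeddings, which favors diversity by always adding the example farthest from everything already chosen:

```python
# Hedged sketch of diversity-aware subset selection for instruction tuning.
import numpy as np

def select_diverse(embeddings, k):
    """Greedily pick k points that maximize the minimum pairwise distance,
    a cheap proxy for dataset diversity (greedy k-center)."""
    chosen = [0]  # seed with the first example
    dists = np.linalg.norm(embeddings - embeddings[0], axis=1)
    while len(chosen) < k:
        nxt = int(dists.argmax())  # farthest from everything chosen so far
        chosen.append(nxt)
        dists = np.minimum(
            dists, np.linalg.norm(embeddings - embeddings[nxt], axis=1)
        )
    return chosen

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 64))  # placeholder instruction embeddings
print(select_diverse(emb, k=5))
```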
Author:
Yan, Jun, Yadav, Vikas, Li, Shiyang, Chen, Lichang, Tang, Zheng, Wang, Hai, Srinivasan, Vijay, Ren, Xiang, Jin, Hongxia
Instruction-tuned Large Language Models (LLMs) have become a ubiquitous platform for open-ended applications due to their ability to modulate responses based on human instructions. The widespread use of LLMs holds significant potential for shaping public…
External link:
http://arxiv.org/abs/2307.16888
While instruction-tuned models have shown remarkable success in various natural language processing tasks, accurately evaluating their ability to follow instructions remains challenging. Existing benchmarks primarily focus on common instructions that…
External link:
http://arxiv.org/abs/2307.10558
Author:
Chen, Lichang, Li, Shiyang, Yan, Jun, Wang, Hai, Gunaratna, Kalpa, Yadav, Vikas, Tang, Zheng, Srinivasan, Vijay, Zhou, Tianyi, Huang, Heng, Jin, Hongxia
Large language models (LLMs) strengthen instruction-following capability through instruction-finetuning (IFT) on supervised instruction/response data. However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly contain many low-quality instances… (an illustrative sketch follows this record)
External link:
http://arxiv.org/abs/2307.08701
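A hedged sketch of the score-then-filter pattern the abstract points at: an external judge rates each instruction/response pair and only pairs above a threshold are kept. The judge below is a hypothetical stand-in (a crude length heuristic), not the rater used in the paper:

```python
# Score-and-filter cleaning of an IFT dataset (illustrative only).

def judge(instruction: str, response: str) -> float:
    """Placeholder rater on a 0-5 scale; in practice this could be a
    strong LLM prompted to grade response quality."""
    if not response.strip():
        return 0.0
    return min(5.0, 1.0 + 0.5 * len(response.split()))

def filter_ift(dataset, threshold=2.0):
    """Keep only pairs whose judged quality clears the threshold."""
    return [ex for ex in dataset
            if judge(ex["instruction"], ex["response"]) >= threshold]

data = [
    {"instruction": "Name a prime number.", "response": "7 is prime."},
    {"instruction": "Summarize photosynthesis.", "response": ""},
    {"instruction": "Translate 'dog' to French.",
     "response": "The French word for 'dog' is 'chien'."},
]
print(len(filter_ift(data)))  # prints 2: the empty-response pair is dropped
```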
Author:
Li, Shiyang, Gao, Yifan, Jiang, Haoming, Yin, Qingyu, Li, Zheng, Yan, Xifeng, Zhang, Chao, Yin, Bing
Answering complex questions often requires reasoning over knowledge graphs (KGs). State-of-the-art methods often utilize entities in questions to retrieve local subgraphs, which are then fed into a KG encoder, e.g. graph neural networks (GNNs), to model… (an illustrative sketch follows this record)
External link:
http://arxiv.org/abs/2305.18742
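The retrieve-then-encode pipeline the abstract describes can be sketched as follows (the toy graph, random features, and single mean-aggregation layer are all assumptions, not the paper's model): pull a k-hop subgraph around the question's entities, then run message passing over it.

```python
# Toy KG question-answering pipeline: subgraph retrieval + one GNN layer.
import numpy as np

edges = [("einstein", "physicist"), ("physicist", "science"),
         ("einstein", "relativity"), ("curie", "physicist")]
adj = {}
for u, v in edges:                        # undirected toy KG
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def k_hop_subgraph(seeds, k):
    """BFS out k hops from the question's entities."""
    frontier, seen = set(seeds), set(seeds)
    for _ in range(k):
        frontier = {n for u in frontier for n in adj.get(u, ())} - seen
        seen |= frontier
    return seen

nodes = sorted(k_hop_subgraph({"einstein"}, k=1))
rng = np.random.default_rng(0)
feat = {n: rng.normal(size=8) for n in nodes}  # toy node features

def gnn_layer(nodes, feat):
    """One round of mean message passing (a minimal GNN layer)."""
    return {n: np.mean([feat[m] for m in adj[n] if m in feat] + [feat[n]],
                       axis=0)
            for n in nodes}

out = gnn_layer(nodes, feat)
print(sorted(out), out["einstein"].shape)  # retrieved nodes + an encoding
```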
Large language models (LLMs) demonstrate remarkable medical expertise, but data privacy concerns impede their direct use in healthcare environments. Although offering improved data privacy protection, domain-specific small language models (SLMs) often…
External link:
http://arxiv.org/abs/2305.12723