Showing 1 - 10 of 407 for search: '"Sha LEI"'
Author:
WU Lingheng, CHEN Jianxiong, ZHANG Mengjiao, SHA Lei, CAO Mengmeng, SHEN Cuiqin, DU Lianfang, LI Zhaojun
Published in:
Shanghai Jiaotong Daxue xuebao. Yixue ban, Vol 43, Iss 8, Pp 1024-1031 (2023)
Objective: To explore the relationship between poor blood glucose control and early impaired cardiac function in patients with type 2 diabetes mellitus (T2DM). Methods: Eighty-three patients diagnosed with T2DM in Jiading Branch of Shanghai General Hospital…
External link:
https://doaj.org/article/4627a615d6394adb8be19d31cadcc7aa
Author:
Ren, Qibing, Li, Hao, Liu, Dongrui, Xie, Zhanxu, Lu, Xiaoya, Qiao, Yu, Sha, Lei, Yan, Junchi, Ma, Lizhuang, Shao, Jing
This study exposes the safety vulnerabilities of Large Language Models (LLMs) in multi-turn interactions, where malicious users can obscure harmful intents across several queries. We introduce ActorAttack, a novel multi-turn attack method inspired by…
External link:
http://arxiv.org/abs/2410.10700
Author:
Wang, Xinyuan, Huang, Victor Shea-Jay, Chen, Renmiao, Wang, Hao, Pan, Chengwei, Sha, Lei, Huang, Minlie
While large language models (LLMs) exhibit remarkable capabilities across various tasks, they encounter potential security risks such as jailbreak attacks, which exploit vulnerabilities to bypass security measures and generate harmful outputs. Existing…
External link:
http://arxiv.org/abs/2410.09804
Author:
Gao, Bofei, Song, Feifan, Yang, Zhe, Cai, Zefan, Miao, Yibo, Dong, Qingxiu, Li, Lei, Ma, Chenghao, Chen, Liang, Xu, Runxin, Tang, Zhengyang, Wang, Benyou, Zan, Daoguang, Quan, Shanghaoran, Zhang, Ge, Sha, Lei, Zhang, Yichang, Ren, Xuancheng, Liu, Tianyu, Chang, Baobao
Recent advancements in large language models (LLMs) have led to significant breakthroughs in mathematical reasoning capabilities. However, existing benchmarks like GSM8K or MATH are now being solved with high accuracy (e.g., OpenAI o1 achieves 94.8%…
External link:
http://arxiv.org/abs/2410.07985
Author:
Gao, Bofei, Song, Feifan, Miao, Yibo, Cai, Zefan, Yang, Zhe, Chen, Liang, Hu, Helan, Xu, Runxin, Dong, Qingxiu, Zheng, Ce, Quan, Shanghaoran, Xiao, Wen, Zhang, Ge, Zan, Daoguang, Lu, Keming, Yu, Bowen, Liu, Dayiheng, Cui, Zeyu, Yang, Jian, Sha, Lei, Wang, Houfeng, Sui, Zhifang, Wang, Peiyi, Liu, Tianyu, Chang, Baobao
Large Language Models (LLMs) exhibit remarkably powerful capabilities. One of the crucial factors to achieve success is aligning the LLM's output with human preferences. This alignment process often requires only a small amount of data to efficiently…
External link:
http://arxiv.org/abs/2409.02795
With the growing deployment of LLMs in daily applications like chatbots and content generation, efforts to ensure outputs align with human values and avoid harmful content have intensified. However, increasingly sophisticated jailbreak attacks threaten…
External link:
http://arxiv.org/abs/2409.03788
Author:
Fengjie Huang, Xiaojiao Zheng, Xiaohui Ma, Runqiu Jiang, Wangyi Zhou, Shuiping Zhou, Yunjing Zhang, Sha Lei, Shouli Wang, Junliang Kuang, Xiaolong Han, Meilin Wei, Yijun You, Mengci Li, Yitao Li, Dandan Liang, Jiajian Liu, Tianlu Chen, Chao Yan, Runmin Wei, Cynthia Rajani, Chengxing Shen, Guoxiang Xie, Zhaoxiang Bian, Houkai Li, Aihua Zhao, Wei Jia
Published in:
Nature Communications, Vol 10, Iss 1, Pp 1-17 (2019)
Pu-erh tea displays cholesterol-lowering properties. Here, Huang et al. show that this is mostly due to the action of a pigment in Pu-erh tea that induces changes in certain gut microbiota and bile acid levels, thus modulating the gut-liver metabolic…
External link:
https://doaj.org/article/e98a12c3666947a7a7f653b46b316205
Large language models (LLMs) are proven to benefit substantially from retrieval-augmented generation (RAG) in alleviating hallucinations when confronted with knowledge-intensive questions. RAG adopts information retrieval techniques to inject external knowledge…
External link:
http://arxiv.org/abs/2405.18111
While Retrieval-Augmented Generation (RAG) plays a crucial role in the application of Large Language Models (LLMs), existing retrieval methods in knowledge-dense domains like law and medicine still suffer from a lack of multi-perspective views, which…
External link:
http://arxiv.org/abs/2404.12879
Large language models (LLMs) demonstrate substantial capabilities in solving math problems. However, they tend to produce hallucinations when given questions containing unreasonable errors. In this paper, we study the behavior of LLMs when faced with…
External link:
http://arxiv.org/abs/2403.19346