Showing 1 - 10 of 425 results for search: '"Wu Xinwei"'
Published in:
Zhongguo shipin weisheng zazhi, Vol 35, Iss 12, Pp 1710-1714 (2023)
Objective: To investigate the molecular and biological characteristics of Vibrio vulnificus (V. vulnificus) isolated from Guangzhou. Methods: Thirty-eight strains of V. vulnificus were collected from Guangzhou, and whole-genome sequences were obtained…
External link:
https://doaj.org/article/ff78090fd8cc447f853b3ecddc1624d3
Author:
ZHOU Yong, WU Xinwei, HU Yushan, WU Yejian, LIU Junhua, HOU Shuiping, ZHANG Xinqiang, ZHANG Jian
Published in:
Zhongguo shipin weisheng zazhi, Vol 33, Iss 04, Pp 444-450 (2021)
Objective: To investigate the prevalence, antimicrobial susceptibility, and enterotoxin genes of Staphylococcus aureus (S. aureus) isolates in ready-to-eat (RTE) foods in Guangzhou from 2008 to 2019. Methods: RTE food samples were randomly collected from r…
External link:
https://doaj.org/article/6cd95373bbe64d7692e72546fbcdb038
It is widely acknowledged that large language models (LLMs) encode a vast reservoir of knowledge after being trained on massive data. Recent studies disclose knowledge conflicts in LLM generation, wherein outdated or incorrect parametric knowledge (i.e.…
External link:
http://arxiv.org/abs/2406.18406
Ensuring that large language models (LLMs) behave consistently with human goals, values, and intentions is crucial for their safety, yet computationally expensive. To reduce the computational cost of alignment training of LLMs, especially for those with…
External link:
http://arxiv.org/abs/2405.13578
Prior research has revealed that certain abstract concepts are linearly represented as directions in the representation space of LLMs, predominantly centered around English. In this paper, we extend this investigation to a multilingual context, with… (an illustrative sketch of the linear-direction idea follows this record)
External link:
http://arxiv.org/abs/2402.18120
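The abstract above refers to concepts being linearly represented as directions in an LLM's representation space. As a rough, generic illustration only (not the cited paper's actual method), a candidate concept direction can be estimated as the difference of class-mean hidden states, with examples scored by projection onto it; the arrays below are synthetic placeholders, not real model activations.

import numpy as np

# Synthetic stand-ins for hidden states extracted from some LLM layer:
# rows are examples, columns are hidden dimensions.
rng = np.random.default_rng(0)
hidden_dim = 16
states_with_concept = rng.normal(loc=0.5, size=(100, hidden_dim))
states_without_concept = rng.normal(loc=0.0, size=(100, hidden_dim))

# A simple candidate "concept direction": difference of the class means, normalized.
direction = states_with_concept.mean(axis=0) - states_without_concept.mean(axis=0)
direction /= np.linalg.norm(direction)

def concept_score(hidden_state):
    # Projection onto the direction; larger values suggest the concept is
    # more strongly expressed in this representation.
    return float(hidden_state @ direction)

print(concept_score(states_with_concept[0]), concept_score(states_without_concept[0]))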
Large language models pretrained on huge amounts of data capture rich knowledge and information from the training data. The ability of data memorization and regurgitation in pretrained language models, revealed in previous studies, brings the risk of… (a rough illustration of a regurgitation check follows this record)
External link:
http://arxiv.org/abs/2310.20138
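The regurgitation risk mentioned in the abstract above is often probed by prompting a model with the prefix of a known training passage and measuring how much of the original continuation it reproduces verbatim. The sketch below is a generic illustration under that assumption, not the cited paper's procedure; "gpt2" is used only as a small, publicly available stand-in model.

from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def verbatim_overlap(prefix, reference_continuation, max_new_tokens=30):
    # Greedily continue the prefix and count how many leading tokens of the
    # reference continuation are reproduced exactly.
    inputs = tok(prefix, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    generated = tok.decode(out[0][inputs["input_ids"].shape[1]:])
    ref_tokens = tok.tokenize(reference_continuation)[:max_new_tokens]
    gen_tokens = tok.tokenize(generated)
    matched = 0
    for r, g in zip(ref_tokens, gen_tokens):
        if r != g:
            break
        matched += 1
    return matched / max(len(ref_tokens), 1)

# A value close to 1.0 indicates near-verbatim regurgitation of the passage.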
Author:
Shen, Tianhao, Jin, Renren, Huang, Yufei, Liu, Chuang, Dong, Weilong, Guo, Zishan, Wu, Xinwei, Liu, Yan, Xiong, Deyi
Recent years have witnessed remarkable progress in large language models (LLMs). Such advancements, while garnering significant attention, have concurrently elicited various concerns. The potential of these models is undeniably vast; however, th…
External link:
http://arxiv.org/abs/2309.15025
Massively multi-task learning with large language models has recently made substantial progress on few-shot generalization. However, this is usually performed in a centralized learning fashion, ignoring the privacy sensitivity issue of (annotated) data… (a minimal federated-averaging sketch follows this record)
External link:
http://arxiv.org/abs/2212.08354
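To make the centralized-versus-private contrast in the abstract above concrete, here is a minimal federated-averaging sketch (a generic FedAvg-style illustration, not the cited paper's method); weighting clients by local dataset size is an assumption of this sketch.

import numpy as np

def federated_average(client_params, client_sizes):
    # Each client trains on its private (annotated) data and shares only its
    # parameter vector; the server averages the vectors weighted by local data
    # size, so raw data never leaves the client.
    total = sum(client_sizes)
    return sum((n / total) * p for n, p in zip(client_sizes, client_params))

# Toy usage: three clients with differently sized private datasets.
params = [np.ones(4) * 1.0, np.ones(4) * 2.0, np.ones(4) * 4.0]
sizes = [100, 200, 700]
print(federated_average(params, sizes))  # weighted average of the parameter vectors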
Knowledge distillation (KD) has been widely used for model compression and knowledge transfer. Typically, a large teacher model trained on sufficient data transfers knowledge to a small student model. However, despite the success of KD, little effort has… (a generic sketch of the standard KD objective follows this record)
External link:
http://arxiv.org/abs/2212.08349
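The teacher-to-student transfer described in the abstract above is commonly implemented with the distillation objective sketched below (a generic KD loss, not the specific approach of the cited paper); the temperature and mixing weight alpha are illustrative choices.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-softened teacher and
    # student distributions, scaled by T^2 as is conventional.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    distill = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Hard-label term: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1 - alpha) * hard

# Toy usage with random logits for a 10-class problem.
s, t = torch.randn(8, 10), torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(kd_loss(s, t, y))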
Published in:
International Journal of Mechanical Sciences, Vol. 283, 1 December 2024