Showing 1 - 10 of 3,950 for search: '"A. Zidi"'
Published in:
Animal, Vol 13, Iss 8, Pp 1676-1689 (2019)
Hyperketonemia (HYK) is one of the most frequent and costly metabolic disorders in high-producing dairy cows, and its diagnosis is based on β-hydroxybutyrate (BHB) concentration in blood. In the last 10 years, the number of papers that have dealt with …
External link:
https://doaj.org/article/d5b51d8ff5364b47849739aaf0c381f6
Author:
Xiang, Zhen, Zheng, Linzhi, Li, Yanjie, Hong, Junyuan, Li, Qinbin, Xie, Han, Zhang, Jiawei, Xiong, Zidi, Xie, Chulin, Yang, Carl, Song, Dawn, Li, Bo
The rapid advancement of large language models (LLMs) has catalyzed the deployment of LLM-powered agents across numerous applications, raising new concerns regarding their safety and trustworthiness. Existing methods for enhancing the safety of LLMs …
External link:
http://arxiv.org/abs/2406.09187
Author:
Han, Jiayi, Cao, Zidi, Zheng, Weibo, Zhou, Xiangguo, He, Xiangjian, Zhang, Yuanfang, Wei, Daisen
In recent years, zero-shot learning has attracted the focus of many researchers due to its flexibility and generality. Many approaches have been proposed to achieve zero-shot classification of point clouds for 3D object understanding, following …
External link:
http://arxiv.org/abs/2404.19639
We present a novel subtraction method to remove the soft and collinear divergences at next-to-leading order for processes involving an arbitrary number of fragmentation functions, where this method acts directly in the hadronic centre-of-mass frame.
External link:
http://arxiv.org/abs/2403.14574
As modern Large Language Models (LLMs) shatter many state-of-the-art benchmarks in a variety of domains, this paper investigates their behavior in the domains of ethics and fairness, focusing on protected group bias. We conduct a two-part study: first, …
External link:
http://arxiv.org/abs/2403.14727
Recent advancements in Large Language Models (LLMs) have showcased remarkable capabilities across various tasks in different domains. However, the emergence of biases and the potential for generating harmful content in LLMs, particularly under malicious …
External link:
http://arxiv.org/abs/2403.13031
Author:
Xiang, Zhen, Jiang, Fengqing, Xiong, Zidi, Ramasubramanian, Bhaskar, Poovendran, Radha, Li, Bo
Large language models (LLMs) have been shown to benefit from chain-of-thought (COT) prompting, particularly when tackling tasks that require systematic reasoning processes. On the other hand, COT prompting also poses new vulnerabilities in the form of backdoor …
External link:
http://arxiv.org/abs/2401.12242
Author:
Sun, David Q., Abzaliev, Artem, Kotek, Hadas, Xiu, Zidi, Klein, Christopher, Williams, Jason D.
Controversy is a reflection of our zeitgeist and an important aspect of any discourse. The rise of large language models (LLMs) as conversational systems has increased public reliance on these systems for answers to their various questions. Consequently, …
External link:
http://arxiv.org/abs/2310.18130
Backdoor attacks are a common threat to deep neural networks. During testing, samples embedded with a backdoor trigger will be misclassified as an adversarial target by a backdoored model, while samples without the backdoor trigger will be correctly classified …
External link:
http://arxiv.org/abs/2310.17498
Author:
Abdelaziz Jaouadi, MD, Afef Ben Halima, MD, Oumaima Zidi, MD, Emna Bennour, MD, Ikram Kammoun, MD
Published in:
Heart Rhythm O2, Vol 5, Iss 11, Pp 834-838 (2024)
External link:
https://doaj.org/article/46b873e094594381a83bd155b9194645