Showing 1 - 2 of 2 for search: '"Deng, Boyi"'
Retrieval-Augmented Generation (RAG) can alleviate hallucinations of Large Language Models (LLMs) by referencing external documents. However, the misinformation in external documents may mislead LLMs' generation. To address this issue, we explore the…
External link:
http://arxiv.org/abs/2406.11497
Large language models (LLMs) are susceptible to red teaming attacks, which can induce LLMs to generate harmful content. Previous research constructs attack prompts via manual or automatic methods, which have their own limitations on construction cost…
External link:
http://arxiv.org/abs/2310.12505