Showing 1 - 6 of 6 for search: '"Levy, Mosh"'
The exponential growth of scientific literature necessitates advanced tools for effective knowledge exploration. We present Knowledge Navigator, a system designed to enhance exploratory search abilities by organizing and structuring the retrieved documents …
External link:
http://arxiv.org/abs/2408.15836
This paper explores the impact of extending input lengths on the capabilities of Large Language Models (LLMs). Despite recent advancements in LLMs, their performance consistency across different input lengths is not well understood. We investigate …
External link:
http://arxiv.org/abs/2402.14848
Deep neural networks are normally executed in the forward direction. However, in this work, we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue …
External link:
http://arxiv.org/abs/2311.07389
Recent applications of LLMs in Machine Reading Comprehension (MRC) systems have shown impressive results, but the use of shortcuts, mechanisms triggered by features spuriously correlated with the true label, has emerged as a potential threat to their r…
External link:
http://arxiv.org/abs/2310.18360
Author:
Shapira, Natalie, Levy, Mosh, Alavi, Seyed Hossein, Zhou, Xuhui, Choi, Yejin, Goldberg, Yoav, Sap, Maarten, Shwartz, Vered
The escalating debate on AI's capabilities warrants developing reliable metrics to assess machine "intelligence". Recently, many anecdotal examples have been used to suggest that newer large language models (LLMs) like ChatGPT and GPT-4 exhibit Neural Theory-of-Mind …
External link:
http://arxiv.org/abs/2305.14763
Adversarial transferability in black-box scenarios presents a unique challenge: while attackers can employ surrogate models to craft adversarial examples, they lack assurance on whether these examples will successfully compromise the target model. Un…
External link:
http://arxiv.org/abs/2208.10878