Showing 1 - 10 of 283 for search: '"KARBASI, AMIN"'
Author:
Dong, Siyuan, Cai, Zhuotong, Hangel, Gilbert, Bogner, Wolfgang, Widhalm, Georg, Huang, Yaqing, Liang, Qinghao, You, Chenyu, Kumaragamage, Chathura, Fulbright, Robert K., Mahajan, Amit, Karbasi, Amin, Onofrey, John A., de Graaf, Robin A., Duncan, James S.
Published in:
Medical Image Analysis (2024): 103358
Magnetic Resonance Spectroscopic Imaging (MRSI) is a non-invasive imaging technique for studying metabolism and has become a crucial tool for understanding neurological diseases, cancers and diabetes. High spatial resolution MRSI is needed to charact…
External link:
http://arxiv.org/abs/2410.19288
Finetuning foundation models for specific tasks is an emerging paradigm in modern machine learning. The efficacy of task-specific finetuning largely depends on the selection of appropriate training data. We present TSDS (Task-Specific Data Selection)…
External link:
http://arxiv.org/abs/2410.11303
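As an illustration of the data-selection theme in the entry above, here is a minimal embedding-similarity baseline: rank candidate training examples by their closest task example and keep the top k. This is a generic sketch, not the TSDS method from the paper; the embeddings, pool sizes, and the nearest-neighbor criterion are all assumptions for illustration.

# Generic embedding-similarity data selection -- an illustrative baseline,
# NOT the TSDS method from the paper above.
import numpy as np

def select_training_data(candidate_embs, task_embs, k):
    """Return indices of the k candidates most cosine-similar to any task example."""
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    t = task_embs / np.linalg.norm(task_embs, axis=1, keepdims=True)
    sims = c @ t.T                  # (n_candidates, n_task_examples) cosine similarities
    best = sims.max(axis=1)         # score each candidate by its closest task example
    return np.argsort(-best)[:k]    # indices of the k highest-scoring candidates

# Usage with random stand-in embeddings: 10k-example pool, 50 task examples, keep 1k.
rng = np.random.default_rng(0)
pool, task = rng.normal(size=(10_000, 384)), rng.normal(size=(50, 384))
chosen = select_training_data(pool, task, k=1_000)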
Large Language Models (LLMs) have demonstrated remarkable capabilities in performing tasks across various domains without needing explicit retraining. This capability, known as In-Context Learning (ICL), while impressive, exposes LLMs to a variety of…
External link:
http://arxiv.org/abs/2410.11272
Author:
Zhang, Shiyang, Patel, Aakash, Rizvi, Syed A, Liu, Nianchen, He, Sizhuang, Karbasi, Amin, Zappala, Emanuele, van Dijk, David
We explore the emergence of intelligent behavior in artificial systems by investigating how the complexity of rule-based systems influences the capabilities of models trained to predict these rules. Our study focuses on elementary cellular automata (ECA)…
External link:
http://arxiv.org/abs/2410.02536
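For readers unfamiliar with the setting of the entry above: an elementary cellular automaton is a one-dimensional, two-state automaton whose update depends only on a cell and its two neighbors, so each of the 256 possible rules is indexed by an 8-bit Wolfram rule number. A minimal simulator sketch, illustrative only and not code from the paper:

# Minimal elementary cellular automaton (ECA) simulator -- illustrative only.

def eca_step(state, rule):
    """One synchronous update; `rule` is the Wolfram rule number (0-255)."""
    n = len(state)
    # Bit k of the rule gives the next value for neighborhood k = 4*left + 2*center + right.
    table = [(rule >> k) & 1 for k in range(8)]
    return [
        table[4 * state[(i - 1) % n] + 2 * state[i] + state[(i + 1) % n]]
        for i in range(n)
    ]

def run_eca(rule, width=64, steps=32):
    """Evolve a single-seed row with periodic boundaries and print the space-time diagram."""
    state = [0] * width
    state[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in state))
        state = eca_step(state, rule)

if __name__ == "__main__":
    run_eca(110)  # Rule 110, a classic rule exhibiting complex behavior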
Author:
Su, Ellen, Vellore, Anu, Chang, Amy, Mura, Raffaele, Nelson, Blaine, Kassianik, Paul, Karbasi, Amin
The widespread use of Large Language Models (LLMs) in society creates new information security challenges for developers, organizations, and end-users alike. LLMs are trained on large volumes of data, and their susceptibility to reveal the exact cont…
External link:
http://arxiv.org/abs/2409.12367
Author:
Kalavasis, Alkis, Karbasi, Amin, Oikonomou, Argyris, Sotiraki, Katerina, Velegkas, Grigoris, Zampetakis, Manolis
As ML models become increasingly complex and integral to high-stakes domains such as finance and healthcare, they also become more susceptible to sophisticated adversarial attacks. We investigate the threat posed by undetectable backdoors, as defined…
External link:
http://arxiv.org/abs/2406.05660
We study computational aspects of algorithmic replicability, a notion of stability introduced by Impagliazzo, Lei, Pitassi, and Sorrell [2022]. Motivated by a recent line of work that established strong statistical connections between replicability and…
External link:
http://arxiv.org/abs/2405.15599
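For reference, the stability notion named in the entry above can be paraphrased as follows (a restatement, not a quote from the paper): a randomized learning algorithm $\mathcal{A}$ is $\rho$-replicable if, for every input distribution $\mathcal{D}$,
$$\Pr_{S, S' \sim \mathcal{D}^n,\ r}\bigl[\mathcal{A}(S; r) = \mathcal{A}(S'; r)\bigr] \ge 1 - \rho,$$
where $S$ and $S'$ are two independent samples of size $n$ and $r$ is the shared internal randomness; that is, rerunning the algorithm on fresh data with the same random seed yields the same output with high probability.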
We provide efficient replicable algorithms for the problem of learning large-margin halfspaces. Our results improve upon the algorithms provided by Impagliazzo, Lei, Pitassi, and Sorrell [STOC, 2022]. We design the first dimension-independent replicable…
External link:
http://arxiv.org/abs/2402.13857
Despite the significant success of large language models (LLMs), their extensive memory requirements pose challenges for deploying them in long-context token generation. The substantial memory footprint of LLM decoders arises from the necessity to store…
External link:
http://arxiv.org/abs/2402.06082
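To make the memory issue in the entry above concrete, here is a back-of-the-envelope estimate of key-value (KV) cache size for an autoregressive decoder; the model dimensions are hypothetical and not taken from the paper.

# Rough KV-cache size estimate for an autoregressive transformer decoder.
# All model dimensions below are hypothetical, not figures from the paper above.

def kv_cache_bytes(n_layers, n_heads, head_dim, seq_len, bytes_per_elem=2):
    """Keys + values: 2 cached tensors per layer, each of shape (n_heads, seq_len, head_dim)."""
    return 2 * n_layers * n_heads * head_dim * seq_len * bytes_per_elem

# Example: a 7B-class model (32 layers, 32 heads, head_dim 128) at a 32k-token context in fp16.
size = kv_cache_bytes(n_layers=32, n_heads=32, head_dim=128, seq_len=32_768)
print(f"{size / 2**30:.1f} GiB per sequence")  # prints 16.0 GiB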
Author:
Mehrotra, Anay, Zampetakis, Manolis, Kassianik, Paul, Nelson, Blaine, Anderson, Hyrum, Singer, Yaron, Karbasi, Amin
While Large Language Models (LLMs) display versatile functionality, they continue to generate harmful, biased, and toxic content, as demonstrated by the prevalence of human-designed jailbreaks. In this work, we present Tree of Attacks with Pruning (TAP)…
External link:
http://arxiv.org/abs/2312.02119