Showing 1 - 10 of 5,231 for search: '"Mozes, A."'
The democratization of pre-trained language models through open-source initiatives has rapidly advanced innovation and expanded access to cutting-edge technologies. However, this openness also brings significant security risks, including backdoor attacks…
External link:
http://arxiv.org/abs/2402.19334
The discovery of governing differential equations from data is an open frontier in machine learning. The sparse identification of nonlinear dynamics (SINDy) framework (Brunton et al., 2016) enables data-driven discovery of interpretable models…
External link:
http://arxiv.org/abs/2310.04832
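To make the SINDy idea mentioned above concrete, here is a minimal sketch of the framework in its textbook form: build a library of candidate functions of the state and recover a sparse coefficient matrix with sequential thresholded least squares. The toy system (a damped oscillator), the polynomial library, and the threshold value are illustrative assumptions, not details of the paper above.

import numpy as np

def library(X):
    # Candidate functions Theta(X) for a 2-D state: [1, x, y, x^2, x*y, y^2].
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])

def stlsq(Theta, dXdt, threshold=0.05, iters=10):
    # Sequential thresholded least squares: fit, zero small coefficients, refit.
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for j in range(dXdt.shape[1]):
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(Theta[:, big], dXdt[:, j], rcond=None)[0]
    return Xi

# Simulate dx/dt = y, dy/dt = -x - 0.1*y, then try to recover these equations.
t = np.linspace(0, 20, 2000)
dt = t[1] - t[0]
X = np.zeros((len(t), 2))
X[0] = [2.0, 0.0]
for k in range(len(t) - 1):          # plain Euler integration of the toy system
    x, y = X[k]
    X[k + 1] = X[k] + dt * np.array([y, -x - 0.1 * y])
dXdt = np.gradient(X, dt, axis=0)    # numerical time derivatives

Xi = stlsq(library(X), dXdt)
print(Xi)  # surviving nonzero coefficients should approximate the true dynamics

The rows of Xi correspond to the library terms, so the recovered equations can be read off directly from the nonzero entries.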
Spurred by the recent rapid increase in the development and distribution of large language models (LLMs) across industry and academia, much recent work has drawn attention to safety- and security-related threats and vulnerabilities of LLMs, including…
External link:
http://arxiv.org/abs/2308.12833
Author:
Kaddour, Jean, Harris, Joshua, Mozes, Maximilian, Bradley, Herbie, Raileanu, Roberta, McHardy, Robert
Large Language Models (LLMs) went from non-existent to ubiquitous in the machine learning discourse within a few years. Due to the fast pace of the field, it is difficult to identify the remaining challenges and already fruitful application areas. In…
External link:
http://arxiv.org/abs/2307.10169
We show how to assign labels of size $\tilde O(1)$ to the vertices of a directed planar graph $G$, such that from the labels of any three vertices $s,t,f$ we can deduce in $\tilde O(1)$ time whether $t$ is reachable from $s$ in the graph $G \setminus \{f\}$…
External link:
http://arxiv.org/abs/2307.07222
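As a point of reference for the query such labels must answer, here is a brute-force check of the same semantics: reachability from s to t in the directed graph with the failed vertex f removed. This is only a naive BFS over an assumed adjacency-list representation, not the Õ(1)-size labeling scheme of the paper above.

from collections import deque

def reachable_avoiding(graph, s, t, f):
    # Is t reachable from s in the directed graph once vertex f is deleted?
    if s == f or t == f:
        return False
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in graph.get(u, ()):
            if v != f and v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# Example: on the path 0 -> 1 -> 2 -> 3, removing vertex 2 cuts the only s-t path.
G = {0: [1], 1: [2], 2: [3]}
print(reachable_avoiding(G, 0, 3, 2))  # False: every path goes through 2
print(reachable_avoiding(G, 0, 3, 9))  # True: vertex 9 is not on the path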
The Voronoi diagrams technique was introduced by Cabello to compute the diameter of planar graphs in subquadratic time. We present novel applications of this technique in static, fault-tolerant, and partially-dynamic undirected unweighted planar graphs…
External link:
http://arxiv.org/abs/2305.02946
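For readers unfamiliar with the term, a graph Voronoi diagram simply assigns every vertex to its nearest site vertex by shortest-path distance. The sketch below computes one with a multi-source BFS on a small unweighted graph; it is a definitional illustration only, not Cabello's subquadratic diameter algorithm or the planar-graph machinery of the paper above.

from collections import deque

def graph_voronoi(adj, sites):
    # Return {vertex: owning site} for an undirected, unweighted graph.
    owner = {s: s for s in sites}
    queue = deque(sites)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in owner:            # the first site wave to reach v owns it
                owner[v] = owner[u]
                queue.append(v)
    return owner

# A path 0-1-2-3-4 with sites {0, 4}: vertices split between the two endpoints.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(graph_voronoi(adj, [0, 4]))  # {0: 0, 4: 4, 1: 0, 3: 4, 2: 0}; vertex 2 is tied, BFS order breaks the tie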
Author:
Griffin, Lewis D, Kleinberg, Bennett, Mozes, Maximilian, Mai, Kimberly T, Vau, Maria, Caldwell, Matthew, Mavor-Parker, Augustine
Two studies tested the hypothesis that a Large Language Model (LLM) can be used to model psychological change following exposure to influential input. The first study tested a generic mode of influence - the Illusory Truth Effect (ITE) - where earlier…
External link:
http://arxiv.org/abs/2303.06074
Author:
A. V. Golomidov, O. G. Kryuchkova, E. V. Grigoriev, A. A. Chernykh, K. V. Lukashov, E. V. Maltseva, V. G. Mozes, K. A. Golomidov, K. B. Moses
Published in:
Вестник анестезиологии и реаниматологии, Vol 21, Iss 4, Pp 78-84 (2024)
Introduction. The theoretical and practical aspects of short-term and long-term prediction of the onset of multiple organ dysfunction syndrome (MODS) and its outcomes in newborns are a promising area of neonatology, since such prediction allows a doctor to be warned…
External link:
https://doaj.org/article/ce613af41ca04d07b4cb9df239c52cb3
Pretrained large language models (LLMs) are able to solve a wide variety of tasks through transfer learning. Various explainability methods have been developed to investigate their decision-making process. TracIn (Pruthi et al., 2020) is one such gradient-based…
External link:
http://arxiv.org/abs/2302.06598
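Since the entry above mentions TracIn (Pruthi et al., 2020), here is a minimal sketch of the idea on a toy logistic-regression model: the influence of a training point on a test point is the sum, over saved checkpoints, of the learning rate times the dot product of their loss gradients. The toy model, data, and training loop are illustrative assumptions, not the setup of the paper above.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(w, x, y):
    # Gradient of the cross-entropy loss of a logistic model at example (x, y).
    return (sigmoid(w @ x) - y) * x

def tracin(checkpoints, lrs, x_train, y_train, x_test, y_test):
    # TracIn influence: sum_i lr_i * <grad L(w_i, z_train), grad L(w_i, z_test)>.
    return sum(
        lr * grad_loss(w, x_train, y_train) @ grad_loss(w, x_test, y_test)
        for w, lr in zip(checkpoints, lrs)
    )

# Toy data and a short SGD run that saves a checkpoint after each epoch.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w, lr, checkpoints = np.zeros(5), 0.1, []
for epoch in range(5):
    for xi, yi in zip(X, y):
        w -= lr * grad_loss(w, xi, yi)
    checkpoints.append(w.copy())

# Which training example most influenced the prediction on the first example?
scores = [tracin(checkpoints, [lr] * len(checkpoints), X[i], y[i], X[0], y[0])
          for i in range(len(X))]
print(int(np.argmax(scores)))  # index of the most positively influential example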
Author:
Mozes, Maximilian, Hoffmann, Jessica, Tomanek, Katrin, Kouate, Muhamed, Thain, Nithum, Yuan, Ann, Bolukbasi, Tolga, Dixon, Lucas
Text-based safety classifiers are widely used for content moderation and increasingly to tune generative language model behavior - a topic of growing concern for the safety of digital assistants and chatbots. However, different policies require different…
External link:
http://arxiv.org/abs/2302.06541