Showing 1 - 10 of 828 results for search: '"P. Peyrard"'
Recent work demonstrated great promise in the idea of orchestrating collaborations between LLMs, human input, and various tools to address the inherent limitations of LLMs. We propose a novel perspective called semantic decoding, which frames these …
External link:
http://arxiv.org/abs/2403.14562
Author:
Amani, Mohammad Hossein, Baldwin, Nicolas Mario, Mansouri, Amin, Josifoski, Martin, Peyrard, Maxime, West, Robert
Traditional language models, adept at next-token prediction in text sequences, often struggle with transduction tasks between distinct symbolic systems, particularly when parallel data is scarce. Addressing this issue, we introduce symbolic …
External link:
http://arxiv.org/abs/2402.10575
Author:
Davidson, Tim R., Veselovsky, Veniamin, Josifoski, Martin, Peyrard, Maxime, Bosselut, Antoine, Kosinski, Michal, West, Robert
We introduce an approach to evaluate language model (LM) agency using negotiation games. This approach better reflects real-world use cases and addresses some of the shortcomings of alternative LM benchmarks. Negotiation games enable us to study …
External link:
http://arxiv.org/abs/2401.04536
Author:
Monea, Giovanni, Peyrard, Maxime, Josifoski, Martin, Chaudhary, Vishrav, Eisner, Jason, Kıcıman, Emre, Palangi, Hamid, Patra, Barun, West, Robert
Large language models (LLMs) have an impressive ability to draw on novel information supplied in their context. Yet the mechanisms underlying this contextual grounding remain unknown, especially in situations where contextual information contradicts …
External link:
http://arxiv.org/abs/2312.02073
Author:
Peyrard, Michel
Published in:
Chaos 33, 103101 (2023)
We investigate the mechanisms behind the quasi-periodic outbursts in the Covid-19 epidemic. Data for France and Germany show that the patterns of outbursts exhibit a qualitative change in early 2022, which appears as a change in their average period …
External link:
http://arxiv.org/abs/2308.14090
Generative language models (LMs) have become omnipresent across data science. For a wide variety of tasks, inputs can be phrased as natural language prompts for an LM, from whose output the solution can then be extracted. LM performance has …
External link:
http://arxiv.org/abs/2308.06077
Author:
Josifoski, Martin, Klein, Lars, Peyrard, Maxime, Baldwin, Nicolas, Li, Yifei, Geng, Saibo, Schnitzler, Julian Paul, Yao, Yuxing, Wei, Jiheng, Paul, Debjit, West, Robert
Recent advances in artificial intelligence (AI) have produced highly capable and controllable systems. This creates unprecedented opportunities for structured reasoning as well as collaboration among multiple AI systems and humans. To fully realize …
External link:
http://arxiv.org/abs/2308.01285
Despite their impressive performance, large language models (LMs) still struggle with reliably generating complex output structures when not finetuned to follow the required output format exactly. To address this issue, grammar-constrained decoding …
External link:
http://arxiv.org/abs/2305.13971
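The core idea of grammar-constrained decoding can be illustrated with a toy sketch (this is not the paper's implementation): the set of valid outputs is stored as a character trie, and at each step only continuations licensed by the trie are scored, so an invalid output is impossible by construction. The pseudo-scoring function is a stand-in for a real language model.

```python
# Toy sketch of grammar-constrained decoding. The "grammar" here is
# just a finite set of valid strings compiled into a trie; a real
# system would use a CFG or regex automaton over model tokens.
VALID = ["yes", "no", "maybe"]  # toy output language (illustrative)

def build_trie(strings):
    """Compile the valid strings into a nested-dict trie."""
    root = {}
    for s in strings:
        node = root
        for ch in s:
            node = node.setdefault(ch, {})
        node["<end>"] = {}  # end-of-string marker
    return root

def fake_logits(prefix, choices):
    # Stand-in for a language model: arbitrary deterministic scores.
    return {c: hash((prefix, c)) % 100 for c in choices}

def constrained_decode(trie):
    """Greedy decoding restricted to grammar-licensed continuations."""
    out, node = "", trie
    while True:
        choices = list(node.keys())          # only licensed next chars
        scores = fake_logits(out, choices)
        best = max(choices, key=scores.get)  # greedy pick among them
        if best == "<end>":
            return out
        out += best
        node = node[best]

result = constrained_decode(build_trie(VALID))
```

Whatever the (fake) model scores prefer, `result` is guaranteed to be one of the valid strings, which is the point of constraining the decoder rather than the model.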
Author:
Paul, Debjit, Ismayilzada, Mete, Peyrard, Maxime, Borges, Beatriz, Bosselut, Antoine, West, Robert, Faltings, Boi
Language models (LMs) have recently shown remarkable performance on reasoning tasks by explicitly generating intermediate inferences, e.g., chain-of-thought prompting. However, these intermediate inference steps may be inappropriate deductions from …
External link:
http://arxiv.org/abs/2304.01904
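Why faulty intermediate steps matter can be shown with a minimal sketch (a toy illustration, not the paper's method): a multi-step calculation is checked step by step, so a bad deduction is caught where it occurs instead of silently corrupting the final answer.

```python
# Toy illustration of checking intermediate inference steps.
# Each step carries an expected intermediate result (the "reference"
# a critic could compare against); names here are hypothetical.
def checked_chain(start, steps):
    """steps: list of (description, fn, expected_intermediate)."""
    value = start
    for desc, fn, expected in steps:
        value = fn(value)
        if value != expected:
            raise ValueError(f"bad step {desc!r}: got {value}, expected {expected}")
    return value

# "23 apples, buy 6 more, give away 9" as an explicit chain of steps.
total = checked_chain(23, [
    ("buy 6 more", lambda v: v + 6, 29),
    ("give away 9", lambda v: v - 9, 20),
])
```

If the second step were miscomputed (say `v - 8`), the check would raise at that step rather than return a wrong total.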
Large language models (LLMs) have great potential for synthetic data generation. This work shows that useful data can be synthetically generated even for tasks that cannot be solved directly by LLMs: for problems with structured outputs, it is …
External link:
http://arxiv.org/abs/2303.04132
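The reverse-generation idea hinted at in the abstract can be sketched as follows, with a rule-based renderer standing in for the LLM (entity and relation names are illustrative, not from the paper): instead of extracting structure from text, first sample the structured output, then render text from it, yielding (text, structure) training pairs.

```python
# Sketch of reverse synthetic data generation: sample the structured
# target first, then generate the input text from it. A real system
# would prompt an LLM to verbalize the structure; here a template does.
import random

ENTITIES = ["Ada Lovelace", "Charles Babbage"]      # illustrative
RELATIONS = ["collaborated_with", "wrote_to"]        # illustrative

def sample_structure(rng):
    """Sample a (subject, relation, object) triple."""
    s, o = rng.sample(ENTITIES, 2)
    return (s, rng.choice(RELATIONS), o)

def render_text(triple):
    """Stand-in for an LLM verbalizing the sampled structure."""
    s, r, o = triple
    return f"{s} {r.replace('_', ' ')} {o}."

rng = random.Random(0)
pairs = [(render_text(t), t) for t in (sample_structure(rng) for _ in range(3))]
```

Each pair couples a text with the exact structure it was rendered from, so the labels are correct by construction even though no extractor was ever run.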