Showing 1 - 10 of 9,902 for search: '"Peyre"'
Author:
Olive Marie-Marie, Angot Jean-Luc, Binot Aurélie, Desclaux Alice, Dombreval Loïc, Lefrançois Thierry, Lury Antoine, Paul Mathilde, Peyre Marisa, Simard Frédéric, Weinbach Jérôme, Roger François
Published in:
Natures Sciences Sociétés, Vol 30, Iss 1, Pp 72-81 (2022)
In March 2021, Montpellier Université d'excellence (MUSE) and Agropolis International brought together decision-makers, operational actors, representatives of international organizations, and scientists to share their experiences with the approaches …
External link:
https://doaj.org/article/75bc06a6250e4a3e832c43c8f9143790
Author:
Nicolas Kerckhove, Noémie Delage, Célian Bertin, Emmanuelle Kuhn, Nathalie Cantagrel, Caroline Vigneau, Jessica Delorme, Céline Lambert, Bruno Pereira, Chouki Chenaf, Nicolas Authier, Poma Network, Debbah Abdelouahab, Peyre Alexandre, Simon Anna, Defeuillet Catherine, Wiart Catherine, Sureau Christophe, Vulser Cristofini Claire, Bouhassira Didier, Touchard Emmanuelle, Collin Elisabeth, Serra Eric, Perez-Varlan Evelyne, Mohy Frédérique, Peyriere Hélène, Le Borgne Jean-Marie, Poinsignon Jean Paul, Micallef Joëlle, Dy Lénaïg, Amilhaud Marlène, Venard Maria, Dorsner-Binard Marie, Berrier Oui Marie, Martial Maud, Feuillet Maryline, De Rijk Pablo, Ginies Patrick, Kieffert Patrick, Giraud Pierric, Aerts Raluca, Le Boisselier Reynald, Cauchin Sonia, Pouplin Sophie, Corand Virginie, Perier Yannick, Poujol Yves
Published in:
Frontiers in Pharmacology, Vol 13 (2022)
Public health issues related to chronic pain management and the risks of opioid misuse and abuse remain a challenge for practitioners. Data on the prevalence of disorders related to the use of prescribed opioids in patients suffering from chronic pain …
External link:
https://doaj.org/article/e885e6d544a7417d915c608abbf9b87f
Author:
Sander, Michael E., Peyré, Gabriel
Causal Transformers are trained to predict the next token for a given context. While it is widely accepted that self-attention is crucial for encoding the causal structure of sequences, the precise underlying mechanism behind this in-context autoregressive … (a minimal causal self-attention sketch follows this entry)
External link:
http://arxiv.org/abs/2410.03011
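
Since the snippet above concerns causal self-attention, a minimal NumPy sketch of a single masked attention head is given below. All names and shapes (causal_self_attention, Wq, Wk, Wv, d) are illustrative assumptions for this listing, not the construction analyzed in the paper.

    # Minimal single-head causal self-attention (illustrative sketch only).
    import numpy as np

    def causal_self_attention(X, Wq, Wk, Wv):
        # X: (T, d) token embeddings; Wq, Wk, Wv: (d, d) projections (assumed shapes).
        T, d = X.shape
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(d)                     # (T, T) attention logits
        mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # strictly upper triangle
        scores[mask] = -np.inf                            # token t sees only tokens <= t
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)                # row-wise softmax
        return w @ V                                      # one output per prefix

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 8))
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    out = causal_self_attention(X, Wq, Wk, Wv)            # shape (5, 8)

Because of the mask, row t of the output depends only on tokens 1..t, which is the causal, per-prefix mapping the snippet alludes to.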
Transformers are deep architectures that define "in-context mappings" which enable predicting new tokens based on a given set of tokens (such as a prompt in NLP applications or a set of patches for a vision transformer). In this work, we study in particular …
External link:
http://arxiv.org/abs/2408.01367
Author:
Brion, Michel, Peyre, Emmanuel
Published in:
Comptes Rendus. Mathématique, Vol 358, Iss 6, Pp 713-719 (2020)
We say that a smooth algebraic group $G$ over a field $k$ is very special if for any field extension $K/k$, every $G_K$-homogeneous $K$-variety has a $K$-rational point. It is known that every split solvable linear algebraic group is very special. … (a symbolic restatement of this definition follows this entry)
External link:
https://doaj.org/article/13a3ac7e0c4a4b7e84781d296c83cd8b
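
The definition quoted in the snippet above can be restated symbolically; the LaTeX fragment below is a direct transcription of that prose (notation chosen here for illustration).

    % "Very special" (transcribed from the prose definition above):
    \[
      G \text{ is very special over } k
      \iff
      \forall\, K/k,\ \forall\, X \text{ a } G_K\text{-homogeneous } K\text{-variety}:\
      X(K) \neq \emptyset .
    \]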
Conservation laws are well-established in the context of Euclidean gradient flow dynamics, notably for linear or ReLU neural network training. Yet, their existence and principles for non-Euclidean geometries and momentum-based dynamics remain largely … (a classical worked example follows this entry)
External link:
http://arxiv.org/abs/2405.12888
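
As a concrete instance of the conservation laws this snippet refers to, here is the classical balancedness invariant of a scalar two-layer linear model under Euclidean gradient flow; this is a standard textbook example, not taken from the paper.

    % Standard example: f(x) = u v x trained on a loss L(uv) by gradient flow.
    \[
      \dot{u} = -\,v\,L'(uv), \qquad \dot{v} = -\,u\,L'(uv)
      \quad\Longrightarrow\quad
      \frac{d}{dt}\bigl(u^2 - v^2\bigr) = 2u\dot{u} - 2v\dot{v}
      = -2uv\,L'(uv) + 2uv\,L'(uv) = 0 .
    \]

The question raised by the snippet is what replaces such invariants once the Euclidean geometry or the first-order dynamics is changed.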
Published in:
Frontiers in Psychology, Vol 11 (2021)
This study aimed to investigate how visual–spatial ability predicted academic achievement through arithmetic and reading abilities. Four hundred and ninety-nine Chinese children aged from 10.1 to 11.2 years were recruited and assessed on visual–spatial …
External link:
https://doaj.org/article/6a96e621c4624fc2beb68de2aca0bd15
We study the convergence of gradient flow for the training of deep neural networks. While Residual Neural Networks are a popular example of very deep architectures, their training constitutes a challenging optimization problem, due notably to the non-convexity … (the standard residual recursion is recalled after this entry)
External link:
http://arxiv.org/abs/2403.12887
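
For background on the architecture named in this snippet, the standard residual recursion and its depth limit are recalled below (textbook material, not the paper's specific model).

    % A residual network with L blocks (standard formulation):
    \[
      x_{k+1} = x_k + \tfrac{1}{L}\, f(x_k, \theta_k), \qquad k = 0, \dots, L-1,
    \]
    % i.e. an explicit Euler step for the neural ODE
    \[
      \frac{dx}{dt} = f\bigl(x(t), \theta(t)\bigr), \qquad t \in [0, 1].
    \]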
Bilevel optimization aims to optimize an outer objective function that depends on the solution to an inner optimization problem. It is routinely used in Machine Learning, notably for hyperparameter tuning. The conventional method to compute the so-called … (the standard bilevel formulation is recalled after this entry)
External link:
http://arxiv.org/abs/2402.16748
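
The bilevel setup described in the snippet is conventionally written as below; the gradient identity is the standard implicit-differentiation formula, reproduced for illustration rather than quoted from the paper.

    % Standard bilevel problem (illustrative notation):
    \[
      \min_{\lambda}\ F\bigl(\lambda, \theta^\star(\lambda)\bigr)
      \quad\text{s.t.}\quad
      \theta^\star(\lambda) \in \arg\min_{\theta} G(\lambda, \theta),
    \]
    % and, under regularity assumptions, the implicit function theorem yields
    \[
      \nabla_\lambda F
      = \partial_\lambda F
      - \partial^2_{\lambda\theta} G\,
        \bigl(\partial^2_{\theta\theta} G\bigr)^{-1}
        \partial_\theta F ,
    \]
    % with all derivatives evaluated at (\lambda, \theta^\star(\lambda)).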
Transformers have achieved state-of-the-art performance in language modeling tasks. However, the reasons behind their tremendous success are still unclear. In this paper, towards a better understanding, we train a Transformer model on a simple next-token … (the generic next-token objective is recalled after this entry)
External link:
http://arxiv.org/abs/2402.05787
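
For reference, the generic next-token training objective alluded to in this snippet is the autoregressive cross-entropy (standard formulation, not specific to this paper):

    \[
      \mathcal{L}(\theta)
      = -\sum_{t=1}^{T-1} \log p_\theta\bigl(x_{t+1} \mid x_1, \dots, x_t\bigr).
    \]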