Showing 1 - 10 of 791 for search: '"A, Coquelin"'
The gradients used to train neural networks are typically computed using backpropagation. While it is an efficient way to obtain exact gradients, backpropagation is computationally expensive, hinders parallelization, and is biologically implausible. Forward… (see the sketch below)
External link:
http://arxiv.org/abs/2410.17764
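The record above cuts off mid-sentence at "Forward"; the full abstract is behind the arXiv link. As a rough, generic illustration of the forward-gradient idea the opening contrasts with backpropagation (a sketch, not the paper's method), a gradient can be estimated from a directional derivative along a random tangent; here the directional derivative is approximated by a finite difference rather than an exact forward-mode Jacobian-vector product:

    import numpy as np

    def loss(w):
        # Toy quadratic stand-in for a network's training loss.
        return 0.5 * np.sum(w ** 2)

    def forward_gradient(f, w, eps=1e-6):
        # Sample a random tangent direction.
        v = np.random.randn(*w.shape)
        # Directional derivative of f along v, approximated by a finite
        # difference; true forward mode would compute it exactly via a JVP.
        dd = (f(w + eps * v) - f(w)) / eps
        # (grad . v) * v is an unbiased estimate of grad when v ~ N(0, I).
        return dd * v

    w = np.random.randn(10)
    for _ in range(200):
        w -= 0.1 * forward_gradient(loss, w)
    print(np.linalg.norm(w))  # should shrink toward 0

Because E[(g·v) v] = g for v ~ N(0, I), the estimate is unbiased but noisy; that noise is the usual price for avoiding a backward pass.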
Author:
Coquelin, Daniel, Flügel, Katharina, Weiel, Marie, Kiefer, Nicholas, Öz, Muhammed, Debus, Charlotte, Streit, Achim, Götz, Markus
Communication bottlenecks severely hinder the scalability of distributed neural network training, particularly in high-performance computing (HPC) environments. We introduce AB-training, a novel data-parallel method that leverages low-rank representations… (see the sketch below)
External link:
http://arxiv.org/abs/2405.01067
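The snippet only says that AB-training builds on low-rank representations; the mechanics are in the linked paper. A generic low-rank factorization W ≈ A·B already shows where a data-parallel method can save communication: workers can exchange the small factors instead of the full weight matrix. Sizes below are illustrative:

    import numpy as np

    m, n, r = 512, 512, 16                  # full layer is m x n; factors have rank r
    A = np.random.randn(m, r) / np.sqrt(m)
    B = np.random.randn(r, n) / np.sqrt(r)
    W = A @ B                               # effective weight used in the forward pass

    full = m * n
    factored = m * r + r * n
    print(f"values exchanged per sync: {full} -> {factored} ({factored / full:.1%})")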
Author:
Coquelin, Daniel, Flügel, Katharina, Weiel, Marie, Kiefer, Nicholas, Debus, Charlotte, Streit, Achim, Götz, Markus
This study explores the learning dynamics of neural networks by analyzing the singular value decomposition (SVD) of their weights throughout training. Our investigation reveals that an orthogonal basis within each multidimensional weight's SVD representation… (see the sketch below)
External link:
http://arxiv.org/abs/2401.08505
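As a hedged sketch of the kind of analysis the abstract describes (not the authors' code), one can flatten each multidimensional weight to a matrix, take its SVD at two training checkpoints, and measure how much the span of the leading singular vectors moved:

    import numpy as np

    def leading_basis(w, k=8):
        # Flatten a multidimensional weight to 2D, then take the
        # top-k left singular vectors as an orthogonal basis.
        mat = w.reshape(w.shape[0], -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        return u[:, :k]

    def subspace_overlap(u1, u2):
        # Equals 1.0 when the two k-dimensional subspaces coincide.
        return np.linalg.norm(u1.T @ u2) ** 2 / u1.shape[1]

    w_t0 = np.random.randn(64, 3, 3, 3)     # e.g. a conv kernel at step t
    w_t1 = w_t0 + 0.01 * np.random.randn(*w_t0.shape)   # the same kernel later
    print(subspace_overlap(leading_basis(w_t0), leading_basis(w_t1)))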
Author:
Flügel, Katharina, Coquelin, Daniel, Weiel, Marie, Debus, Charlotte, Streit, Achim, Götz, Markus
Backpropagation has long been criticized for being biologically implausible, relying on concepts that are not viable in natural learning processes. This paper proposes an alternative approach to solve two core issues, i.e., weight transport and update locking… (see the sketch below)
External link:
http://arxiv.org/abs/2304.13372
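The paper's own remedy is behind the link. For orientation only, feedback alignment is one well-known way to remove weight transport: errors are propagated through a fixed random matrix instead of the transpose of the forward weights, so no layer needs to know another layer's parameters. A two-layer sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(32, 16)) * 0.1
    W2 = rng.normal(size=(8, 32)) * 0.1
    B2 = rng.normal(size=(32, 8)) * 0.1   # fixed random feedback, replaces W2.T

    x = rng.normal(size=16)
    target = rng.normal(size=8)

    h = np.tanh(W1 @ x)
    y = W2 @ h
    err = y - target                      # dL/dy for a squared-error loss

    dW2 = np.outer(err, h)                # output layer needs no transport anyway
    # Backpropagation would use W2.T here; feedback alignment uses fixed B2.
    delta_h = (B2 @ err) * (1 - h ** 2)   # tanh'(.) = 1 - tanh(.)^2
    dW1 = np.outer(delta_h, x)

    lr = 0.05
    W2 -= lr * dW2
    W1 -= lr * dW1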
Author:
Taubert, Oskar, Weiel, Marie, Coquelin, Daniel, Farshian, Anis, Debus, Charlotte, Schug, Alexander, Streit, Achim, Götz, Markus
We present Propulate, an evolutionary optimization algorithm and software package for global optimization and in particular hyperparameter search. For efficient use of HPC resources, Propulate omits the synchronization after each generation as done in… (see the sketch below)
External link:
http://arxiv.org/abs/2301.08713
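Without claiming anything about Propulate's actual API, the asynchronous idea in the snippet can be conveyed by a steady-state evolutionary loop: every evaluation feeds straight back into the population, so no worker ever waits at a generation barrier. The objective and parameter names below are made up for illustration:

    import random

    def fitness(params):
        # Hypothetical objective, e.g. validation loss for a hyperparameter set.
        return (params["lr"] - 0.01) ** 2 + (params["momentum"] - 0.9) ** 2

    population = [{"lr": random.uniform(1e-4, 1.0),
                   "momentum": random.uniform(0.0, 1.0)} for _ in range(16)]
    scored = [(fitness(p), p) for p in population]

    for _ in range(500):
        # Steady-state step: breed from the current best, mutate, evaluate,
        # and replace the current worst -- no generation barrier, so in a
        # distributed setting each worker could run this loop independently.
        scored.sort(key=lambda fp: fp[0])
        parent = scored[0][1]
        child = {k: v * random.gauss(1.0, 0.1) for k, v in parent.items()}
        scored[-1] = (fitness(child), child)

    print(scored[0])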
Author:
Mélanie Coquelin, Céline Kopp-Bigault, Canelle Barinoil, Sofian Berrouiguet, Cinzia Guarnaccia
Published in:
Frontiers in Psychology, Vol 15 (2024)
Background: Bereavement following suicide is a risk factor for major depression, post-traumatic stress disorder, suicidal behavior, the emergence of bipolar disorders, and prolonged mourning. The scientific literature agrees on the need to deploy support…
External link:
https://doaj.org/article/43636c5caeda4778b49ad5ddbeddc2cf
Author:
Coquelin, Daniel, Rasti, Behnood, Götz, Markus, Ghamisi, Pedram, Gloaguen, Richard, Streit, Achim
As with any physical instrument, hyperspectral cameras induce different kinds of noise in the acquired data. Therefore, hyperspectral denoising is a crucial step for analyzing hyperspectral images (HSIs). Conventional computational methods rarely use… (see the sketch below)
External link:
http://arxiv.org/abs/2204.06979
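The sentence breaks off before saying what conventional methods rarely use; see the linked abstract. As a neutral baseline for the task itself (not the paper's method), low-rank denoising exploits the strong correlation between spectral bands: reshape the cube to pixels x bands, truncate the SVD, and reshape back:

    import numpy as np

    def lowrank_denoise(cube, rank=5):
        # cube: (height, width, bands) hyperspectral cube.
        h, w, b = cube.shape
        mat = cube.reshape(h * w, b)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        s[rank:] = 0.0                    # drop small components, mostly noise
        return ((u * s) @ vt).reshape(h, w, b)

    # Synthetic rank-1 "scene" plus noise, just to exercise the function.
    signal = np.outer(np.random.rand(32 * 32), np.random.rand(100)).reshape(32, 32, 100)
    noisy = signal + 0.05 * np.random.randn(32, 32, 100)
    denoised = lowrank_denoise(noisy, rank=1)
    print(np.abs(noisy - signal).mean(), "->", np.abs(denoised - signal).mean())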
Author:
Tuo Wei, Yehui Sun, Qiang Cheng, Sumanta Chatterjee, Zachary Traylor, Lindsay T. Johnson, Melissa L. Coquelin, Jialu Wang, Michael J. Torres, Xizhen Lian, Xu Wang, Yufen Xiao, Craig A. Hodges, Daniel J. Siegwart
Published in:
Nature Communications, Vol 14, Iss 1, Pp 1-14 (2023)
Approximately 10% of Cystic Fibrosis (CF) patients, particularly those with CF transmembrane conductance regulator (CFTR) gene nonsense mutations, lack effective treatments. The potential of gene correction therapy through delivery of the CRISPR…
External link:
https://doaj.org/article/220903d538f7496e9c1e3362c609fff3
Author:
Zysman, Maéva, Coquelin, Anaëlle, Le Guen, Nelly, Solomiac, Agnès, Guecamburu, Marina, Erbault, Marie, Blanchard, Elodie, Roche, Nicolas, Morin, Sandrine
Published in:
Respiratory Medicine, Vol 226 (May 2024)
Accelerating Neural Network Training with Distributed Asynchronous and Selective Optimization (DASO)
Author:
Coquelin, Daniel, Debus, Charlotte, Götz, Markus, von der Lehr, Fabrice, Kahn, James, Siggel, Martin, Streit, Achim
Published in:
J Big Data 9, 14 (2022)
With increasing data and model complexities, the time required to train neural networks has become prohibitively large. To address the exponential rise in training time, users are turning to data parallel neural networks (DPNN) to utilize large-scale… (see the sketch below)
External link:
http://arxiv.org/abs/2104.05588
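DASO's actual algorithm is in the paper; the family it belongs to, reducing synchronization frequency in data-parallel training, can be sketched with a local-SGD-style loop in which simulated workers average their parameters only every k steps instead of all-reducing every gradient. Everything below is illustrative:

    import numpy as np

    def local_sgd(workers, steps, sync_every, lr=0.1):
        # workers: list of parameter vectors, one per (simulated) node.
        for t in range(steps):
            for i, w in enumerate(workers):
                grad = w + np.random.randn(*w.shape) * 0.1  # toy noisy gradient
                workers[i] = w - lr * grad
            if (t + 1) % sync_every == 0:
                # One averaging round replaces sync_every per-step all-reduces.
                mean = np.mean(workers, axis=0)
                workers = [mean.copy() for _ in workers]
        return workers

    workers = [np.random.randn(10) for _ in range(4)]
    workers = local_sgd(workers, steps=100, sync_every=4)
    print(np.linalg.norm(workers[0]))

The trade-off is the usual one for relaxed synchronization: fewer communication rounds per step in exchange for some staleness between averaging points.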