Showing 1 - 10 of 14
for search: '"David Lopez-Paz"'
Author:
Mohammad Lotfollahi, Anna Klimovskaia Susmelj, Carlo De Donno, Leon Hetzel, Yuge Ji, Ignacio L Ibarra, Sanjay R Srivatsan, Mohsen Naghipourfar, Riza M Daza, Beth Martin, Jay Shendure, Jose L McFaline‐Figueroa, Pierre Boyeau, F Alexander Wolf, Nafissa Yakubova, Stephan Günnemann, Cole Trapnell, David Lopez‐Paz, Fabian J Theis
Published in:
Molecular Systems Biology, Vol 19, Iss 6, Pp 1-19 (2023)
Abstract Recent advances in multiplexed single-cell transcriptomics experiments facilitate the high-throughput study of drug and genetic perturbations. However, an exhaustive exploration of the combinatorial perturbation space is experimentally unfeasible …
External link:
https://doaj.org/article/418ea36376f0413cb228551ff196f2db
Published in:
Nature Communications, Vol 11, Iss 1, Pp 1-9 (2020)
The discovery of hierarchies in biological processes is central to developmental biology. Here the authors propose Poincaré maps, a method based on hyperbolic geometry to discover continuous hierarchies from pairwise similarities.
External link:
https://doaj.org/article/d26503839ec34d59b49e180b57c33053
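The hyperbolic geometry behind Poincaré maps rests on the Poincaré-ball distance, under which points near the boundary of the unit ball grow exponentially far apart — a natural fit for tree-like hierarchies. A minimal NumPy sketch of that distance (illustrative only, not the authors' implementation):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Distance between two points inside the unit Poincare ball:

    d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    """
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / max(denom, eps))

# Moving a point toward the boundary increases its distance from the
# origin much faster than its Euclidean norm grows.
print(poincare_distance([0.0, 0.0], [0.5, 0.0]))
print(poincare_distance([0.0, 0.0], [0.9, 0.0]))
```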
Published in:
NeuroImage, Vol 220, Pp 117028 (2020)
Identifying causes solely from observations can be particularly challenging when i) the factors under investigation are difficult to manipulate independently from one another and ii) observations are high-dimensional. To address this issue, we introduce …
External link:
https://doaj.org/article/6cfc515d66a14d6c8471e74afebd65b3
Author:
Alex Lamb, Arno Solin, Juho Kannala, Vikas Verma, David Lopez-Paz, Kenji Kawaguchi, Yoshua Bengio
Published in:
Neural Networks. 145:90-106
We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be …
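The ICT objective sketched in the abstract — predictions at mixed-up unlabeled inputs should match the mixup of the individual predictions — can be illustrated in a few lines of NumPy. This is a toy sketch with a linear "model", not the authors' implementation (which trains deep networks against a mean-teacher target):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, w):
    # Toy stand-in for a network: a single linear layer.
    return x @ w

def ict_consistency_loss(x_u, w, alpha=0.5):
    """Interpolation consistency penalty on unlabeled points x_u:
    || f(lam*x_i + (1-lam)*x_j) - (lam*f(x_i) + (1-lam)*f(x_j)) ||^2
    """
    lam = rng.beta(alpha, alpha)            # mixup coefficient
    perm = rng.permutation(len(x_u))        # pair each point with another
    x_mix = lam * x_u + (1 - lam) * x_u[perm]
    pred_of_mix = model(x_mix, w)                                  # f(mix(x_i, x_j))
    mix_of_preds = lam * model(x_u, w) + (1 - lam) * model(x_u[perm], w)
    return np.mean((pred_of_mix - mix_of_preds) ** 2)

x_unlabeled = rng.normal(size=(16, 3))
w = rng.normal(size=(3, 1))
# A linear model is interpolation-consistent by construction, so the
# penalty here is essentially zero; a nonlinear network generally is not,
# and minimizing this term pushes its decision function toward smoothness.
print(ict_consistency_loss(x_unlabeled, w))
```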
Published in:
Advances in Neural Information Processing Systems
Recent work demonstrates that deep neural networks trained using Empirical Risk Minimization (ERM) can generalize under distribution shift, outperforming specialized training algorithms for domain generalization. The goal of this paper is to further …
Author:
Ignacio L. Ibarra, Yuge Ji, Anna Klimovskaia Susmelj, David Lopez-Paz, Nafissa Yakubova, Fabian J. Theis, F. Alexander Wolf, Carlo De Donno, Mohammad Lotfollahi
Recent advances in multiplexed single-cell transcriptomics experiments are facilitating the high-throughput study of drug and genetic perturbations. However, an exhaustive exploration of the combinatorial perturbation space is experimentally unfeasible …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::ceabde4e009ed574284dfd6ace6a95a8
https://doi.org/10.1101/2021.04.14.439903
Published in:
Nature Communications, Vol 11, Iss 1, Pp 1-9 (2020)
The need to understand cell developmental processes spawned a plethora of computational methods for discovering hierarchies from scRNAseq data. However, existing techniques are based on Euclidean geometry, a suboptimal choice for modeling complex cell …
Published in:
NeuroImage, Vol 220, Pp 117028 (2020), doi:10.1016/j.neuroimage.2020.117028
Identifying causes solely from observations can be particularly challenging when i) the factors under investigation are difficult to manipulate independently from one another and ii) observations are high-dimensional. To address …
Published in:
IJCAI
We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be …
Published in:
Braverman Readings in Machine Learning. Key Ideas from Inception to Current State, ISBN: 9783319994918
Learning algorithms for implicit generative models can optimize a variety of criteria that measure how the data distribution differs from the implicit model distribution, including the Wasserstein distance, the Energy distance, and the Maximum Mean D
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::a73143db72512d3d775fd68bc7d2f6c4
https://doi.org/10.1007/978-3-319-99492-5_11
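One of the criteria named in the abstract above, the Maximum Mean Discrepancy, can be estimated from two samples with a Gaussian kernel. A minimal sketch (the simple biased estimator, with illustrative bandwidth and sample sizes — not the chapter's own code):

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * sigma^2)).
    d2 = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2 * a @ b.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy:
    MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 * E[k(x, y)].
    """
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
# Near zero when both samples come from the same distribution,
# clearly positive when the distributions differ.
same = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
shifted = mmd2(rng.normal(size=(200, 2)), rng.normal(3.0, 1.0, size=(200, 2)))
print(f"same={same:.4f} shifted={shifted:.4f}")
```

In a generative-model training loop, `y` would be a batch of model samples and this quantity (or a kernel-free relative such as the Energy distance) the training criterion.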