Showing 1 - 10 of 38
for search: '"Asadulaev, Arip"'
Author:
Asadulaev, Arip, Korst, Rostislav, Korotin, Alexander, Egiazarian, Vage, Filchenkov, Andrey, Burnaev, Evgeny
We propose a novel algorithm for offline reinforcement learning using optimal transport. Typically, in offline reinforcement learning, the data is provided by various experts, and some of them can be sub-optimal. To extract an efficient policy, it is …
External link:
http://arxiv.org/abs/2410.14069
Author:
Persiianov, Mikhail, Asadulaev, Arip, Andreev, Nikita, Starodubcev, Nikita, Baranchuk, Dmitry, Kratsios, Anastasis, Burnaev, Evgeny, Korotin, Alexander
Learning conditional distributions $\pi^*(\cdot|x)$ is a central problem in machine learning, which is typically approached via supervised methods with paired data $(x,y) \sim \pi^*$. However, acquiring paired data samples is often challenging, especially …
External link:
http://arxiv.org/abs/2410.02628
While the continuous Entropic Optimal Transport (EOT) field has been actively developing in recent years, it became evident that the classic EOT problem is prone to different issues, such as sensitivity to outliers and imbalance of classes in the source …
External link:
http://arxiv.org/abs/2303.07988
Author:
Korst, Rostislav, Asadulaev, Arip
We propose a novel framework for generative modelling using hybrid energy-based models. In our method, we combine the interpretable input gradients of a robust classifier with Langevin dynamics for sampling. Using adversarial training, we improve …
External link:
http://arxiv.org/abs/2207.08950
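The sampling mechanism this abstract refers to, Langevin dynamics driven by an energy gradient, can be sketched on a toy quadratic energy. This is a minimal illustration, not the paper's implementation: the `energy_grad` function stands in for the input gradient of a robust classifier, which is an assumption here.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_grad(x):
    # Gradient of a toy quadratic energy E(x) = x**2 / 2, whose
    # stationary distribution is a standard Gaussian. In the paper's
    # setting, this role is played by the input gradient of a robust
    # classifier (an assumption here, not the authors' code).
    return x

def langevin_sample(x0, steps=1000, eps=0.01):
    # Unadjusted Langevin dynamics:
    #   x <- x - (eps / 2) * dE/dx + sqrt(eps) * noise
    x = x0
    for _ in range(steps):
        x = x - 0.5 * eps * energy_grad(x) + np.sqrt(eps) * rng.standard_normal()
    return x

# Draw independent samples; they should approximate N(0, 1).
samples = np.array([langevin_sample(0.0) for _ in range(500)])
```

For this quadratic energy, the empirical mean and standard deviation of `samples` approach 0 and 1 as the step size shrinks.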
Adversarial examples are transferable between different models. In our paper, we propose to use this property for multi-step domain adaptation. In unsupervised domain adaptation settings, we demonstrate that replacing the source domain with adversarial …
External link:
http://arxiv.org/abs/2207.08948
It was shown that adversarial examples improve object recognition. But what about their opposite, easy examples? Easy examples are samples that a machine learning model classifies correctly with high confidence. In our paper, we are making the …
External link:
http://arxiv.org/abs/2207.08940
We present a novel algorithm for domain adaptation using optimal transport. In domain adaptation, the goal is to adapt a classifier trained on source domain samples to the target domain. In our method, we use optimal transport to map target samples …
External link:
http://arxiv.org/abs/2205.15424
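The mapping step this abstract describes can be sketched in the discrete setting with Sinkhorn iterations and a barycentric projection. The 1-D Gaussian domains and the regularization value below are hypothetical stand-ins for illustration, not the paper's data or method details.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stand-ins for the two domains (hypothetical data).
target = rng.normal(0.0, 1.0, size=(64, 1))
source = rng.normal(3.0, 1.0, size=(64, 1))

# Entropic OT plan between the empirical measures via Sinkhorn iterations.
C = np.sum((target[:, None, :] - source[None, :, :]) ** 2, axis=-1)
K = np.exp(-C / 1.0)            # Gibbs kernel, regularization eps = 1.0
a = np.full(64, 1 / 64)         # uniform weights on target points
b = np.full(64, 1 / 64)         # uniform weights on source points
u = np.ones(64)
for _ in range(200):
    v = b / (K.T @ u)
    u = a / (K @ v)
P = u[:, None] * K * v[None, :]  # approximate transport plan

# Barycentric projection: send each target sample to a weighted average
# of source samples, i.e. map the target into the source domain.
mapped = (P @ source) / P.sum(axis=1, keepdims=True)
```

After mapping, the projected target points land near the source distribution (mean ≈ 3 here), which is what lets a source-trained classifier be applied to them.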
We introduce a novel neural network-based algorithm to compute optimal transport (OT) plans for general cost functionals. In contrast to common Euclidean costs, i.e., $\ell^1$ or $\ell^2$, such functionals provide more flexibility and allow using auxiliary …
External link:
http://arxiv.org/abs/2205.15403
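For equal-size uniform point clouds, an OT plan under an arbitrary cost is a permutation, so a linear assignment solver illustrates what "general cost functionals" buy over Euclidean costs. The cosine-dissimilarity cost below is a toy choice for illustration only; the paper's functionals are neural, not this fixed formula.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 2))
y = rng.normal(loc=2.0, size=(16, 2))

# A non-Euclidean cost -- a toy cosine dissimilarity, chosen purely
# for illustration (the paper learns neural cost functionals).
def cost(p, q):
    return 1.0 - (p @ q) / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)

C = np.array([[cost(p, q) for q in y] for p in x])

# For equal-size uniform point clouds, the discrete OT plan is a
# permutation, recovered exactly by a linear assignment solver.
rows, cols = linear_sum_assignment(C)
total_cost = C[rows, cols].sum()
```

The recovered matching is at least as cheap as the identity pairing, and any other cost functional can be swapped in by changing `cost` alone.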
Since the publication of the original Transformer architecture (Vaswani et al. 2017), Transformers have revolutionized the field of Natural Language Processing. This is mainly due to their ability to understand temporal dependencies better than competing RNN-based …
External link:
http://arxiv.org/abs/2010.12698
We propose a novel end-to-end non-minimax algorithm for training optimal transport mappings for the quadratic cost (Wasserstein-2 distance). The algorithm uses input convex neural networks and a cycle-consistency regularization to approximate Wasserstein …
External link:
http://arxiv.org/abs/1909.13082
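The input convex neural networks this abstract mentions can be sketched with a minimal forward pass: nonnegative weights on the hidden path plus convex, nondecreasing activations make the network convex in its input. Toy sizes and random weights below are assumptions for illustration; the paper trains such networks to parameterize Wasserstein-2 potentials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal ICNN parameters (toy sizes, untrained random weights).
W0 = rng.normal(size=(8, 2))
b0 = rng.normal(size=8)
A1 = np.abs(rng.normal(size=(1, 8)))   # z-path weights kept nonnegative
W1 = rng.normal(size=(1, 2))
b1 = rng.normal(size=1)

def icnn(x):
    # ReLU of an affine map is convex; a nonnegative combination of
    # convex functions plus an affine term is convex, so x -> icnn(x)
    # is convex by construction.
    z1 = np.maximum(W0 @ x + b0, 0.0)
    return float(A1 @ z1 + W1 @ x + b1)

# Numerical convexity check along a random segment.
p, q, t = rng.normal(size=2), rng.normal(size=2), 0.3
assert icnn(t * p + (1 - t) * q) <= t * icnn(p) + (1 - t) * icnn(q) + 1e-9
```

Convexity is what makes the gradient of such a network a valid candidate for an optimal transport map under the quadratic cost (by Brenier's theorem).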