Showing 1 - 10 of 92
for search: '"Hajri, Hatem"'
Author:
Schott, Lucas, Delas, Josephine, Hajri, Hatem, Gherbi, Elies, Yaich, Reda, Boulahia-Cuppens, Nora, Cuppens, Frederic, Lamprier, Sylvain
Deep Reinforcement Learning (DRL) is an approach for training autonomous agents across various complex environments. Despite its significant performance in well-known environments, it remains susceptible to minor condition variations, raising concerns…
External link:
http://arxiv.org/abs/2403.00420
Author:
Gonzalez, Martin, Fernandez, Nelson, Tran, Thuy, Gherbi, Elies, Hajri, Hatem, Masmoudi, Nader
A potent class of generative models known as Diffusion Probabilistic Models (DPMs) has become prominent. A forward diffusion process gradually adds noise to data, while a model learns to gradually denoise. Sampling from pre-trained DPMs is obtained by…
External link:
http://arxiv.org/abs/2305.14267
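The forward diffusion process mentioned in the abstract above can be illustrated with a minimal sketch. This is the standard DDPM-style closed-form noising step, not the authors' exact formulation; the linear noise schedule and function names are illustrative assumptions.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng=None):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I).

    Standard DDPM-style forward noising (illustrative, not the paper's code).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    alphas = 1.0 - betas
    alpha_bar = np.prod(alphas[:t])              # cumulative product up to step t
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# Illustrative linear schedule over 1000 steps (a common default assumption).
betas = np.linspace(1e-4, 0.02, 1000)
x0 = np.ones(4)
xT = forward_diffuse(x0, 1000, betas)            # near pure noise at the final step
```

At the final step the cumulative signal coefficient `alpha_bar` is close to zero, so `xT` is essentially Gaussian noise; sampling from a trained DPM reverses this process step by step.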
Certification of neural networks is an important and challenging problem that has been attracting the attention of the machine learning community in recent years. In this paper, we focus on randomized smoothing (RS), which is considered the state-of-the-art…
External link:
http://arxiv.org/abs/2206.10235
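The randomized smoothing (RS) idea referenced in the abstract above can be sketched in a few lines: the smoothed classifier returns the class most often predicted by a base classifier under Gaussian input noise. The base classifier, `sigma`, and sample count below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000, rng=None):
    """Majority-vote prediction of base_classifier over Gaussian perturbations of x.

    Minimal illustration of randomized smoothing (no certification radius here).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    preds = np.array([base_classifier(x + eps) for eps in noise])
    classes, counts = np.unique(preds, return_counts=True)
    return classes[np.argmax(counts)]            # most frequent class wins

# Toy base classifier: sign of the first coordinate (hypothetical example).
clf = lambda x: int(x[0] > 0)
pred = smoothed_predict(clf, np.array([0.8, 0.0]))
```

In the full RS procedure the vote counts also yield a certified $L_2$ radius around `x`; this sketch shows only the smoothed prediction itself.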
We investigate the problems and challenges of evaluating the robustness of Differential Equation-based (DE) networks against synthetic distribution shifts. We propose a novel and simple accuracy metric which can be used to evaluate intrinsic robustness…
External link:
http://arxiv.org/abs/2206.08237
Published in:
Systems & Control Letters 173 (2023)
In this paper we show that neural ODE analogs of recurrent (ODE-RNN) and Long Short-Term Memory (ODE-LSTM) networks can be algorithmically embedded into the class of polynomial systems. This embedding preserves input-output behavior and can suitably…
External link:
http://arxiv.org/abs/2205.11989
Published in:
2022 International Joint Conference on Neural Networks (IJCNN), 2022, pp. 1-8
To improve policy robustness of deep reinforcement learning agents, a line of recent works focuses on producing disturbances of the environment. Existing approaches in the literature to generate meaningful disturbances of the environment are adversarial…
External link:
http://arxiv.org/abs/2104.03154
This paper introduces stochastic sparse adversarial attacks (SSAA), which are simple, fast, and purely noise-based targeted and untargeted attacks on neural network classifiers (NNCs). SSAA offer new examples of sparse (or $L_0$) attacks for which on…
External link:
http://arxiv.org/abs/2011.12423
Neural network classifiers (NNCs) are known to be vulnerable to malicious adversarial perturbations of inputs, including those modifying a small fraction of the input features, known as sparse or $L_0$ attacks. Effective and fast $L_0$ attacks, such as the…
External link:
http://arxiv.org/abs/2007.06032
Author:
Miolane, Nina, Brigant, Alice Le, Mathe, Johan, Hou, Benjamin, Guigui, Nicolas, Thanwerdas, Yann, Heyder, Stefan, Peltre, Olivier, Koep, Niklas, Zaatiti, Hadi, Hajri, Hatem, Cabanes, Yann, Gerald, Thomas, Chauchat, Paul, Shewmake, Christian, Kainz, Bernhard, Donnat, Claire, Holmes, Susan, Pennec, Xavier
We introduce Geomstats, an open-source Python toolbox for computations and statistics on nonlinear manifolds, such as hyperbolic spaces, spaces of symmetric positive definite matrices, Lie groups of transformations, and many more. We provide object-oriented…
External link:
http://arxiv.org/abs/2004.04667
Author:
Harb, Jeanine, Rébéna, Nicolas, Chosidow, Raphaël, Roblin, Grégoire, Potarusov, Roman, Hajri, Hatem
In the realm of autonomous transportation, there have been many initiatives for open-sourcing self-driving car datasets, but far fewer for alternative modes of transportation such as trains. In this paper, we aim to bridge the gap by introducing F…
External link:
http://arxiv.org/abs/2002.05665