Showing 1 - 9 of 9 for the search: '"Ndiour, Ibrahima"'
AI deployed in many real-world use cases should be capable of adapting to novelties encountered after deployment. Here, we consider a challenging, under-explored, and realistic continual adaptation problem: a deployed AI agent is continuously provided…
External link:
http://arxiv.org/abs/2412.09701
This paper presents a fast and principled approach for solving the visual anomaly detection and segmentation problem. In this setup, we have access to only anomaly-free training data and want to detect and identify anomalies of an arbitrary nature on…
External link:
http://arxiv.org/abs/2211.12650
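A minimal numpy sketch of one common way to realize this setup, assuming deep feature maps have already been extracted from anomaly-free images: fit a linear subspace to the normal features and segment by per-location reconstruction error. All names and shapes here are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def fit_subspace(feats, var_keep=0.97):
    """Fit a linear subspace to (N, D) anomaly-free feature vectors via SVD."""
    mean = feats.mean(axis=0)
    _, s, vt = np.linalg.svd(feats - mean, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(explained, var_keep)) + 1
    return mean, vt[:k]                      # top-k principal directions

def anomaly_map(feat_map, mean, components):
    """Per-location feature reconstruction error on an (H, W, D) map."""
    h, w, d = feat_map.shape
    x = feat_map.reshape(-1, d) - mean
    recon = (x @ components.T) @ components  # project onto the normal subspace
    err = np.linalg.norm(x - recon, axis=1)  # high error = anomalous
    return err.reshape(h, w)                 # threshold this map to segment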
This paper presents a fast, principled approach for detecting anomalous and out-of-distribution (OOD) samples in deep neural networks (DNNs). We propose the application of linear statistical dimensionality reduction techniques on the semantic features…
External link:
http://arxiv.org/abs/2203.10422
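As a hedged illustration of "linear statistical dimensionality reduction on the semantic features", here is a self-contained PCA sketch with synthetic stand-ins for DNN features; the paper's exact reduction and scoring rule may differ.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
basis = rng.normal(size=(40, 512))                 # stand-in: true 40-dim subspace
train_feats = rng.normal(size=(1000, 40)) @ basis  # in-distribution "DNN features"
test_feats = rng.normal(size=(10, 512)) * 5.0      # off-subspace OOD stand-ins

# Reduce the semantic features linearly, then score a sample by how poorly
# it is reconstructed from the retained components.
pca = PCA(n_components=0.95).fit(train_feats)      # keep 95% of the variance

def ood_score(feats):
    recon = pca.inverse_transform(pca.transform(feats))
    return np.linalg.norm(feats - recon, axis=1)   # feature reconstruction error

tau = np.percentile(ood_score(train_feats), 99)    # threshold from training data
print(ood_score(test_feats) > tau)                 # True = flagged as OOD
```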
Author:
Ndiour, Ibrahima Jacques
This thesis tackles the visual tracking problem as a target contour estimation problem in the face of corrupted measurements. The major aim is to design robust recursive curve filters for accurate contour-based tracking. The state-space representation…
External link:
http://hdl.handle.net/1853/37283
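For flavor, the textbook building block of recursive state-space filtering, assuming the contour is parameterized by control points tracked under a constant-velocity model; the thesis develops dedicated curve filters, so this per-point Kalman step is only a loose analogy.

```python
import numpy as np

# Constant-velocity model per contour control point: state [px, py, vx, vy].
F = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)          # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)          # only position is measured
Q, R = 0.01 * np.eye(4), 1.0 * np.eye(2)           # process / measurement noise

def kalman_step(x, P, z):
    """One predict/update cycle against a (possibly corrupted) point z."""
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (z - H @ x)                        # correct with measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P
```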
This brief sketches initial progress towards a unified energy-based solution for the semi-supervised visual anomaly detection and localization problem. In this setup, we have access to only anomaly-free training data and want to detect and identify a…
External link:
http://arxiv.org/abs/2105.03270
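One widely used energy score over classifier logits, given as a hedged sketch; the brief's actual energy-based model is not specified in this snippet, so the formulation below is an assumption, not the paper's method.

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Free energy of classifier logits: E(x) = -T * logsumexp(f(x) / T).
    Lower energy suggests in-distribution; threshold high-energy samples
    (or pixels) as anomalous."""
    m = logits.max(axis=-1, keepdims=True)          # numerically stable logsumexp
    lse = np.log(np.exp((logits - m) / T).sum(axis=-1)) + m.squeeze(-1) / T
    return -T * lse
```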
This paper presents a principled approach for detecting out-of-distribution (OOD) samples in deep neural networks (DNNs). Modeling probability distributions on deep features has recently emerged as an effective, yet computationally cheap method to detect…
External link:
http://arxiv.org/abs/2012.04250
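"Modeling probability distributions on deep features" is often instantiated as a Gaussian fit scored by Mahalanobis distance; a minimal sketch of that pattern follows, with illustrative names rather than the paper's implementation.

```python
import numpy as np

def fit_gaussian(feats):
    """Fit one multivariate Gaussian to (N, D) in-distribution deep features."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)                  # mean and precision matrix

def mahalanobis_sq(feats, mu, prec):
    """Squared Mahalanobis distance to the fit; large distance = likely OOD."""
    d = feats - mu
    return np.einsum('nd,de,ne->n', d, prec, d)
```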
Data poisoning attacks compromise the integrity of machine-learning models by introducing malicious training samples to influence the results during test time. In this work, we investigate backdoor data poisoning attacks on deep neural networks (DNNs)…
External link:
http://arxiv.org/abs/1912.01206
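To make the attack model concrete, a BadNets-style poisoning sketch: stamp a trigger on a fraction of training images and flip their labels to the attacker's target class. The trigger shape, poisoning rate, and image layout (N, H, W arrays in [0, 1]) are assumptions for illustration, not necessarily the setting the paper studies.

```python
import numpy as np

def poison(images, labels, target=0, rate=0.05, seed=0):
    """Stamp a small trigger patch on a random subset of training images
    and relabel them so the model learns: trigger -> target class."""
    rng = np.random.default_rng(seed)
    imgs, labs = images.copy(), labels.copy()
    idx = rng.choice(len(imgs), size=int(rate * len(imgs)), replace=False)
    imgs[idx, -3:, -3:] = 1.0      # 3x3 white square, bottom-right corner
    labs[idx] = target             # flipped labels carry the backdoor
    return imgs, labs
```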
We present a principled approach for detecting out-of-distribution (OOD) and adversarial samples in deep neural networks. Our approach consists in modeling the outputs of the various layers (deep features) with parametric probability distributions on…
External link:
http://arxiv.org/abs/1909.11786
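A hedged sketch of the per-layer idea: fit a parametric distribution (here a diagonal Gaussian, which is an assumption) to each layer's features on in-distribution data, then aggregate the per-layer log-likelihoods into a single detection score.

```python
import numpy as np

def diag_gauss_loglik(feats, mu, var):
    """Log-likelihood of (N, D) features under a diagonal Gaussian."""
    return -0.5 * np.sum((feats - mu) ** 2 / var + np.log(2 * np.pi * var), axis=1)

# Stand-ins for features collected from two layers on in-distribution data.
rng = np.random.default_rng(1)
train_layers = [rng.normal(size=(500, 64)), rng.normal(size=(500, 128))]
models = [(f.mean(axis=0), f.var(axis=0) + 1e-6) for f in train_layers]

def detection_score(per_layer_feats):
    """Sum per-layer log-likelihoods; low totals suggest OOD or adversarial
    inputs."""
    return sum(diag_gauss_loglik(f, mu, var)
               for f, (mu, var) in zip(per_layer_feats, models))
```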
Academic article
This result cannot be displayed to users who are not logged in; log in to view it.