Showing 1 - 6 of 6 for search: '"Laugros, Alfred"'
Synthetic corruptions gathered into a benchmark are frequently used to measure neural network robustness to distribution shifts. However, robustness to synthetic corruption benchmarks is not always predictive of robustness to distribution shifts enco…
External link:
http://arxiv.org/abs/2107.12052
Neural Networks are sensitive to various corruptions that usually occur in real-world applications such as blurs, noises, low-lighting conditions, etc. To estimate the robustness of neural networks to these common corruptions, we generally use a grou…
External link:
http://arxiv.org/abs/2105.12357
Despite their performance, Artificial Neural Networks are not reliable enough for most of industrial applications. They are sensitive to noises, rotations, blurs and adversarial examples. There is a need to build defenses that protect against a wide…
External link:
http://arxiv.org/abs/2008.08384
Neural Networks have been shown to be sensitive to common perturbations such as blur, Gaussian noise, rotations, etc. They are also vulnerable to some artificial malicious corruptions called adversarial examples. The adversarial examples study has re…
External link:
http://arxiv.org/abs/1909.02436
Author:
Laugros, Alfred
The unprecedented high performances of artificial neural networks in various computer vision tasks have drawn the interest of both academic and industrial actors. Indeed, neural networks have shown promising results when trying to detect tumors on x…
External link:
https://explore.openaire.eu/search/publication?articleId=od_______166::a33eeb37d3509e1ff388eb4f31735fd3
https://theses.hal.science/tel-03702340
Published in:
ICCV 2019 - International Conference on Computer Vision, Oct 2019, Séoul, South Korea
ICCV Workshops
HAL
Neural Networks have been shown to be sensitive to common perturbations such as blur, Gaussian noise, rotations, etc. They are also vulnerable to some artificial malicious corruptions called adversarial examples. The adversarial examples study has re…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e5f4f348186293d8dca0f56dfcdf5686
https://hal.archives-ouvertes.fr/hal-02370774/document