Showing 1 - 10 of 45
for the query: '"Laugros, A"'
Synthetic corruptions gathered into a benchmark are frequently used to measure neural network robustness to distribution shifts. However, robustness to synthetic corruption benchmarks is not always predictive of robustness to distribution shifts encountered…
External link:
http://arxiv.org/abs/2107.12052
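This first record describes the standard evaluation protocol behind synthetic corruption benchmarks such as ImageNet-C: run a trained classifier over corrupted copies of the test set and compare against clean accuracy. A minimal PyTorch sketch of that protocol follows; the model, the data loaders, and the name corrupted_loaders are illustrative assumptions, not artifacts of the paper.

```python
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    """Top-1 accuracy of `model` over a DataLoader of (image, label) batches."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1).cpu()
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

def corruption_robustness(model, clean_loader, corrupted_loaders):
    """Report the accuracy drop on each corruption relative to clean data.

    `corrupted_loaders` maps a corruption name (e.g. "gaussian_noise") to a
    DataLoader serving corrupted versions of the same test set.
    """
    clean_acc = accuracy(model, clean_loader)
    report = {}
    for name, loader in corrupted_loaders.items():
        acc = accuracy(model, loader)
        report[name] = {"accuracy": acc, "drop": clean_acc - acc}
    return clean_acc, report
```

The paper's point can be read directly off such a report: a small `drop` on every synthetic corruption does not guarantee a small drop on the natural distribution shifts a deployed system actually meets.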
Neural Networks are sensitive to various corruptions that usually occur in real-world applications such as blurs, noises, low-lighting conditions, etc. To estimate the robustness of neural networks to these common corruptions, we generally use a group…
External link:
http://arxiv.org/abs/2105.12357
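The corruptions this abstract lists (blurs, noise, low lighting) are straightforward to simulate. Below is a rough NumPy sketch of three of them; these are simplified stand-ins for illustration, not the exact implementations used by any benchmark.

```python
import numpy as np

def gaussian_noise(img, sigma=0.08):
    """Additive Gaussian noise; `img` is a float array in [0, 1], shape (H, W, C)."""
    noisy = img + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def box_blur(img, k=5):
    """Crude blur: average over a k x k window, applied per channel."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def low_light(img, factor=0.3):
    """Simulate low-lighting conditions by scaling brightness down."""
    return img * factor
```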
Despite their performance, Artificial Neural Networks are not reliable enough for most industrial applications. They are sensitive to noise, rotations, blurs and adversarial examples. There is a need to build defenses that protect against a wide…
External link:
http://arxiv.org/abs/2008.08384
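One generic way to build such a defense is to expose the network to random corruptions during training so it learns invariance to them. A hedged sketch of a single training step follows; it illustrates plain corruption augmentation, not the specific defense proposed in the paper above.

```python
import random
import torch
import torch.nn.functional as F

def train_step(model, optimizer, x, y, corruptions):
    """One training step that randomly corrupts half of each batch.

    `corruptions` is a list of callables mapping an image batch to a corrupted
    image batch (e.g. the noise/blur/low-light functions sketched earlier).
    Labels are unchanged: corruptions are assumed to preserve the class.
    """
    half = x.size(0) // 2
    corrupt = random.choice(corruptions)
    x = torch.cat([x[:half], corrupt(x[half:])], dim=0)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```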
Neural Networks have been shown to be sensitive to common perturbations such as blur, Gaussian noise, rotations, etc. They are also vulnerable to artificial malicious corruptions called adversarial examples. The study of adversarial examples has re…
External link:
http://arxiv.org/abs/1909.02436
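The adversarial examples mentioned here are inputs altered by a small, deliberately crafted perturbation that flips the network's prediction. The classic construction is the Fast Gradient Sign Method (Goodfellow et al., 2015); a self-contained PyTorch sketch follows (the eps of 8/255 is a conventional choice, not taken from this paper).

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method.

    Returns an adversarial copy of `x`: each pixel is pushed by `eps` in the
    direction that increases the classification loss on the true labels `y`.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```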
Author:
Laugros, Alfred
The unprecedentedly high performance of artificial neural networks in various computer vision tasks has drawn the interest of both academic and industrial actors. Indeed, neural networks have shown promising results when trying to detect tumors on x-rays…
External link:
https://explore.openaire.eu/search/publication?articleId=od_______166::a33eeb37d3509e1ff388eb4f31735fd3
https://theses.hal.science/tel-03702340
Published in:
2021 IEEE International Conference on Image Processing (ICIP).
Neural Networks are sensitive to various corruptions that usually occur in real-world applications such as blurs, noises, low-lighting conditions, etc. To estimate the robustness of neural networks to these common corruptions, we generally use a group…
Published in:
ICCV 2019 - International Conference on Computer Vision, Oct 2019, Seoul, South Korea
ICCV Workshops
HAL
Neural Networks have been shown to be sensitive to common perturbations such as blur, Gaussian noise, rotations, etc. They are also vulnerable to artificial malicious corruptions called adversarial examples. The study of adversarial examples has re…
Externí odkaz:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e5f4f348186293d8dca0f56dfcdf5686
https://hal.archives-ouvertes.fr/hal-02370774/document
Published in:
ECCV 2020 RLQ Workshop
ECCV 2020 - 16th European Conference on Computer Vision, Aug 2020, Glasgow, United Kingdom
HAL
Computer Vision – ECCV 2020 Workshops ISBN: 9783030682378
ECCV Workshops (5)
Despite their performance, Artificial Neural Networks are not reliable enough for most industrial applications. They are sensitive to noise, rotations, blurs and adversarial examples. There is a need to build defenses that…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::da0039ee423a1d39ac68166daefb8457
https://hal.archives-ouvertes.fr/hal-02925252/document
Academic article
This result cannot be displayed to users who are not logged in. To view it, please log in.