Showing 1 - 7 of 7 for the search: '"René Raab"'
Author:
Matthias Zuerl, Philip Stoll, Ingrid Brehm, Jonas Sueskind, René Raab, Jan Petermann, Dario Zanca, Ralph Simon, Lorenzo von Fersen, Bjoern Eskofier
Published in:
Ecological Informatics, Vol 83, 102840 (2024)
The welfare of animals under human care is often assessed by observing behaviours indicative of stress or discomfort, such as stereotypical behaviour (SB), which often shows as repetitive, invariant pacing. Traditional behaviour monitoring methods …
External link:
https://doaj.org/article/3aab6734b80346bbadff98b5a74f66a4
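The abstract above describes stereotypical behaviour as repetitive, invariant pacing. As a purely illustrative sketch, and not the method of the paper above, one simple way to flag such repetitiveness in position-tracking data is to check how much of the time an animal spends in its few most-visited locations; the grid size, `top_k` value, and function name below are hypothetical choices made only for this example.

```python
import numpy as np

def pacing_score(xy, cell_size=0.5, top_k=4):
    """Illustrative-only heuristic: fraction of time spent in the top_k
    most-visited grid cells of an enclosure.  High values can indicate
    repetitive, location-bound movement; this is NOT the detection
    method of the paper listed above."""
    cells = np.floor(np.asarray(xy) / cell_size).astype(int)   # discretise positions into grid cells
    _, counts = np.unique(cells, axis=0, return_counts=True)   # visit count per cell
    counts = np.sort(counts)[::-1]
    return counts[:top_k].sum() / counts.sum()

# Toy trajectories: back-and-forth pacing vs. random roaming.
t = np.linspace(0, 20 * np.pi, 2000)
pacing = np.column_stack([2 + 2 * np.sin(t), np.full_like(t, 1.0)])   # 1-D back-and-forth path
roaming = np.random.default_rng(0).uniform(0, 10, size=(2000, 2))     # uniform wandering

print(f"pacing  score: {pacing_score(pacing):.2f}")   # concentrated in a few cells -> higher score
print(f"roaming score: {pacing_score(roaming):.2f}")  # spread over many cells -> lower score
```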
Author:
Matthias Zuerl, Philip Stoll, Ingrid Brehm, René Raab, Dario Zanca, Samira Kabri, Johanna Happold, Heiko Nille, Katharina Prechtel, Sophie Wuensch, Marie Krause, Stefan Seegerer, Lorenzo von Fersen, Bjoern Eskofier
Published in:
Animals, Vol 12, Iss 6, p 692 (2022)
The monitoring of animals under human care is a crucial tool for biologists and zookeepers to keep track of the animals’ physical and psychological health. Additionally, it enables the analysis of observed behavioral changes and helps to unravel …
External link:
https://doaj.org/article/5a854963b939435d93f0f6dd7667960b
Patient-centered health care information systems (PHSs) on peer-to-peer (P2P) networks (e.g., decentralized personal health records) enable storing data locally at the edge to enhance data sovereignty and resilience to single points of failure. …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::3b57667e0fe21a75dfd45874d62ce5b3
https://opus4.kobv.de/opus4-fau/files/22833/ijerph-20-05378.pdf
Author:
René Raab, Daniel Tenbrinck, Bjoern M. Eskofier, Dario Zanca, An Nguyen, Leo Schwinn, Martin Burger
Published in:
IJCNN
The vulnerability of deep neural networks to small and even imperceptible perturbations has become a central topic in deep learning research. Although several sophisticated defense mechanisms have been introduced, most were later shown to be ineffective. …
Progress in making neural networks more robust against adversarial attacks is mostly marginal, despite the great efforts of the research community. Moreover, the robustness evaluation is often imprecise, making it challenging to identify promising approaches. …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::fd053a83dfd777890a795e0b4ca7fbbe
http://arxiv.org/abs/2105.10304
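The two abstract snippets above concern the sensitivity of deep networks to imperceptible input perturbations and the difficulty of evaluating robustness. As a generic, minimal illustration of such a perturbation, and not the attack or evaluation method proposed in these papers, the sketch below applies the well-known fast gradient sign method (FGSM); the model, data, and epsilon are placeholders.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Classic FGSM: one signed-gradient step of size epsilon.
    Shown only as a generic illustration of 'small input perturbations';
    it is not the method of the papers listed above."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()   # step in the loss-increasing direction
        x_adv = x_adv.clamp(0.0, 1.0)                 # keep inputs in a valid image range
    return x_adv.detach()

# Placeholder model and data, just to make the sketch runnable.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)            # fake batch of "images"
y = torch.randint(0, 10, (8,))          # fake labels
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())          # perturbation is bounded by epsilon
```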
Author:
René Raab, K. Driessens
Published in:
BNAIC 2019
Scopus-Elsevier
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::01f798d3c58115f47dfa1d3eb461c8c9
https://cris.maastrichtuniversity.nl/en/publications/b22745db-796c-4883-a316-a49cc478269c
Published in:
Scale Space and Variational Methods in Computer Vision (SSVM 2021), Lecture Notes in Computer Science, ISBN: 9783030755485
Despite the large success of deep neural networks (DNNs) in recent years, most neural networks still lack mathematical guarantees in terms of stability. For instance, DNNs are vulnerable to small or even imperceptible input perturbations, so-called adversarial examples. …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::a1912ff03c3f8e32ac884ddaf8832876
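The last abstract points to the lack of stability guarantees for DNNs. As a rough, hypothetical illustration of what sensitivity to small input perturbations means in practice, and not the analysis carried out in the paper above, the sketch below probes a toy network with random perturbations of fixed norm and reports the largest observed output-to-input change ratio.

```python
import torch
import torch.nn as nn

def local_sensitivity(model, x, radius=1e-3, n_samples=100):
    """Crude empirical stability probe (illustration only): the largest
    observed ratio ||f(x + d) - f(x)|| / ||d|| over random perturbations d
    with ||d|| = radius.  A large value suggests the network reacts
    strongly to tiny input changes."""
    with torch.no_grad():
        fx = model(x)
        worst = 0.0
        for _ in range(n_samples):
            d = torch.randn_like(x)
            d = radius * d / d.norm()                      # random direction, fixed length
            ratio = (model(x + d) - fx).norm() / radius    # output change per unit input change
            worst = max(worst, ratio.item())
    return worst

# Placeholder network and input, just to make the sketch self-contained.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(1, 20)
print(f"estimated local sensitivity: {local_sensitivity(model, x):.2f}")
```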