Showing 1 - 10 of 10
for search: '"Serin Varghese"'
Author:
Yasin Bayzidi, Alen Smajic, Fabian Hüger, Ruby Moritz, Serin Varghese, Peter Schlicht, Alois Knoll
Published in:
2022 IEEE Intelligent Vehicles Symposium (IV).
Published in:
Deep Neural Networks and Data for Automated Driving ISBN: 9783031012327
Modern deep neural networks (DNNs) are achieving state-of-the-art results due to their capability to learn a faithful representation of the data they are trained on. In this chapter, we address two insufficiencies of DNNs, namely, the lack of robustness …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::580e34fc37861acbf3e145a51b7fa8d3
https://doi.org/10.1007/978-3-031-01233-4_15
Author:
Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, Patrick Feifel, Tim Fingscheidt, Sujan Sai Gannamaneni, Seyed Eghbal Ghobadi, Ahmed Hammam, Anselm Haselhoff, Felix Hauser, Christian Heinzemann, Marco Hoffmann, Nikhil Kapoor, Falk Kappel, Marvin Klingner, Jan Kronenberger, Fabian Küppers, Jonas Löhdefink, Michael Mlynarski, Michael Mock, Firas Mualla, Svetlana Pavlitskaya, Maximilian Poretschkin, Alexander Pohl, Varun Ravi-Kumar, Julia Rosenzweig, Matthias Rottmann, Stefan Rüping, Timo Sämann, Jan David Schneider, Elena Schulz, Gesina Schwalbe, Joachim Sicking, Toshika Srivastava, Serin Varghese, Michael Weber, Sebastian Wirkert, Tim Wirtz, Matthias Woehrle
Published in:
Deep Neural Networks and Data for Automated Driving ISBN: 9783031012327
Fingscheidt, Gottschalk et al. (Eds.): Deep Neural Networks and Data for Automated Driving. Robustness, Uncertainty Quantification, and Insights Towards Safety
The use of deep neural networks (DNNs) in safety-critical applications like mobile health and autonomous driving is challenging due to numerous model-inherent shortcomings. These shortcomings are diverse and range from a lack of generalization over i…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::a97320425a5b7fac20b0272cdc9fe19b
https://doi.org/10.1007/978-3-031-01233-4_1
Published in:
IJCNN
Instance segmentation with neural networks is an essential task in environment perception. In many works, it has been observed that neural networks can predict false positive instances with high confidence values and true positives with low ones. Thus, …
Author:
Marvin Klingner, Andreas Bär, Sharat Gujamagadi, Serin Varghese, Jan David Schneider, Fabian Hüger, Nikhil Kapoor, Kira Maag, Peter Schlicht, Tim Fingscheidt
Published in:
CVPR Workshops
Deep neural networks (DNNs) for highly automated driving are often trained on a large and diverse dataset, and evaluation metrics are usually reported on a per-frame basis. However, when evaluated on video sequences, the predictions are often unstable …
Author:
Nikhil Kapoor, Andreas Bär, Peter Schlicht, Tim Fingscheidt, Jonas Löhdefink, Serin Varghese, Fabian Hüger
Enabling autonomous driving (AD) can be considered one of the biggest challenges in today's technology. AD is a complex task accomplished by several functionalities, with environment perception being one of its core functions. Environment perception …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::466b67779210ddecc054d72751b21a2b
Author:
Serin Varghese, Jonas Löhdefink, Nikhil Kapoor, Nico M. Schmidt, Chun Yuan, Tim Fingscheidt, Roland Zimmerman, Peter Schlicht, Fabian Hüger
Published in:
CSCS
Deep neural networks are often not robust to semantically irrelevant changes in the input. In this work we address the issue of robustness of state-of-the-art deep convolutional neural networks (CNNs) against commonly occurring distortions in the input …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::5af9a622335a631fea0a61670758a14c
http://arxiv.org/abs/2012.01386
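The abstract above concerns CNN robustness to common input distortions. As a minimal, hypothetical sketch of how such a robustness evaluation is typically set up (the nearest-centroid "model", the noise scale, and all names here are illustrative assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(x, centroids):
    """Toy nearest-centroid classifier standing in for a trained CNN."""
    # Squared distance of each sample to each class centroid.
    d = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return np.argmin(d, axis=1)

def accuracy_under_noise(x, y, centroids, sigma):
    """Accuracy on inputs corrupted by additive Gaussian noise of scale sigma."""
    x_noisy = x + rng.normal(0.0, sigma, size=x.shape)
    return float((predict(x_noisy, centroids) == y).mean())

# Two well-separated classes of synthetic "images" (2-D points for brevity).
centroids = np.array([[0.0, 0.0], [4.0, 4.0]])
x = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(4.0, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clean = accuracy_under_noise(x, y, centroids, sigma=0.0)
heavy = accuracy_under_noise(x, y, centroids, sigma=4.0)
print(clean, heavy)  # accuracy degrades as the distortion grows
```

Sweeping `sigma` over a range of severities yields the kind of clean-vs-corrupted accuracy curve commonly reported in robustness studies.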
Published in:
CVPR Workshops
The lack of robustness shown by deep neural networks (DNNs) questions their deployment in safety-critical tasks, such as autonomous driving. We pick up the recently introduced redundant teacher-student frameworks (3 DNNs) and propose in this work a n…
Author:
Jan David Schneider, Fabian Hüger, Andreas Bär, Nico M. Schmidt, Tim Fingscheidt, Yasin Bayzidi, Serin Varghese, Sounak Lahiri, Peter Schlicht, Nikhil Kapoor
Published in:
CVPR Workshops
Commonly used metrics to evaluate semantic segmentation, such as mean intersection over union (mIoU), do not incorporate temporal consistency. A straightforward extension of existing metrics towards evaluating the consistency of segmentation of video sequences …
Author:
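The abstract above refers to mean intersection over union (mIoU), the standard per-frame semantic-segmentation metric. A minimal sketch of its computation (plain NumPy; how absent classes are handled varies between benchmarks):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection over union between two integer label maps."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent in both maps: skip rather than count as 0
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
# class 0: IoU 1/2, class 1: IoU 2/3 -> mIoU = 7/12
print(mean_iou(pred, target, num_classes=2))
```

Because the metric is computed per frame, averaging it over a video says nothing about frame-to-frame consistency of the predictions, which is the gap the paper addresses.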
Serin Varghese, Andreas Bär, Nikhil Kapoor, Peter Schlicht, Tim Fingscheidt, Fabian Hüger, Jan David Schneider
Published in:
IJCNN
Despite recent advancements, deep neural networks are not robust against adversarial perturbations. Many of the proposed adversarial defense approaches use computationally expensive training mechanisms that do not scale to complex real-world tasks such as …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4e0cfc37b449a85a2cc794a48d402c7c
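The last abstract concerns defenses against adversarial perturbations. As a hedged illustration of the basic attack such defenses respond to, here is the fast gradient sign method (FGSM) applied to a logistic-regression stand-in for a DNN (the model and all values are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One FGSM step: move x by eps in the sign of the input gradient of the
    binary cross-entropy loss of a logistic-regression model (w, b)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probability of class 1
    grad_x = (p - y) * w                    # dL/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)

w, b = np.array([1.0, -1.0]), 0.0
x, y = np.array([2.0, 1.0]), 1.0          # correctly classified: score > 0
x_adv = fgsm(x, y, w, b, eps=1.5)
print(x @ w + b, x_adv @ w + b)           # the adversarial score is pushed down
```

The sign step makes the perturbation bounded in the infinity norm by `eps`, which is why FGSM is a standard cheap baseline for both attacking and adversarially training models.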