Showing 1 - 10 of 18 for search: '"Mathias Lechner"'
Author:
Ramin Hasani, Mathias Lechner, Alexander Amini, Lucas Liebenwein, Aaron Ray, Max Tschaikowski, Gerald Teschl, Daniela Rus
Published in:
Hasani, R, Lechner, M, Amini, A, Liebenwein, L, Ray, A, Tschaikowski, M, Teschl, G & Rus, D 2022, 'Closed-form continuous-time neural networks', Nature Machine Intelligence, vol. 4, no. 11, pp. 992-1003. https://doi.org/10.1038/s42256-022-00556-7
Continuous-time neural networks are a class of machine learning systems that can tackle representation learning on spatiotemporal decision-making tasks. These models are typically represented by continuous differential equations. However, their expre…
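The closed-form continuous-time (CfC) idea summarized above replaces the ODE solver of a continuous-time network with an explicit expression in the elapsed time between observations. Below is a minimal NumPy sketch of one CfC-style cell update under that reading; the three tanh heads, layer sizes, and initialization are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_cell(x, u, t, params):
    """One closed-form continuous-time (CfC) style update.

    x : hidden state, shape (hidden,)
    u : input at this step, shape (inputs,)
    t : elapsed time since the previous observation (scalar)
    params : dict of weights (illustrative names, not the paper's code)
    """
    z = np.concatenate([x, u])
    # Three small heads f, g, h acting on the concatenated state/input.
    f = np.tanh(params["Wf"] @ z + params["bf"])   # time-constant head
    g = np.tanh(params["Wg"] @ z + params["bg"])   # "initial" branch
    h = np.tanh(params["Wh"] @ z + params["bh"])   # "final" branch
    # Closed-form interpolation: a time-dependent sigmoid gate blends g and h,
    # so no numerical ODE solver is needed between observations.
    gate = sigmoid(-f * t)
    return gate * g + (1.0 - gate) * h

rng = np.random.default_rng(0)
hidden, inputs = 8, 3
params = {k: rng.normal(0, 0.3, (hidden, hidden + inputs)) for k in ("Wf", "Wg", "Wh")}
params.update({b: np.zeros(hidden) for b in ("bf", "bg", "bh")})
x = np.zeros(hidden)
for t, u in [(0.1, rng.normal(size=inputs)), (0.5, rng.normal(size=inputs))]:
    x = cfc_cell(x, u, t, params)
print(x)
```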
Author:
Makram Chahine, Ramin Hasani, Patrick Kao, Aaron Ray, Ryan Shubert, Mathias Lechner, Alexander Amini, Daniela Rus
Published in:
Science Robotics. 8
Autonomous robots can learn to perform visual navigation tasks from offline human demonstrations and generalize well to online and unseen scenarios within the same environment they have been trained on. It is challenging for these agents to take a st…
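At its core, learning navigation from offline human demonstrations is supervised imitation: a policy regresses the demonstrated control command from the current observation. The sketch below shows that training loop for a toy linear policy on flattened frames; the data, model, and loss are invented for illustration and stand in for the much richer vision-based policies used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical offline demonstration set: flattened camera frames and
# the human's control commands recorded for each frame.
frames = rng.normal(size=(256, 32 * 32))      # observations
commands = rng.normal(size=(256, 2))          # e.g. (steering, throttle)

# Linear behavior-cloning policy trained by mini-batch gradient descent
# on the mean-squared error to the demonstrated commands.
W = np.zeros((frames.shape[1], commands.shape[1]))
lr = 1e-3
for epoch in range(20):
    idx = rng.permutation(len(frames))
    for batch in np.array_split(idx, 8):
        X, y = frames[batch], commands[batch]
        err = X @ W - y
        W -= lr * X.T @ err / len(batch)      # gradient step on 0.5 * MSE

print("training MSE:", float(np.mean((frames @ W - commands) ** 2)))
```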
Published in:
Tools and Algorithms for the Construction and Analysis of Systems ISBN: 9783031308222
Reinforcement learning has received much attention for learning controllers of deterministic systems. We consider a learner-verifier framework for stochastic control systems and survey recent methods that formally guarantee a conjunction of reachabil… (a sketch of this learner-verifier loop follows the links below)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::11538b1574440e1d45449814a5a9df38
https://doi.org/10.1007/978-3-031-30823-9_1
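A learner-verifier framework of the kind surveyed here alternates between a learner that proposes a controller together with a certificate and a verifier that searches for states violating the certificate's conditions, feeding any violations back into training. The Python skeleton below only sketches that control flow under assumed helper functions, with a sampling-based falsifier standing in for a sound verifier; it is not the procedure of any of the surveyed methods.

```python
import numpy as np

def learner_verifier(train_step, violates, sample_states, rounds=50, batch=512):
    """Generic counterexample-guided learner-verifier loop (illustrative).

    train_step(dataset)        -> (controller, certificate) trained on dataset
    violates(ctrl, cert, s)    -> True if state s breaks a certificate condition
    sample_states(n)           -> n candidate states to check
    """
    dataset = list(sample_states(batch))
    for _ in range(rounds):
        controller, certificate = train_step(dataset)
        # "Verifier": here just a sampling-based falsifier standing in for a
        # sound procedure such as interval- or SMT-based checking.
        counterexamples = [s for s in sample_states(batch)
                           if violates(controller, certificate, s)]
        if not counterexamples:
            return controller, certificate      # certificate accepted
        dataset.extend(counterexamples)         # refine on the violations
    raise RuntimeError("no certified controller found within the budget")
```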
Published in:
Computer Graphics Forum. 40:253-264
While convolutional neural networks (CNNs) have found wide adoption as state-of-the-art models for image-related tasks, their predictions are often highly sensitive to small input perturbations, which human vision is robust against. This paper pr…
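The sensitivity referred to above can be demonstrated on a one-layer model: nudging the input by a small amount in the direction of the sign of the loss gradient (an FGSM-style perturbation) shifts the prediction far more than such a tiny per-pixel change should. The toy logistic classifier, data, and epsilon below are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
d = 32 * 32                              # a flattened grayscale "image"
w = rng.normal(0, 1 / np.sqrt(d), d)     # weights of a toy linear classifier
b = 0.0

x = rng.uniform(0, 1, d)                 # clean input
y = 1.0                                  # assume the true label is class 1
p_clean = sigmoid(w @ x + b)

# FGSM-style perturbation: for logistic loss, d(loss)/dx = (p - y) * w,
# so stepping along the sign of that gradient maximally increases the loss
# under an L_inf budget epsilon.
eps = 0.03
x_adv = np.clip(x + eps * np.sign((p_clean - y) * w), 0, 1)
p_adv = sigmoid(w @ x_adv + b)

print(f"clean score {p_clean:.3f} -> perturbed score {p_adv:.3f} "
      f"(max pixel change {np.max(np.abs(x_adv - x)):.3f})")
```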
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence. 35:3787-3795
Formal verification of neural networks is an active topic of research, and recent advances have significantly increased the size of the networks that verification tools can handle. However, most methods are designed for verification of an idealized m…
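One standard building block in neural-network verification is to propagate an input box through the layers and check whether the resulting output bounds already imply the property of interest. The NumPy sketch below does this interval bound propagation for a tiny ReLU network; the weights and the ±epsilon input box are invented, and real tools combine such bounds with much stronger reasoning (and, per the record above, must also model how the network is actually executed).

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Sound interval bounds for W @ x + b when x lies in [lo, hi] elementwise."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp(x, eps, layers):
    """Interval bound propagation through ReLU layers for ||x' - x||_inf <= eps."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:                 # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

rng = np.random.default_rng(2)
layers = [(rng.normal(0, 0.5, (4, 3)), np.zeros(4)),
          (rng.normal(0, 0.5, (2, 4)), np.zeros(2))]
x = np.array([0.2, -0.1, 0.4])
lo, hi = ibp(x, eps=0.05, layers=layers)
# If lo[0] > hi[1], class 0 is provably predicted for every input in the box.
print("output lower bounds:", lo, "upper bounds:", hi)
```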
Author:
Axel Brunnbauer, Luigi Berducci, Andreas Brandstatter, Mathias Lechner, Ramin Hasani, Daniela Rus, Radu Grosu
Published in:
2022 International Conference on Robotics and Automation (ICRA).
World models learn behaviors in a latent imagination space to enhance the sample-efficiency of deep reinforcement learning (RL) algorithms. While learning world models for high-dimensional observations (e.g., pixel inputs) has become practicable on s…
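World-model agents train behaviors on trajectories "imagined" by a learned latent dynamics model rather than on raw environment interaction, which is where the sample-efficiency gain comes from. Below is a toy latent rollout in NumPy; the linear transition, reward head, and random policies are stand-ins for learned components and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
latent_dim, action_dim = 16, 2

# Stand-ins for the learned world model: latent transition and reward head.
A = 0.9 * np.eye(latent_dim) + 0.05 * rng.normal(size=(latent_dim, latent_dim))
B = rng.normal(0, 0.1, (latent_dim, action_dim))
reward_w = rng.normal(0, 0.3, latent_dim)

def imagine(z0, policy, horizon=15):
    """Roll a policy forward purely inside the latent model (no environment)."""
    z, total = z0, 0.0
    for _ in range(horizon):
        a = policy(z)
        z = A @ z + B @ a + 0.01 * rng.normal(size=latent_dim)  # latent step
        total += float(reward_w @ z)                            # predicted reward
    return total

# Evaluate a few random linear policies entirely in imagination; a real agent
# would instead improve its policy using gradients through these rollouts.
z0 = rng.normal(size=latent_dim)
policies = [lambda z, W=rng.normal(0, 0.2, (action_dim, latent_dim)): W @ z
            for _ in range(5)]
returns = [imagine(z0, pi) for pi in policies]
print("imagined returns:", np.round(returns, 2))
```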
Adversarial training (i.e., training on adversarially perturbed input data) is a well-studied method for making neural networks robust to potential adversarial attacks during inference. However, the improved robustness does not come for free but rath… (a sketch of such a training loop follows the links below)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::1b51df681977a74bf82d3e9c145ebe06
http://arxiv.org/abs/2204.07373
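Adversarial training as described in this record augments or replaces each clean batch with inputs perturbed to maximize the loss within a small norm ball, which is also why it costs extra computation and some clean accuracy. The NumPy sketch below shows the inner maximization (a few signed-gradient steps) and the outer minimization for a toy logistic model; the model, data, and hyperparameters are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(4)
n, d = 512, 64
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)   # toy binary labels

w = np.zeros(d)
eps, alpha, steps, lr = 0.1, 0.03, 5, 0.1

for epoch in range(30):
    # Inner maximization: PGD-style L_inf attack on the current model.
    X_adv = X.copy()
    for _ in range(steps):
        grad_x = (sigmoid(X_adv @ w) - y)[:, None] * w           # dLoss/dx
        X_adv = np.clip(X_adv + alpha * np.sign(grad_x), X - eps, X + eps)
    # Outer minimization: ordinary gradient step, but on the perturbed batch.
    grad_w = X_adv.T @ (sigmoid(X_adv @ w) - y) / n
    w -= lr * grad_w

acc_clean = np.mean((sigmoid(X @ w) > 0.5) == y)
print("accuracy on clean data after adversarial training:", acc_clean)
```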
Published in:
Tools and Algorithms for the Construction and Analysis of Systems ISBN: 9783030452360
TACAS (2)
Quantization converts neural networks into low-bit fixed-point computations which can be carried out by efficient integer-only hardware, and is standard practice for the deployment of neural networks on real-time embedded devices. However, like their…
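Quantization of the kind described here maps floating-point tensors to low-bit integers via a scale (and possibly a zero-point) so that inference reduces to integer multiply-accumulates. A minimal symmetric int8 example in NumPy follows; the per-tensor scheme and the toy layer are illustrative assumptions, and the rounding error it exposes is exactly why verifying the idealized real-valued network is not the same as verifying the deployed one.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization: x is approximated by scale * q."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(5)
W = rng.normal(0, 0.5, (4, 8))   # float weights of a toy linear layer
x = rng.normal(0, 1.0, 8)        # float activation vector

Wq, w_scale = quantize_int8(W)
xq, x_scale = quantize_int8(x)

# Integer-only matmul (accumulate in int32), rescaled back to float once.
acc = Wq.astype(np.int32) @ xq.astype(np.int32)
y_quant = acc * (w_scale * x_scale)

y_float = W @ x
print("max abs error vs. float layer:", float(np.max(np.abs(y_quant - y_float))))
```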
Published in:
2021 IEEE International Conference on Robotics and Automation (ICRA)
Adversarial training is an effective method to train deep learning models that are resilient to norm-bounded perturbations, with the cost of nominal performance drop. While adversarial training appears to enhance the robustness and safety of a deep m…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::93318520f9793e6451b74440c464c61e
http://arxiv.org/abs/2103.08187
Author:
Scott A. Smolka, Sophie Gruenbacher, Mathias Lechner, Radu Grosu, Md. Ariful Islam, Jacek Cyranka
Published in:
CDC
We introduce LRT-NG, a set of techniques and an associated toolset that computes a reachtube (an over-approximation of the set of reachable states over a given time horizon) of a nonlinear dynamical system. LRT-NG significantly advances the state-of-the-art… (a box-based reachtube sketch follows the links below)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::ca28aaf2efef8210a234d6ff9bcfe1a6
http://arxiv.org/abs/2012.07458
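A reachtube is a sequence of sets, one per time step, guaranteed to contain every trajectory that starts in the initial set. The sketch below over-approximates such a tube with axis-aligned boxes and interval-arithmetic Euler steps for a linear damped oscillator; LRT-NG's actual enclosures are much tighter and handle nonlinear dynamics, so the dynamics and the box propagation here are illustrative assumptions only.

```python
import numpy as np

def interval_matvec(A, lo, hi):
    """Interval bounds of A @ x for x in the box [lo, hi]."""
    A_pos, A_neg = np.maximum(A, 0), np.minimum(A, 0)
    return A_pos @ lo + A_neg @ hi, A_pos @ hi + A_neg @ lo

def box_reachtube(A, lo, hi, dt=0.01, steps=200):
    """Crude reachtube: propagate an axis-aligned box under Euler steps
    x_{k+1} = x_k + dt * A x_k, keeping sound lower/upper bounds per step."""
    tube = [(lo.copy(), hi.copy())]
    for _ in range(steps):
        dlo, dhi = interval_matvec(A, lo, hi)
        lo, hi = lo + dt * dlo, hi + dt * dhi
        tube.append((lo.copy(), hi.copy()))
    return tube

# Linear damped oscillator as a stand-in for a nonlinear benchmark system.
A = np.array([[0.0, 1.0],
              [-1.0, -0.2]])
lo0 = np.array([0.9, -0.1])
hi0 = np.array([1.1, 0.1])
tube = box_reachtube(A, lo0, hi0)
print("box at t = 2.0:", tube[-1])
```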