Showing 1 - 10 of 132 results for the search '"Kunze, Lars"'
Author:
Sirgabsou, Yandika, Hardin, Benjamin, Leblanc, François, Raili, Efi, Salvini, Pericle, Jackson, David, Jirotka, Marina, Kunze, Lars
This paper addresses the critical issue of psychological safety in the design and operation of autonomous vehicles, which are increasingly integrated with artificial intelligence technologies. While traditional safety standards focus primarily on physical…
External link:
http://arxiv.org/abs/2411.05732
This paper proposes a method for on-demand scenario generation in simulation, grounded in real-world data. Evaluating the behaviour of Autonomous Vehicles (AVs) in both safety-critical and regular scenarios is essential for assessing their robustness…
External link:
http://arxiv.org/abs/2410.13514
This study explores the intersection of neural networks and classical robotics algorithms through the Neural Algorithmic Reasoning (NAR) framework, which allows neural networks to be trained to reason like classical robotics algorithms by learning…
External link:
http://arxiv.org/abs/2410.11031
Transparency in automated systems could be afforded through the provision of intelligible explanations. While transparency is desirable, might it lead to catastrophic outcomes (such as anxiety) that could outweigh its benefits? It is quite unclear how…
External link:
http://arxiv.org/abs/2408.08785
Author:
Omeiza, Daniel, Somaiya, Pratik, Pattinson, Jo-Ann, Ten-Holter, Carolyn, Stilgoe, Jack, Jirotka, Marina, Kunze, Lars
As artificial intelligence (AI) technology advances, ensuring the robustness and safety of AI-driven systems has become paramount. However, varying perceptions of robustness among AI developers create misaligned evaluation metrics, complicating the…
External link:
http://arxiv.org/abs/2408.08584
Author:
Tekkesinoglu, Sule, Kunze, Lars
As machine learning becomes increasingly integral to autonomous decision-making processes involving human interaction, the necessity of comprehending the model's outputs through conversational means increases. Most recently, foundation models are being…
External link:
http://arxiv.org/abs/2407.20990
Published in:
2023 IEEE International Conference on Robotics and Automation (ICRA); 2024 IEEE International Conference on Robotics and Automation (ICRA)
To operate in open-ended environments where humans interact in complex, diverse ways, autonomous robots must learn to predict their behaviour, especially when that behaviour is potentially dangerous to other agents or to the robot. However, reducing…
External link:
http://arxiv.org/abs/2407.10639
Author:
Vardal, Ozan, Hawkins, Richard, Paterson, Colin, Picardi, Chiara, Omeiza, Daniel, Kunze, Lars, Habli, Ibrahim
For machine learning components used as part of autonomous systems (AS) in carrying out critical tasks, it is crucial that assurance of the models can be maintained in the face of post-deployment changes (such as changes in the operating environment or…)
External link:
http://arxiv.org/abs/2406.16220
Author:
Howard, Rhys, Kunze, Lars
In this work we aim to bridge the divide between autonomous embodied systems and causal reasoning. Autonomous embodied systems have come to increasingly interact with humans, and in many cases may pose risks to the physical or mental well-being of the…
External link:
http://arxiv.org/abs/2406.01384
Robot object manipulation in real-world environments is challenging because robot operation must be robust to a range of sensing, estimation, and actuation uncertainties to avoid potentially unsafe and costly mistakes that are a barrier to their adoption…
External link:
http://arxiv.org/abs/2403.14488