Limits of Probabilistic Safety Guarantees when Considering Human Uncertainty
| Author: | Richard M. Murray, Richard Cheng, Joel W. Burdick |
| --- | --- |
| Year of publication: | 2021 |
| Subject: | FOS: Computer and information sciences; Mathematical optimization; Current (mathematics); Computer science; Probabilistic logic; Systems and Control (eess.SY); Electrical Engineering and Systems Science - Systems and Control; Data modeling; Computer Science - Robotics; Order (business); Confidence bounds; FOS: Electrical engineering, electronic engineering, information engineering; Robot; Computer Science - Multiagent Systems; Robotics (cs.RO); Multiagent Systems (cs.MA) |
| Source: | ICRA |
| DOI: | 10.1109/icra48506.2021.9561843 |
| Description: | When autonomous robots interact with humans, such as during autonomous driving, explicit safety guarantees are crucial in order to avoid potentially life-threatening accidents. Many data-driven methods have explored learning probabilistic bounds over human agents' trajectories (i.e. confidence tubes that contain trajectories with probability $\delta$), which can then be used to guarantee safety with probability $1-\delta$. However, almost all existing works consider $\delta \geq 0.001$. The purpose of this paper is to argue that (1) in safety-critical applications, it is necessary to provide safety guarantees with $\delta < 10^{-8}$, and (2) current learning-based methods are ill-equipped to compute accurate confidence bounds at such low $\delta$. Using human driving data (from the highD dataset), as well as synthetically generated data, we show that current uncertainty models use inaccurate distributional assumptions to describe human behavior and/or require infeasible amounts of data to accurately learn confidence bounds for $\delta \leq 10^{-8}$. These two issues result in unreliable confidence bounds, which can have dangerous implications if deployed on safety-critical systems. Comment: ICRA 2021. (A numerical sketch of the sample-complexity argument follows this record.) |
| Database: | OpenAIRE |
| External link: | |
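
The description's core claim is quantitative: certifying a violation probability at $\delta \leq 10^{-8}$ either requires an infeasible number of observed trajectories or rests on tail assumptions that human behavior violates. The following is a minimal illustrative sketch, not taken from the paper; it assumes NumPy/SciPy, and the choice of $\delta = 10^{-8}$ with a Student-t distribution standing in for heavy-tailed human driving deviations is purely for illustration.

```python
# Illustrative sketch (hypothetical, not the paper's code): why learned
# confidence bounds become unreliable at delta <= 1e-8.
import numpy as np
from scipy import stats

delta = 1e-8

# (1) Sample complexity: with zero violations observed in n trials, the
# "rule of three" gives ~3/n as an approximate 95%-confidence upper bound
# on the violation probability, so certifying delta = 1e-8 empirically
# needs on the order of 3e8 independent trajectories.
n_needed = 3.0 / delta
print(f"trajectories needed to certify delta={delta:g}: ~{n_needed:.1e}")

# (2) Distributional mismatch: fit a Gaussian to heavy-tailed data
# (Student-t with 3 degrees of freedom, a stand-in for human driving
# deviations) and compare its implied (1 - delta)-quantile with the true
# one. The Gaussian tail badly underestimates the extreme quantile, so a
# "1 - delta" confidence tube built from it would be far too narrow.
rng = np.random.default_rng(0)
data = stats.t.rvs(df=3, size=100_000, random_state=rng)

mu, sigma = data.mean(), data.std()
gaussian_q = stats.norm.ppf(1 - delta, loc=mu, scale=sigma)
true_q = stats.t.ppf(1 - delta, df=3)
print(f"Gaussian-model (1-delta)-quantile: {gaussian_q:.1f}")
print(f"true heavy-tailed quantile:        {true_q:.1f}")
```

The "rule of three" used above is the standard 95% upper confidence bound of roughly $3/n$ on an event probability when no events are observed in $n$ trials; it is the usual back-of-the-envelope estimate for how much data rare-event certification requires.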