Showing 1 - 10 of 24 for search: '"Koenighofer, Bettina"'
In many Deep Reinforcement Learning (RL) problems, decisions in a trained policy vary in significance for the expected safety and performance of the policy. Since RL policies are very complex, testing efforts should concentrate on states in which the…
External link:
http://arxiv.org/abs/2411.07700
Author:
Andriushchenko, Roman, Bork, Alexander, Budde, Carlos E., Češka, Milan, Grover, Kush, Hahn, Ernst Moritz, Hartmanns, Arnd, Israelsen, Bryant, Jansen, Nils, Jeppson, Joshua, Junges, Sebastian, Köhl, Maximilian A., Könighofer, Bettina, Křetínský, Jan, Meggendorfer, Tobias, Parker, David, Pranger, Stefan, Quatmann, Tim, Ruijters, Enno, Taylor, Landon, Volk, Matthias, Weininger, Maximilian, Zhang, Zhen
The analysis of formal models that include quantitative aspects such as timing or probabilistic choices is performed by quantitative verification tools. Broad and mature tool support is available for computing basic properties such as expected reward…
External link:
http://arxiv.org/abs/2405.13583
Author:
Córdoba, Filip Cano, Palmisano, Alexander, Fränzle, Martin, Bloem, Roderick, Könighofer, Bettina
Agents operating in physical environments need to be able to handle delays in the input and output signals since neither data transmission nor sensing or actuating the environment are instantaneous. Shields are correct-by-construction runtime enforce…
External link:
http://arxiv.org/abs/2307.02164
Author:
Córdoba, Filip Cano, Judson, Samuel, Antonopoulos, Timos, Bjørner, Katrine, Shoemaker, Nicholas, Shapiro, Scott J., Piskac, Ruzica, Könighofer, Bettina
Principled accountability for autonomous decision-making in uncertain environments requires distinguishing intentional outcomes from negligent designs from actual accidents. We propose analyzing the behavior of autonomous agents through a quantitativ…
External link:
http://arxiv.org/abs/2307.01532
Solving control tasks in complex environments automatically through learning offers great potential. While contemporary techniques from deep reinforcement learning (DRL) provide effective solutions, their decision-making is not transparent. We aim to…
External link:
http://arxiv.org/abs/2306.17204
Author:
Judson, Samuel, Elacqua, Matthew, Cano, Filip, Antonopoulos, Timos, Könighofer, Bettina, Shapiro, Scott J., Piskac, Ruzica
Principled accountability in the aftermath of harms is essential to the trustworthy design and governance of algorithmic decision making. Legal theory offers a paramount method for assessing culpability: putting the agent 'on the stand' to subject th…
External link:
http://arxiv.org/abs/2305.05731
Besides the recent impressive results on reinforcement learning (RL), safety is still one of the major research challenges in RL. RL is a machine-learning approach to determine near-optimal policies in Markov decision processes (MDPs). In this paper,…
External link:
http://arxiv.org/abs/2212.01861
Author:
Tappler, Martin, Pranger, Stefan, Könighofer, Bettina, Muškardin, Edi, Bloem, Roderick, Larsen, Kim
Safety is still one of the major research challenges in reinforcement learning (RL). In this paper, we address the problem of how to avoid safety violations of RL agents during exploration in probabilistic and partially unknown environments. Our appr…
External link:
http://arxiv.org/abs/2212.01838
Runtime enforcement refers to the theories, techniques, and tools for enforcing correct behavior with respect to a formal specification of systems at runtime. In this paper, we are interested in techniques for constructing runtime enforcers for the c…
External link:
http://arxiv.org/abs/2208.14426
Evaluation of deep reinforcement learning (RL) is inherently challenging. Especially the opaqueness of learned policies and the stochastic nature of both agents and environments make testing the behavior of deep RL agents difficult. We present a sear…
External link:
http://arxiv.org/abs/2205.04887