Showing 1 - 10 of 20 results for search: '"Gyevnar, Balint"'
Author:
Gyevnar, Balint, Droop, Stephanie, Quillien, Tadeg, Cohen, Shay B., Bramley, Neil R., Lucas, Christopher G., Albrecht, Stefano V.
Cognitive science can help us understand which explanations people might expect, and in which format they frame these explanations, whether causal, counterfactual, or teleological (i.e., purpose-oriented). Understanding the relevance of these concept…
External link:
http://arxiv.org/abs/2403.08828
Artificial Intelligence (AI) shows promising applications for the perception and planning tasks in autonomous driving (AD) due to its superior performance compared to conventional methods. However, inscrutable AI systems exacerbate the existing chall…
External link:
http://arxiv.org/abs/2402.10086
We present CEMA: Causal Explanations in Multi-Agent systems; a framework for creating causal natural language explanations of an agent's decisions in dynamic sequential multi-agent systems to build more trustworthy autonomous agents. Unlike prior wor…
External link:
http://arxiv.org/abs/2302.10809
The European Union has proposed the Artificial Intelligence Act, which introduces detailed requirements of transparency for AI systems. Many of these requirements can be addressed by the field of explainable AI (XAI); however, there is a fundamental d…
External link:
http://arxiv.org/abs/2302.10766
Author:
Ahmed, Ibrahim H., Brewitt, Cillian, Carlucho, Ignacio, Christianos, Filippos, Dunion, Mhairi, Fosong, Elliot, Garcin, Samuel, Guo, Shangmin, Gyevnar, Balint, McInroe, Trevor, Papoudakis, Georgios, Rahman, Arrasy, Schäfer, Lukas, Tamborski, Massimiliano, Vecchio, Giuseppe, Wang, Cheng, Albrecht, Stefano V.
The development of autonomous agents which can interact with other agents to accomplish a given task is a core area of research in artificial intelligence and machine learning. Towards this goal, the Autonomous Agents Research Group develops novel ma…
External link:
http://arxiv.org/abs/2208.01769
Author:
Gyevnar, Balint, Tamborski, Massimiliano, Wang, Cheng, Lucas, Christopher G., Cohen, Shay B., Albrecht, Stefano V.
Inscrutable AI systems are difficult to trust, especially if they operate in safety-critical settings like autonomous driving. Therefore, there is a need to build transparent and queryable systems to increase trust levels. We propose a transparent, h…
External link:
http://arxiv.org/abs/2206.08783
It is important for autonomous vehicles to have the ability to infer the goals of other vehicles (goal recognition), in order to safely interact with other vehicles and predict their future trajectories. This is a difficult problem, especially in urb…
External link:
http://arxiv.org/abs/2103.06113
Author:
Albrecht, Stefano V., Brewitt, Cillian, Wilhelm, John, Gyevnar, Balint, Eiras, Francisco, Dobre, Mihai, Ramamoorthy, Subramanian
We propose an integrated prediction and planning system for autonomous driving which uses rational inverse planning to recognise the goals of other vehicles. Goal recognition informs a Monte Carlo Tree Search (MCTS) algorithm to plan optimal maneuver…
External link:
http://arxiv.org/abs/2002.02277
We present CEMA: Causal Explanations for Multi-Agent decision-making; a system to generate causal explanations for agents' decisions in stochastic sequential multi-agent environments. The core of CEMA is a novel causal selection method which, unlike…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4f13ca4320be8d1cc19f6968074a9671
http://arxiv.org/abs/2302.10809
The European Union has proposed the Artificial Intelligence Act, which introduces a proportional risk-based approach to AI regulation, including detailed requirements for transparency and explainability. Many of these requirements may be addressed in p…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::22300d6a545b42bf3b3013efe9255406