Showing 1 - 10 of 13
for search: '"Marco Favorito"'
Published in:
Computers in Industry. 149:103916
Author:
Giuseppe De Giacomo, Marco Favorito
Published in:
Sapienza Università di Roma-IRIS
The translation from temporal logics to automata is the workhorse algorithm of several techniques in computer science and AI, such as reactive synthesis, reasoning about actions, FOND planning with temporal specifications, and reinforcement learning …
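As a hedged illustration of the kind of translation this abstract refers to (illustrative Python, not the authors' tooling): the LTLf formula F(a), "eventually a", corresponds to a two-state DFA over finite traces.

```python
# Sketch: an explicit DFA for the LTLf formula F(a) ("eventually a"),
# illustrating the temporal-logic-to-automata translation. The names and
# the explicit-state representation are illustrative, not the paper's.

class DFA:
    def __init__(self, states, alphabet, initial, accepting, delta):
        self.states, self.alphabet = states, alphabet
        self.initial, self.accepting, self.delta = initial, accepting, delta

    def accepts(self, trace):
        """Run a finite trace (a sequence of propositional interpretations)."""
        state = self.initial
        for symbol in trace:
            state = self.delta[(state, symbol)]
        return state in self.accepting

# DFA equivalent to F(a): stay in q0 until 'a' is seen, then loop in q1.
eventually_a = DFA(
    states={"q0", "q1"},
    alphabet={frozenset(), frozenset({"a"})},
    initial="q0",
    accepting={"q1"},
    delta={
        ("q0", frozenset()): "q0",
        ("q0", frozenset({"a"})): "q1",
        ("q1", frozenset()): "q1",
        ("q1", frozenset({"a"})): "q1",
    },
)

assert not eventually_a.accepts([frozenset(), frozenset()])
assert eventually_a.accepts([frozenset(), frozenset({"a"}), frozenset()])
```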
Published in:
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence.
Synthesis techniques for temporal logic specifications are typically based on exploiting symbolic techniques, as done in model checking. These symbolic techniques typically use backward fixpoint computation. Planning, which can be seen as a specific …
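A hedged sketch of the backward fixpoint computation mentioned above, on an explicit (non-symbolic) DFA game and with hypothetical names: starting from the accepting states, keep adding states from which the controller can force a move into the current winning set.

```python
# Backward fixpoint over an explicit DFA game (illustrative only; the
# techniques in the abstract operate on symbolic, BDD-based representations).
# delta[(q, env_move, ctrl_move)] -> next state.

def winning_states(states, accepting, delta, env_moves, ctrl_moves):
    win = set(accepting)
    while True:
        # Controllable predecessor: for every environment move there is a
        # controller response that lands in the current winning set.
        pre = {
            q for q in states
            if all(any(delta[(q, u, c)] in win for c in ctrl_moves)
                   for u in env_moves)
        }
        new_win = win | pre
        if new_win == win:      # fixpoint reached
            return win
        win = new_win
```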
Published in:
AAAI
Proceedings of the AAAI Conference on Artificial Intelligence
In this work we have investigated the concept of “restraining bolt”, inspired by Science Fiction. We have two distinct sets of features extracted from the world, one by the agent and one by the authority imposing some restraining specifications on …
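A hedged Python sketch of the construction described here (hypothetical interfaces, older gym-style environment assumed): the agent learns over its own observation paired with the state of an automaton that tracks the restraining specification on the bolt's features, and receives extra reward when that automaton accepts.

```python
# Sketch of the "restraining bolt" idea: the agent's observation is augmented
# with the state of an automaton tracking the restraining specification over
# the bolt's own features, and extra reward is granted when the automaton
# accepts. All names and interfaces here are hypothetical.

class RestrainingBolt:
    def __init__(self, dfa_delta, dfa_initial, dfa_accepting, bolt_reward=1.0):
        self.delta = dfa_delta          # (dfa_state, bolt_fluents) -> dfa_state
        self.state = dfa_initial
        self.accepting = dfa_accepting
        self.bolt_reward = bolt_reward

    def step(self, bolt_fluents):
        """Advance on the bolt's features; return the additional reward."""
        self.state = self.delta[(self.state, bolt_fluents)]
        return self.bolt_reward if self.state in self.accepting else 0.0

def augmented_step(env, bolt, action, extract_bolt_fluents):
    """One step where the agent learns on the pair (obs, bolt.state).
    Assumes an older gym-style env.step returning a 4-tuple."""
    obs, reward, done, info = env.step(action)
    reward += bolt.step(extract_bolt_fluents(obs))
    return (obs, bolt.state), reward, done, info
```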
Published in:
Engineering Multi-Agent Systems ISBN: 9783030974565
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::825b34717b8b5fff87b845ab6ce02618
https://doi.org/10.1007/978-3-030-97457-2_14
Author:
Marco Benedetti, Gennaro Catapano, Francesco De Sclavis, Marco Favorito, Aldo Glielmo, Davide Magnanimi, Antonio Muci
Published in:
Journal of Open Source Software. 7:4622
Published in:
Engineering Multi-Agent Systems ISBN: 9783030974565
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::eb5aa7a6f9e5335057949505a3061dc7
http://hdl.handle.net/11573/1621258
Published in:
Scopus-Elsevier
ICAART (1)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4feb2cea6b879056ce659313117c2e99
http://hdl.handle.net/11573/1621548
Published in:
Sapienza Università di Roma-IRIS
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::dc3881a000698825b245cce931010de5
http://hdl.handle.net/11573/1611250
Published in:
Scopus-Elsevier
A common problem in Reinforcement Learning (RL) is that the reward function is hard to express. This can be overcome by resorting to Inverse Reinforcement Learning (IRL), which consists in first obtaining a reward function from a set of execution traces …
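A hedged sketch of the IRL setting this abstract describes (not the paper's algorithm): with a linear reward R(s) = w·φ(s), a projection-style estimate points the weight vector from the learner's feature expectations toward the expert's, both computed from execution traces.

```python
# Illustrative IRL-style weight recovery for a linear reward R(s) = w . phi(s):
# estimate discounted feature expectations from traces and point the weights
# from the learner's expectations toward the expert's (a crude variant of
# feature-expectation matching; not the method used in the paper).
import numpy as np

def feature_expectations(traces, phi, gamma=0.99):
    """Discounted feature counts averaged over a set of execution traces."""
    mu = np.zeros_like(np.asarray(phi(traces[0][0]), dtype=float))
    for trace in traces:
        for t, state in enumerate(trace):
            mu += (gamma ** t) * np.asarray(phi(state), dtype=float)
    return mu / len(traces)

def reward_weights(expert_traces, learner_traces, phi):
    """Unit vector from the learner's feature expectations toward the expert's."""
    diff = (feature_expectations(expert_traces, phi)
            - feature_expectations(learner_traces, phi))
    norm = np.linalg.norm(diff)
    return diff / norm if norm > 0 else diff
```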
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::0a2ba4081dcdad53b7a296b115098ac7