Showing 1 - 10 of 21 for search: '"Denis Steckelmacher"'
Author:
Leonardo Bertolucci Coelho, Dawei Zhang, Yves Van Ingelgem, Denis Steckelmacher, Ann Nowé, Herman Terryn
Published in:
npj Materials Degradation, Vol 6, Iss 1, Pp 1-16 (2022)
Abstract: This work provides a data-oriented overview of the rapidly growing research field covering machine learning (ML) applied to predicting electrochemical corrosion. Our main aim was to determine which ML models have been applied and how well th…
External link:
https://doaj.org/article/ca7eae315269463ab1263934edbe3b57
Author Correction: Reviewing machine learning of corrosion prediction in a data-oriented perspective
Author:
Leonardo Bertolucci Coelho, Dawei Zhang, Yves Van Ingelgem, Denis Steckelmacher, Ann Nowé, Herman Terryn
Published in:
npj Materials Degradation, Vol 6, Iss 1, Pp 1-1 (2022)
External link:
https://doaj.org/article/2a9dc51b47d448d8b1d6bd761baba9e9
Author:
Gaoyuan Liu, Joris de Winter, Denis Steckelmacher, Roshan Kumar Hota, Ann Nowe, Bram Vanderborght
Robotic manipulation in cluttered environments requires synergistic planning among prehensile and non-prehensile actions. Previous works on sampling-based Task and Motion Planning (TAMP) algorithms, e.g. PDDLStream, provide a fast and generalizable so…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::1915e021511d180e35a31f741b650696
https://doi.org/10.1109/lra.2023.3261708
We propose a novel multi-objective reinforcement learning algorithm that successfully learns the optimal policy even for non-linear utility functions. Non-linear utility functions pose a challenge for SOTA approaches, both in terms of learning effici…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::5f4b557c72bbc412a31cae173056e54f
https://biblio.vub.ac.be/vubir/actorcritic-multiobjective-reinforcement-learning-for-nonlinear-utility-functions(f93c40c2-c2c6-4d03-8f87-0f6efda010a7).html
Author:
Jeroen Willems, Kerem Eryilmaz, Denis Steckelmacher, Bruno Depraetere, Rian Beck, Abdellatif Bey-Temsamani, Jan Helsen, Ann Nowe
This paper proposes a method to provide a good initialization of control parameters to be found when performing manual or automated control tuning during development, commissioning or periodic retuning. The method is based on treating the initializat…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::f7ec7315fad0570b40c93e9db6e75b67
https://hdl.handle.net/20.500.14017/8be5451e-e72a-47a4-80ff-1f75f6f714f9
Published in:
Communications in Computer and Information Science ISBN: 9783030938413
The deployment of Reinforcement Learning (RL) on physical robots still stumbles on several challenges, such as sample-efficiency, safety, reproducibility, cost, and software platforms. In this paper, we introduce MoveRL, an environment that exposes a…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::dc8820bf7e4597eaf3db4a20429d838d
https://doi.org/10.1007/978-3-030-93842-0_14
Published in:
Vrije Universiteit Brussel
Roijers, D M, Steckelmacher, D & Nowé, A 2020, ' Multi-objective reinforcement learning for the expected utility of the return ', Paper presented at 2018 Adaptive Learning Agents, ALA 2018-Co-located Workshop at the Federated AI Meeting, FAIM 2018, Stockholm, Sweden, 14/07/18-15/07/18 .
Vrije Universiteit Amsterdam
2018 Adaptive Learning Agents, ALA 2018-Co-located Workshop at the Federated AI Meeting, FAIM 2018
Real-world decision problems often have multiple, possibly conflicting, objectives. In multi-objective reinforcement learning, the effects of actions in terms of these objectives must be learned by interacting with an environment. Typically, multi-ob…
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::19d2151881af9b1c14b4fb63d8826404
http://www.scopus.com/inward/record.url?scp=85086287094&partnerID=8YFLogxK
Author:
Arnau Dillen, Denis Steckelmacher, Kyriakos Efthymiadis, Kevin Langlois, Albert De Beir, Uros Marusic, Bram Vanderborght, Ann Nowé, Romain Meeusen, Fakhreddine Ghaffari, Olivier Romain, Kevin De Pauw
Published in:
Journal of Neural Engineering. 19:011003
Objective. Biosignal control is an interaction modality that allows users to interact with electronic devices by decoding the biological signals emanating from the movements or thoughts of the user. This manner of interaction with devices can enhance…
Published in:
Scopus-Elsevier
Vrije Universiteit Brussel
Sample-efficiency is crucial in reinforcement learning tasks, especially when a large number of similar yet distinct tasks have to be learned. For example, consider a smart wheelchair learning to exit many differently-furnished offices on a building…
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::0cf0fecb5c66e6c47b750e617f2c33b6
https://biblio.vub.ac.be/vubir/transfer-reinforcement-learning-across-environment-dynamics-with-multiple-advisors(49b4f0e2-4ff6-401a-8cf3-9a6504f822e5).html
Published in:
Vrije Universiteit Brussel
For a robot to learn a good policy, it often requires expensive equipment (such as sophisticated sensors) and a prepared training environment conducive to learning. However, it is seldom possible to perfectly equip robots for economic reasons, nor to…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::197c8193825036d9402e9e37f28b5cc8
https://biblio.vub.ac.be/vubir/transfer-learning-across-simulated-robots-with-different-sensors(afa202d7-3def-4f96-8c9a-8e0aa7071f0c).html