Showing 1 - 10 of 15 for search: '"Malmir, Mohammadhossein"'
Author:
Feng, Qian, Lema, David S. Martinez, Malmir, Mohammadhossein, Li, Hang, Feng, Jianxiang, Chen, Zhaopeng, Knoll, Alois
We introduce DexGanGrasp, a dexterous grasp synthesis method that generates and evaluates grasps from a single view in real time. DexGanGrasp comprises a Conditional Generative Adversarial Networks (cGANs)-based DexGenerator to generate dexterous grasps …
External link:
http://arxiv.org/abs/2407.17348
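The entry above describes a generate-and-evaluate grasping pipeline built around a conditional GAN. The PyTorch sketch below only illustrates that pattern; the module names, layer sizes, and the 22-dimensional grasp encoding are assumptions for illustration, not the DexGenerator/DexEvaluator architecture from the paper.

    import torch
    import torch.nn as nn

    class GraspGenerator(nn.Module):
        """Maps a latent code plus an observation embedding to a grasp parameter vector."""
        def __init__(self, latent_dim=64, obs_dim=256, grasp_dim=22):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim + obs_dim, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, grasp_dim),
            )
        def forward(self, z, obs_embedding):
            return self.net(torch.cat([z, obs_embedding], dim=-1))

    class GraspEvaluator(nn.Module):
        """Scores an (observation, grasp) pair; higher means more likely to succeed."""
        def __init__(self, obs_dim=256, grasp_dim=22):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim + grasp_dim, 256), nn.ReLU(),
                nn.Linear(256, 1),
            )
        def forward(self, obs_embedding, grasp):
            return self.net(torch.cat([obs_embedding, grasp], dim=-1))

    # Generate a batch of candidate grasps and keep the highest-scoring one.
    gen, ev = GraspGenerator(), GraspEvaluator()
    obs = torch.randn(1, 256)          # stand-in for a single-view observation embedding
    z = torch.randn(16, 64)
    grasps = gen(z, obs.expand(16, -1))
    best_grasp = grasps[ev(obs.expand(16, -1), grasps).argmax()]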
Author:
Josifovski, Josip, Auddy, Sayantan, Malmir, Mohammadhossein, Piater, Justus, Knoll, Alois, Navarro-Guerrero, Nicolás
Domain Randomization (DR) is commonly used for sim2real transfer of reinforcement learning (RL) policies in robotics. Most DR approaches require a simulator with a fixed set of tunable parameters from the start of the training, from which the parameters …
External link:
http://arxiv.org/abs/2403.12193
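The snippet above refers to conventional domain randomization, where a fixed set of tunable simulator parameters is declared up front and resampled during training. The sketch below shows only that baseline scheme; the parameter names, ranges, and the make_env/update_policy callables are illustrative assumptions, not the method proposed in the paper.

    import random

    # Fixed set of tunable simulator parameters, declared before training starts.
    PARAM_RANGES = {
        "friction":   (0.5, 1.5),
        "link_mass":  (0.8, 1.2),
        "motor_gain": (0.9, 1.1),
    }

    def sample_randomization():
        return {name: random.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

    def train(num_episodes, make_env, update_policy):
        for _ in range(num_episodes):
            env = make_env(**sample_randomization())  # re-instantiate the randomized simulator
            update_policy(env)                        # one RL update on this randomized env

    # Stand-ins so the sketch runs end to end; a real setup would build a simulator
    # and perform an actual policy update here.
    train(3, make_env=lambda **params: params, update_policy=lambda env: None)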
Author:
Petropoulakis, Panagiotis, Gräf, Ludwig, Malmir, Mohammadhossein, Josifovski, Josip, Knoll, Alois
Choosing an appropriate representation of the environment for the underlying decision-making process of the reinforcement learning agent is not always straightforward. The state representation should be inclusive enough to allow the agent to …
External link:
http://arxiv.org/abs/2309.11984
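The entry above concerns which signals to expose to the RL agent as its state. As a purely illustrative sketch (the signal choices and function names are assumptions, not the representations evaluated in the paper), the two functions below build an inclusive joint-space observation and a compact task-space observation for the same scene:

    import numpy as np

    def joint_space_obs(q, dq, target_pos):
        # Inclusive representation: joint positions, joint velocities and the goal.
        return np.concatenate([q, dq, target_pos])

    def task_space_obs(ee_pos, target_pos):
        # Compact representation: only the end-effector position relative to the goal.
        return np.asarray(target_pos) - np.asarray(ee_pos)

    # Example with a 6-DoF arm: the two encodings differ in dimensionality (15 vs. 3)
    # and in how much the agent has to infer on its own.
    q, dq = np.zeros(6), np.zeros(6)
    print(joint_space_obs(q, dq, [0.4, 0.1, 0.3]).shape)             # (15,)
    print(task_space_obs([0.2, 0.0, 0.5], [0.4, 0.1, 0.3]).shape)    # (3,)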
Delayed Markov decision processes fulfill the Markov property by augmenting the state space of agents with a finite time window of recently committed actions. Relying on these state augmentations, delay-resolved reinforcement learning algorithms …
External link:
http://arxiv.org/abs/2306.09010
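The abstract above describes the standard construction for delayed MDPs: the observation is extended with a finite window of the most recently committed actions so that the augmented process is Markov again. Below is a minimal, environment-agnostic sketch of that augmentation; the reset/step signatures and buffer handling are assumptions for illustration, not code from the paper.

    from collections import deque
    import numpy as np

    class ActionWindowAugmentation:
        """Augments 1-D observations with the last `delay_steps` committed actions."""
        def __init__(self, env, delay_steps, action_dim):
            self.env = env
            self.delay_steps = delay_steps
            self.action_dim = action_dim
            self.buffer = deque(maxlen=delay_steps)

        def reset(self):
            # Start each episode with an all-zero action window.
            self.buffer.extend(np.zeros(self.action_dim) for _ in range(self.delay_steps))
            return self._augment(self.env.reset())

        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            self.buffer.append(np.asarray(action, dtype=float))
            return self._augment(obs), reward, done, info

        def _augment(self, obs):
            # Augmented state = current observation + window of recently committed actions.
            return np.concatenate([np.asarray(obs, dtype=float), *self.buffer])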
Author:
Josifovski, Josip, Malmir, Mohammadhossein, Klarmann, Noah, Žagar, Bare Luka, Navarro-Guerrero, Nicolás, Knoll, Alois
Randomization is currently a widely used approach in Sim2Real transfer for data-driven learning algorithms in robotics. Still, most Sim2Real studies report results for a specific randomization technique and often on a highly customized robotic system …
External link:
http://arxiv.org/abs/2206.06282
This paper presents a novel hierarchical motion planning approach based on Rapidly-Exploring Random Trees (RRT) for global planning and Model Predictive Control (MPC) for local planning. The approach targets a three-wheeled cycle rickshaw (trishaw) …
External link:
http://arxiv.org/abs/2103.06141
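The entry above describes a two-level planner: a sampling-based global planner supplies a path, and a short-horizon controller tracks it locally. The sketch below only mirrors that structure; the unicycle-like dynamics, the random-shooting stand-in for MPC, and the hard-coded waypoint list (standing in for an RRT result) are assumptions, not the trishaw model or controller from the paper.

    import numpy as np

    def simulate(state, controls, dt=0.1):
        """Roll out (x, y, heading) under a sequence of (speed, turn-rate) controls."""
        x, y, th = state
        traj = []
        for v, w in controls:
            th += w * dt
            x += v * np.cos(th) * dt
            y += v * np.sin(th) * dt
            traj.append((x, y, th))
        return traj

    def mpc_step(state, waypoint, horizon=10, samples=256, rng=np.random.default_rng(0)):
        """Return the first control of the sampled sequence whose rollout ends closest to the waypoint."""
        best_cost, best_u = np.inf, (0.0, 0.0)
        for _ in range(samples):
            controls = rng.uniform([-1.0, -0.5], [1.0, 0.5], size=(horizon, 2))
            end = simulate(state, controls)[-1]
            cost = np.hypot(end[0] - waypoint[0], end[1] - waypoint[1])
            if cost < best_cost:
                best_cost, best_u = cost, tuple(controls[0])
        return best_u

    # Hierarchical loop: the local controller tracks waypoints from the global plan.
    global_path = [(1.0, 0.0), (2.0, 1.0), (3.0, 1.0)]   # stand-in for an RRT path
    state = (0.0, 0.0, 0.0)
    for wp in global_path:
        for _ in range(50):
            u = mpc_step(state, wp)
            state = simulate(state, [u])[-1]
            if np.hypot(state[0] - wp[0], state[1] - wp[1]) < 0.1:
                break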
Author:
Klarmann, Noah, Malmir, Mohammadhossein, Josifovski, Josip, Plorin, Daniel, Wagner, Matthias, Knoll, Alois
Published in:
Artificial Intelligence for Digitising Industry – Applications, ISBN: 9781003337232
This paper outlines the concept of optimising trajectories for industrial robots by applying deep reinforcement learning in simulations. An application of high technical relevance is considered in a production line of an automotive manufacturer (AUDI) …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::01b328698954e10d58681894bf51c1c2
https://doi.org/10.1201/9781003337232-5
Author:
Josifovski, Josip, Malmir, Mohammadhossein, Klarmann, Noah, Nica, Iulia, Wotawa, Franz, Klueck, Florian, Felbinger, Hermann, Trantidou, Tatiana, Marini, Eleftheria, Schneider, Mathias, Jokela, Tuomas, Chromý, Adam, Daskalopoulos, Ioannis, Trouva, Eleni, Poulakidas, Athanasios, Lucas, Peter
The present document is a deliverable of the AI4DI project, which is co-funded by the ECSEL Joint Undertaking under grant agreement No. 826060 and ECSEL JU Member States. This report gives an overview of the simulation and modelling approaches at the …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::1a1e3245f845641eeb13ecd5327f7fe7
This paper presents a novel hierarchical motion planning approach based on Rapidly-Exploring Random Trees (RRT) for global planning and Model Predictive Control (MPC) for local planning. The approach targets a three-wheeled cycle rickshaw (trishaw) …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::18f73f4e7286726855c80f473fb2b7ab
http://arxiv.org/abs/2103.06141