Showing 1 - 10 of 15
for the search: '"Asil Kaan Bozcuoglu"'
Published in:
PETRA
In this paper, we present a novel method to learn end-to-end visuomotor policies for robotic manipulators. The method computes state-action mappings in a supervised learning manner from video demonstrations and robot trajectories. We show that the ro…
Published in:
ICRA
Challenging manipulation tasks can be solved effectively by combining individual robot skills, which must be parameterized for the concrete physical environment and task at hand. This is time-consuming and difficult for human programmers, particularly…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::32779b4a22f4fabfe25e98e786a2044a
Published in:
Information Storage ISBN: 9783030192617
Embodied intelligent agents that are equipped with sensors and actuators have unique characteristics and requirements regarding the storage, management, and usage of information. The goal is to perform intentional activities, within the perception-action…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::7a795de3f84b43945c698d1c9253c4e9
https://doi.org/10.1007/978-3-030-19262-4_5
Published in:
IROS
As robots start to execute complex manipulation tasks, they are expected to improve their skill set over time as humans do. A prominent approach to accomplish this is having robots keep models of their actions based on their experiences in order to…
Author:
Asil Kaan Bozcuoglu, Fereshta Yazdani, Michael Beetz, Nico Huebel, Sebastian Blumenthal, Herman Bruyninckx
Published in:
Robotics and Autonomous Systems, 117, 80-91. Elsevier
Recent advances in robotics technology and research focus on complex scenarios. In these scenarios, robots have to act and respond fast to situational demands. First, they require heterogeneous knowledge from various sources. Then, they need to…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::42ee7c3b6feea4cfd293be61cf0f2354
https://research.tue.nl/nl/publications/df917f26-fc6e-4a89-bcbb-4090fea58b75
Author:
Fereshta Yazdani, Ferenc Balint-Benczedi, Andrei Haidu, Asil Kaan Bozcuoglu, Daniel Bessler, Gayane Kazhoyan, Michael Beetz, Mihai Pomarlan
Published in:
IROS
With the advancements in robotic technology and the progress in human-robot interaction research, the interest in deploying mixed human-robot teams in rescue missions is increasing. Due to their complementary capabilities in terms of locomotion, vision…
Published in:
IROS
AI knowledge representation and reasoning methods consider actions to be black boxes that abstract away from how they are executed. This abstract view does not suffice for the decision-making capabilities required by robotic agents that are to accomplish…
Author:
Simon Stelter, Michael Beetz, Gayane Kazhoyan, Masayuki Inaba, Yuki Furuta, Asil Kaan Bozcuoglu, Kei Okada
Published in:
ICRA
To enable robots to perform human-level tasks flexibly in varying conditions, we need a mechanism that allows them to exchange knowledge between themselves for crowd-sourcing the knowledge gap problem. One approach to achieve this is to equip a cloud…
Author:
Georg Bartels, Andrei Haidu, Asil Kaan Bozcuoglu, Daniel Bessler, Michael Beetz, Mihai Pomarlan
Published in:
ICRA
In this paper we present KnowRob2, a second-generation knowledge representation and reasoning framework for robotic agents. KnowRob2 is an extension and partial redesign of KnowRob, currently one of the most advanced knowledge processing systems for…
Published in:
ROBOT 2017: Third Iberian Robotics Conference ISBN: 9783319708324
ROBOT (1)
As robots are expected to accomplish human-level manipulation tasks, the demand for formal knowledge representation techniques and reasoning for robots increases dramatically. In this paper we describe how to make use of heterogeneous ontologies in s…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::5878d51e0154aea5de61f2139894da29
https://doi.org/10.1007/978-3-319-70833-1_34