Showing 1 - 9 of 9
for search: '"Yildirim, Yigit"'
Author:
Dogangun, Fatih, Bahar, Serdar, Yildirim, Yigit, Temir, Bora Toprak, Ugur, Emre, Dogan, Mustafa Doga
As robotics continues to enter various sectors beyond traditional industrial applications, the need for intuitive robot training and interaction systems becomes increasingly important. This paper introduces Robotic Augmented Reality for Machine P…
External link:
http://arxiv.org/abs/2410.13412
Author:
Yildirim, Yigit, Ugur, Emre
Traditional path-planning techniques treat humans as obstacles. This has changed since robots started to enter human environments. On modern robots, social navigation has become an important aspect of navigation systems. To use learning-based techniq…
External link:
http://arxiv.org/abs/2404.11246
Trustworthiness is a crucial concept in the context of human-robot interaction. Cooperative robots must be transparent regarding their decision-making process, especially when operating in a human-oriented environment. This paper presents a comprehen…
External link:
http://arxiv.org/abs/2404.04069
Socially compliant navigation is an integral part of safety features in Human-Robot Interaction. Traditional approaches to mobile navigation prioritize physical aspects, such as efficiency, but social behaviors gain traction as robots appear more in…
External link:
http://arxiv.org/abs/2403.15813
Author:
Yildirim, Yigit, Ugur, Emre
Learning from Demonstration (LfD) is a widely used technique for skill acquisition in robotics. However, demonstrations of the same skill may exhibit significant variances, or learning systems may attempt to acquire different means of the same skill…
External link:
http://arxiv.org/abs/2402.08424
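The abstract above notes that demonstrations of the same skill can vary significantly. As a rough, hypothetical illustration of what "variance across demonstrations" means (not the paper's actual method, which likely uses a learned trajectory model), one can align demonstrations by timestep and inspect the per-step spread, assuming toy NumPy data:

```python
import numpy as np

def summarize_demonstrations(demos):
    """Mean trajectory and per-timestep standard deviation across
    demonstrations of the same skill (shape: n_demos x T x dof).
    A crude proxy for cross-demonstration variance, for illustration only."""
    demos = np.asarray(demos, dtype=float)
    return demos.mean(axis=0), demos.std(axis=0)

# Three toy 1-DoF demonstrations of the same 3-step motion (hypothetical data):
demos = [[[0.0], [1.0], [2.0]],
         [[0.0], [1.0], [2.0]],
         [[0.0], [2.0], [2.0]]]
mean_traj, std_traj = summarize_demonstrations(demos)
# std_traj is largest at the timestep where the demonstrations disagree
```

A real LfD system would model this spread explicitly (e.g. with probabilistic movement primitives) rather than simple per-step statistics.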
Author:
Yildirim, Yigit, Ugur, Emre
Sociability is essential for modern robots to increase their acceptability in human environments. Traditional techniques use manually engineered utility functions inspired by observing pedestrian behaviors to achieve social navigation. However, socia…
External link:
http://arxiv.org/abs/2210.03582
Multi-armed bandits (MAB) is a sequential decision-making model in which the learner controls the trade-off between exploration and exploitation to maximize its cumulative reward. Federated multi-armed bandits (FMAB) is an emerging framework where a…
External link:
http://arxiv.org/abs/2205.04134
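The abstract above defines the MAB exploration-exploitation trade-off. A minimal epsilon-greedy learner (a standard textbook strategy, not necessarily the algorithm studied in the paper) sketches the idea on a toy Bernoulli bandit:

```python
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """Epsilon-greedy learner for a stationary Bernoulli multi-armed bandit.
    With probability epsilon it explores a random arm; otherwise it
    exploits the arm with the highest estimated mean reward."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                            # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])   # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0    # Bernoulli draw
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
        total_reward += reward
    return estimates, total_reward

# Hypothetical 3-arm bandit; the learner should identify arm 2 (mean 0.8) as best.
est, total = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

The federated (FMAB) setting extends this by having multiple such learners share statistics without sharing raw observations.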
Author:
Yildirim, Yigit, Ugur, Emre
Published in:
Interaction Studies; 2023, Vol. 24 Issue 3, p427-468, 42p
Multi-armed bandits (MAB) is a simple reinforcement learning model where the learner controls the trade-off between exploration versus exploitation to maximize its cumulative reward. Federated multi-armed bandits (FMAB) is a recently emerging framewo…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::48d381577b48e10300a99f9583b7e6e2
http://arxiv.org/abs/2205.04134