Showing 1 - 10 of 13 results for the search: '"Don Joven Agravante"'
Author:
Ryuki Tachibana, Guillaume Le Moing, Jayakorn Vongkulbhisal, Tadanobu Inoue, Don Joven Agravante, Phongtharin Vinayavekhin, Asim Munawar
Published in:
ICASSP
Deep neural networks have recently led to promising results for the task of multiple sound source localization. Yet, they require a lot of training data to cover a variety of acoustic conditions and microphone array layouts. One can leverage acoustic…
Author:
Jayakorn Vongkulbhisal, Ryuki Tachibana, Tadanobu Inoue, Don Joven Agravante, Guillaume Le Moing, Phongtharin Vinayavekhin, Asim Munawar
Published in:
MMSP
In this paper, we propose novel deep learning based algorithms for multiple sound source localization. Specifically, we aim to find the 2D Cartesian coordinates of multiple sound sources in an enclosed environment by using multiple microphone arrays.
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::f06a403e696680c7b1a73f49841b9176
http://arxiv.org/abs/2012.05515
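As a rough illustration of the setup described in the record above (regressing the 2D Cartesian coordinates of several sound sources from features of multiple microphone arrays with a deep network), here is a minimal PyTorch sketch. The architecture, feature sizes, number of arrays, and number of sources are assumptions for illustration only, not the authors' model.

# Hypothetical sketch (assumed dimensions, not the authors' architecture):
# a small network mapping stacked per-array audio features to the 2D
# Cartesian coordinates of a fixed number of sound sources.
import torch
import torch.nn as nn

NUM_ARRAYS = 4        # assumed number of microphone arrays
FEAT_PER_ARRAY = 128  # assumed per-array feature size (e.g., spectral/phase features)
NUM_SOURCES = 2       # assumed number of simultaneous sources to localize

class SourceLocalizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_ARRAYS * FEAT_PER_ARRAY, 256),
            nn.ReLU(),
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Linear(128, NUM_SOURCES * 2),  # (x, y) per source
        )

    def forward(self, features):
        # features: (batch, NUM_ARRAYS * FEAT_PER_ARRAY)
        return self.net(features).view(-1, NUM_SOURCES, 2)

# Shape check with random tensors standing in for real array features and labels.
model = SourceLocalizer()
features = torch.randn(8, NUM_ARRAYS * FEAT_PER_ARRAY)
coords = model(features)                       # (8, NUM_SOURCES, 2)
loss = nn.functional.mse_loss(coords, torch.zeros_like(coords))

In practice the input would be features computed from the arrays' audio and the targets would be ground-truth source positions; both are stubbed out here with random tensors.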
Author:
Don Joven Agravante, Abderrahmane Kheddar, Alexander Sherikov, Andrea Cherubini, Pierre-Brice Wieber
Published in:
IEEE Transactions on Robotics
IEEE Transactions on Robotics, 2019, 35 (4), pp.833-846. ⟨10.1109/TRO.2019.2914350⟩
This paper contributes to the field of physical human-robot collaboration. We present a complete control framework, which aims at making humanoid robots capable of carrying objects together with humans. First, we design a temp…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::214f16778897626a4575b4ee30af2982
https://hal-lirmm.ccsd.cnrs.fr/lirmm-01311154v3/file/tro-agravante_revised.pdf
Author:
Ryuki Tachibana, Phongtharin Vinayavekhin, Daiki Kimura, Giovanni De Magistris, Subhajit Chaudhury, Don Joven Agravante, Asim Munawar
Published in:
ICPR
This paper is a contribution towards interpretability of the deep learning models in different applications of time-series. We propose a temporal attention layer that is capable of selecting the relevant information to perform various tasks, includin…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::6dfa94bb927438b9b8315e2802d86416
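The record above proposes a temporal attention layer for interpretability of time-series models. The snippet below is a generic sketch of such a layer (scoring each time step, softmax-weighting the steps, and returning the weights so they can be inspected); the dimensions and the single-linear-layer scoring are assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Generic temporal attention: weight each time step and sum."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)   # one scalar score per time step

    def forward(self, h):
        # h: (batch, time, hidden_dim), e.g. hidden states of an RNN over a time series
        weights = torch.softmax(self.score(h), dim=1)   # (batch, time, 1), sums to 1 over time
        context = (weights * h).sum(dim=1)              # (batch, hidden_dim) weighted summary
        return context, weights                         # weights indicate which steps mattered

layer = TemporalAttention(hidden_dim=64)
context, weights = layer(torch.randn(8, 50, 64))        # 8 sequences of 50 time steps

The returned weights are the interpretability hook: plotting them over time shows which steps the model relied on for its prediction.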
Published in:
IROS
IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, IROS'17, Sep 2017, Vancouver, Canada. pp.2947-2952
In active vision, the camera motion is controlled in order to improve a certain visual sensing strategy. In this paper, we formulate an active vision task function to improve pose estimation. This is done by defining an optima…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::89171a709a40217178507963049cf77d
https://hal.inria.fr/hal-01589882
Published in:
IEEE Robotics and Automation Letters
IEEE Robotics and Automation Letters, 2017, 2 (2), pp.608-615. ⟨10.1109/lra.2016.2645512⟩
In this paper, we show that visual servoing can be formulated as an acceleration-resolved, quadratic optimization problem. This allows us to handle visual constraints, such as field of view and occlusion avoidance, as inequalities. Furthermore, it al…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d8b7114fbd299982cb8fe4874ec1a35d
https://inria.hal.science/hal-01421734/document
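The record above formulates visual servoing as an acceleration-resolved quadratic program in which visual constraints such as the field of view are written as inequalities. Below is a minimal, hypothetical sketch of that general idea using cvxpy; the interaction matrix, gains, time step, neglected terms, and solver choice are all placeholder assumptions rather than the paper's formulation.

# Toy QP sketch: pick a camera/robot acceleration that tracks a reference
# feature acceleration while keeping the predicted features inside the image.
import numpy as np
import cvxpy as cp

n_feat, n_dof = 4, 6
L = np.random.randn(n_feat, n_dof)          # interaction (image Jacobian) matrix, assumed given
s = np.random.uniform(-0.5, 0.5, n_feat)    # current image-feature values
s_ref = np.zeros(n_feat)                    # desired feature values
s_dot = np.zeros(n_feat)                    # current feature velocities
dt, kp, kd = 0.05, 10.0, 5.0                # placeholder time step and gains
s_lim = 0.8                                 # image border, i.e. the field-of-view limit

u = cp.Variable(n_dof)                      # decision variable: commanded acceleration
s_ddot = L @ u                              # acceleration-level feature model (L_dot * v neglected)
desired = -kp * (s - s_ref) - kd * s_dot    # PD-like reference feature acceleration

# A one-step feature prediction turns the field-of-view limit into linear inequalities.
s_next = s + dt * s_dot + 0.5 * dt**2 * s_ddot

problem = cp.Problem(cp.Minimize(cp.sum_squares(s_ddot - desired)),
                     [s_next <= s_lim, s_next >= -s_lim])
problem.solve()
print("commanded acceleration:", u.value)

Writing the limits as linear inequalities (rather than penalties) is what lets the constraint be satisfied exactly whenever the problem is feasible, which is the point of the QP formulation.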
Author:
Pierre-Brice Wieber, Don Joven Agravante, Alexander Sherikov, Abderrahmane Kheddar, Andrea Cherubini
Published in:
ICRA
33rd IEEE International Conference on Robotics and Automation (ICRA), May 2016, Stockholm, Sweden. pp.1573-1578, ⟨10.1109/ICRA.2016.7487296⟩
This paper is about the design of humanoid walking pattern generators to be used for physical collaboration. A particular use case is a humanoid robot helping a human to carry large and/or heavy objects. To do this, we constru…
Published in:
ICRA
31st IEEE International Conference on Robotics and Automation (ICRA), May 2014, Hong Kong, China. pp.607-612, ⟨10.1109/ICRA.2014.6906917⟩
We propose a framework for combining vision and haptic information in human-robot joint actions. It consists of a hybrid controller that uses both visual servoing and impedance controllers. This can be applied to tasks that ca…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::10f9b45c7e09f8567cc9810e30b9c5b4
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00950727/file/2014_icra_agravante-collaborative_human_humanoid_carrying_using_vision.pdf
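The record above describes a hybrid controller combining visual servoing with impedance control for human-robot joint carrying. The toy sketch below blends a proportional visual term with a one-dimensional admittance response to the human's applied force; the gains, the scalar simplification, and the blending scheme are illustrative assumptions, not the paper's controller.

def hybrid_step(x, x_dot, x_ref_visual, f_human, dt=0.01):
    """One control step along a single Cartesian axis (toy model)."""
    # Visual servoing channel: proportional regulation of the visually measured error.
    lam = 1.0
    v_visual = -lam * (x - x_ref_visual)

    # Admittance/impedance channel: M*a + D*v + K*e = f_human, integrated once.
    M, D, K = 2.0, 10.0, 0.0                 # virtual mass, damping, stiffness
    a = (f_human - D * x_dot - K * (x - x_ref_visual)) / M
    v_haptic = x_dot + a * dt

    # Naive blend of the two channels; the paper's combination is more structured.
    alpha = 0.5
    return alpha * v_visual + (1.0 - alpha) * v_haptic

# A 3 N pull from the partner shifts the commanded velocity toward the pull direction.
v_cmd = hybrid_step(x=0.1, x_dot=0.0, x_ref_visual=0.0, f_human=3.0)
print("commanded velocity:", v_cmd)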
Published in:
RAM
6th Conference on Robotics, Automation and Mechatronics (RAM), Nov 2013, Manila, Philippines. pp.13-18, ⟨10.1109/RAM.2013.6758552⟩
Human-humanoid haptic joint actions are collaborative tasks requiring a sustained haptic interaction between both parties. As such, most research in this field has concentrated on how to use solely the robot's haptic sensing t…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::152826dbc26030233456525fdf49eb04
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00908439/document
Published in:
IROS
IROS: International Conference on Intelligent Robots and Systems, Nov 2013, Tokyo, Japan. ⟨10.1109/IROS.2013.6697019⟩
In this paper, a first step is taken towards using vision in human-humanoid haptic joint actions. Haptic joint actions are characterized by physical interaction throughout the execution of a common goal. Because of this, most…