Showing 1 - 10 of 33 results for search: '"Yutaka Takase"'
Author:
Yutaka Takase, Kimitoshi Yamazaki
Published in:
ROBOMECH Journal, Vol 11, Iss 1, Pp 1-10 (2024)
Abstract: This study aimed to develop daily living support robots for patients with hemiplegia and the elderly. To support daily living activities using robots in ordinary households without imposing physical and mental burdens on users, the syste…
External link:
https://doaj.org/article/903dbc061d0048c68190cf4cc52fcb7a
Published in:
ROBOMECH Journal, Vol 10, Iss 1, Pp 1-14 (2023)
Abstract: In this study, we describe a measurement system aimed at skill analysis of wall painting work using a roller brush. Our proposed measurement system mainly comprises an RGB-D sensor and a roller brush with sensors attached. To achieve our re…
External link:
https://doaj.org/article/e945142b71ca4edab081c446d55b0c39
Published in:
Advanced Robotics, Pp 1-14
Robotic System for Assisting Long-sleeved Shirt Dressing Using Two Manipulators with Different Roles
Published in:
2023 IEEE/SICE International Symposium on System Integration (SII).
Published in:
2022 IEEE International Conference on Mechatronics and Automation (ICMA).
Published in:
The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec). 2022:2P2-I02
Author:
Wakana Taguchi, Fumio Nihei, Hiroko Akatsu, Shinichi Fukasawa, Yukiko I. Nakano, Yutaka Takase
Published in:
ICMI (adjunct)
This research investigates the effectiveness of speech audio and facial image deformation tools that make conversation participants appear more positive. An experiment revealed that participants' feelings became more positive when u…
Published in:
GIFT@ICMI
Automatic meeting summarization would reduce the cost of producing minutes during or after a meeting. With the goal of establishing a method for extractive meeting summarization, we propose a multimodal fusion model that identifies the important utte…
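To make the extractive framing in the abstract above concrete, the following is a minimal sketch (my own illustration, not the paper's method) of how per-utterance importance scores from such a model could be turned into minutes by keeping the top-scoring utterances in their original order; the function name, the score source, and the keep ratio are all assumptions.

```python
# Minimal sketch of the extractive step (illustration only, not the paper's method):
# given importance scores for each utterance, keep the highest-scoring ones in
# their original order to form the minutes.
def extract_minutes(utterances: list[str], scores: list[float], keep_ratio: float = 0.2) -> list[str]:
    k = max(1, int(len(utterances) * keep_ratio))
    # indices of the k highest-scoring utterances, restored to chronological order
    top_idx = sorted(sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k])
    return [utterances[i] for i in top_idx]

# Example with made-up scores from a hypothetical upstream importance model.
minutes = extract_minutes(
    ["Let's start.", "We will ship the prototype in May.", "Any questions?",
     "Budget is capped at 10k.", "Thanks, everyone."],
    [0.1, 0.9, 0.2, 0.8, 0.1],
)
# -> ["We will ship the prototype in May."]  (20% of 5 utterances = 1 kept)
```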
Published in:
ACM Transactions on Interactive Intelligent Systems, Vol 5, Pp 1-23
Gaze is an important nonverbal feedback signal in multiparty face-to-face conversations. It is well known that gaze behaviors differ depending on participation role: speaker, addressee, or side participant. In this study, we focus on dominance as ano…
Published in:
ICMI
This study proposes the use of multimodal fusion models employing Convolutional Neural Networks (CNNs) to extract meeting minutes from a group discussion corpus. First, unimodal models are created using raw behavioral data such as speech, head motion, …
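As a rough illustration of the kind of architecture this abstract describes, here is a minimal PyTorch sketch of per-modality CNN encoders whose outputs are fused to score utterance importance. It is not the authors' implementation: the class names, channel counts, and layer sizes are hypothetical, and it assumes each modality (e.g. speech and head motion) arrives as a fixed-length 1-D feature sequence.

```python
# Minimal sketch of a multimodal fusion classifier (not the authors' model).
# Assumes each modality is a feature sequence of shape (batch, channels, time);
# all dimensions and names below are hypothetical.
import torch
import torch.nn as nn

class UnimodalCNN(nn.Module):
    """1-D CNN that encodes one modality (e.g. speech or head motion) into a vector."""
    def __init__(self, in_channels: int, embed_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, embed_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x).squeeze(-1)  # (batch, embed_dim)

class FusionClassifier(nn.Module):
    """Concatenates unimodal embeddings and scores utterance importance."""
    def __init__(self, speech_channels: int, motion_channels: int):
        super().__init__()
        self.speech_enc = UnimodalCNN(speech_channels)
        self.motion_enc = UnimodalCNN(motion_channels)
        self.head = nn.Sequential(
            nn.Linear(64 * 2, 32),
            nn.ReLU(),
            nn.Linear(32, 1),  # importance logit per utterance
        )

    def forward(self, speech: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.speech_enc(speech), self.motion_enc(motion)], dim=-1)
        return self.head(fused).squeeze(-1)

# Usage: score a batch of 8 utterances, each with 40 speech channels and
# 6 head-motion channels over 100 time steps (numbers made up for illustration).
model = FusionClassifier(speech_channels=40, motion_channels=6)
logits = model(torch.randn(8, 40, 100), torch.randn(8, 6, 100))
```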