Showing 1 - 10 of 361 for search: '"Tatsuya Kawahara"'
Published in:
APSIPA Transactions on Signal and Information Processing, Vol 13, Iss 2 (2024)
External link:
https://doaj.org/article/d1df4d6a860b49c0a4785877ecad4d69
Published in:
Frontiers in Robotics and AI, Vol 9 (2022)
Spoken dialogue systems must be able to express empathy to achieve natural interaction with human users. However, laughter generation requires a high level of dialogue understanding. Thus, implementing laughter in existing systems, such as in convers…
External link:
https://doaj.org/article/293894bfa6184416a003725e38bfe236
Author:
Masato Mimura, Tatsuya Kawahara
Published in:
Journal of Natural Language Processing. 30:88-124
Published in:
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Time-domain speech enhancement (SE) has recently been intensively investigated. Among recent works, DEMUCS introduces multi-resolution STFT loss to enhance performance. However, some resolutions used for STFT contain non-stationary signals, and it is…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::ccf16f318c418cd271eff9d4a4689e62
http://arxiv.org/abs/2303.14593
Published in:
IEEE Signal Processing Letters. 29:927-931
Author:
Kouhei Sekiguchi, Yoshiaki Bando, Aditya Arie Nugraha, Mathieu Fontaine, Kazuyoshi Yoshii, Tatsuya Kawahara
Published in:
IEEE/ACM Transactions on Audio, Speech, and Language Processing. 30:2368-2382
I Know Your Feelings Before You Do: Predicting Future Affective Reactions in Human-Computer Dialogue
Author:
Yuanchao Li, Koji Inoue, Leimin Tian, Changzeng Fu, Carlos Toshinori Ishi, Hiroshi Ishiguro, Tatsuya Kawahara, Catherine Lai
Published in:
Li, Y, Inoue, K, Tian, L, Fu, C, Ishi, C, Ishiguro, H, Kawahara, T & Lai, C 2023, 'I Know Your Feelings Before You Do: Predicting Future Affective Reactions in Human-Computer Dialogue', in The ACM CHI Conference on Human Factors in Computing Systems, 166, pp. 1-7, Computer Human Interaction (CHI) 2023, Hamburg, Germany, 23/04/23. https://doi.org/10.1145/3544549.3585869
Current Spoken Dialogue Systems (SDSs) often serve as passive listeners that respond only after receiving user speech. To achieve human-like dialogue, we propose a novel future prediction architecture that allows an SDS to anticipate future affective…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::df0cc62bd85cf4f584dcf0ef8050b7e3
http://arxiv.org/abs/2303.00146
Published in:
Proceedings of the 10th International Conference on Human-Agent Interaction.
Published in:
Acoustical Science and Technology. 42:333-343