Showing 1 - 10 of 66 for search: '"Masahito Togami"'
Author:
Masahito Togami
Published in:
IEICE ESS Fundamentals Review. 16:257-271
Published in:
IEEE Signal Processing Letters. 29:927-931
Published in:
Interspeech 2021.
Author:
Robin Scheibler, Masahito Togami
Published in:
Interspeech 2021.
Published in:
2021 29th European Signal Processing Conference (EUSIPCO).
Author:
Masahito Togami
Published in:
ICASSP
In this paper, we propose a dereverberation and speech source separation method based on a deep neural network (DNN). Unlike a cascade connection of dereverberation and speech source separation, the proposed method performs dereverberation and speech source separation …
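As a rough illustration of the joint (non-cascade) idea described in this abstract, here is a minimal Python sketch in which one network predicts a dereverberation mask and per-source separation masks from a shared encoder. The architecture, layer sizes, and mask-based formulation are illustrative assumptions, not the method from the paper.

# Minimal sketch: one DNN jointly predicts a dereverberation mask and
# per-source separation masks, instead of chaining two separate stages.
import torch
import torch.nn as nn

class JointDereverbSep(nn.Module):
    def __init__(self, n_freq=257, n_src=2, hidden=512):
        super().__init__()
        self.rnn = nn.LSTM(n_freq, hidden, num_layers=2, batch_first=True)
        self.dereverb_head = nn.Linear(hidden, n_freq)        # late-reverb suppression mask
        self.sep_head = nn.Linear(hidden, n_freq * n_src)     # per-source masks
        self.n_src, self.n_freq = n_src, n_freq

    def forward(self, mag):                                   # mag: (batch, frames, freq)
        h, _ = self.rnn(mag)
        m_dr = torch.sigmoid(self.dereverb_head(h))
        m_sep = torch.sigmoid(self.sep_head(h)).view(*mag.shape[:2], self.n_src, self.n_freq)
        dry = mag * m_dr                                      # dereverberated mixture
        return dry.unsqueeze(2) * m_sep                       # (batch, frames, src, freq)

x = torch.rand(4, 100, 257)                                  # dummy magnitude spectrogram
print(JointDereverbSep()(x).shape)                           # torch.Size([4, 100, 2, 257])

Because both heads are driven by one loss, gradients flow through both subtasks at once, which is what distinguishes joint training from a cascade of independently trained stages.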
Published in:
Human Robot Interaction
Automatic Speech Recognition (ASR) is an essential function for robots that live in the human world. Much work on ASR has been done over the years, and as a result computers can recognize human speech well in silent environments. However, accuracy …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d60bcc9540ca4f861a939b171798cbc9
http://www.intechopen.com/articles/show/title/automatic_speech_recognition_of_human-symbiotic_robot_emiew
Published in:
ICASSP
We propose a new algorithm for joint dereverberation and blind source separation (DR-BSS). Our work builds upon the ILRMA-T framework, which applies a unified filter combining dereverberation and separation. One drawback of this framework is that it requires …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2975912b1b04331982b2d91946f79470
http://arxiv.org/abs/2102.06322
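The unified-filter structure that the ILRMA-T entry above refers to can be sketched compactly: a single matrix per frequency acts on the current STFT frame stacked with delayed past frames, so separation and dereverberation happen in one linear operation. The shapes, the prediction delay, and the random stand-in for the learned filter are illustrative assumptions.

# Sketch of an ILRMA-T-style unified filter: one matrix per frequency
# jointly separates and dereverberates by acting on the current frame
# stacked with delayed past frames. Values are random stand-ins.
import numpy as np

n_chan, n_src, n_taps, D = 3, 3, 4, 2            # channels, sources, taps, prediction delay
n_freq, n_frames = 129, 200
X = np.random.randn(n_freq, n_frames, n_chan) + 1j * np.random.randn(n_freq, n_frames, n_chan)

def stack_with_past(X, n_taps, D):
    """Append D-delayed past frames so one matrix can filter across time."""
    parts = [X]
    for tau in range(D, D + n_taps):
        past = np.zeros_like(X)
        past[:, tau:, :] = X[:, :-tau, :]        # shift by tau frames, zero-pad the start
        parts.append(past)
    return np.concatenate(parts, axis=-1)        # (n_freq, n_frames, n_chan * (1 + n_taps))

Xbar = stack_with_past(X, n_taps, D)
W = np.random.randn(n_freq, n_src, n_chan * (1 + n_taps))   # stands in for the learned filter
Y = np.einsum('fsm,ftm->fts', W, Xbar)           # separated and dereverberated estimates
print(Y.shape)                                   # (129, 200, 3)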
Author:
Masahito Togami
Published in:
EUSIPCO
In this paper, we propose a multi-channel speech source separation technique that connects unsupervised spatial filtering without a deep neural network (DNN) to DNN-based speech source separation in a cascade manner. In the speech source separation …
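A minimal sketch of such a cascade, assuming ILRMA from pyroomacoustics as the unsupervised first stage and a small illustrative mask network as the DNN second stage (neither is the architecture from the paper):

# Stage 1: unsupervised spatial filtering (ILRMA, no training data needed).
# Stage 2: a DNN post-filter refines each pre-separated source.
import numpy as np
import pyroomacoustics as pa
import torch
import torch.nn as nn

X = np.random.randn(200, 257, 2) + 1j * np.random.randn(200, 257, 2)  # STFT: (frames, freq, chan)
Y = pa.bss.ilrma(X, n_iter=20, proj_back=True)                        # (frames, freq, n_src)

refiner = nn.Sequential(                         # illustrative stand-in for the paper's DNN
    nn.Linear(257, 512), nn.ReLU(),
    nn.Linear(512, 257), nn.Sigmoid(),
)
mag = torch.from_numpy(np.abs(Y)).float().permute(2, 0, 1)            # (n_src, frames, freq)
refined = mag * refiner(mag)                     # per-source refinement masks
print(refined.shape)                             # torch.Size([2, 200, 257])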
Published in:
EUSIPCO
This paper proposes acoustic scene classification (ASC) robust to multiple recording devices using maximum classifier discrepancy (MCD) and knowledge distillation (KD). The proposed method employs domain adaptation to train multiple ASC models dedicated to each …
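The two training signals named in this abstract are standard and easy to sketch: MCD measures disagreement between two classifier heads, and KD transfers a teacher's softened outputs to a student. The temperature and loss forms below are common choices, assumed rather than taken from the paper.

# MCD: L1 distance between two classifiers' class probabilities.
# KD: soft-target KL divergence, scaled by T^2 as in Hinton et al.
import torch
import torch.nn.functional as F

def mcd_discrepancy(logits1, logits2):
    return (F.softmax(logits1, dim=-1) - F.softmax(logits2, dim=-1)).abs().mean()

def kd_loss(student_logits, teacher_logits, T=2.0):
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T

s = torch.randn(8, 10)     # student logits for 10 scene classes
t = torch.randn(8, 10)     # teacher (device-specific model) logits
print(mcd_discrepancy(s, t).item(), kd_loss(s, t).item())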