Showing 1 - 10 of 35 for the search: '"Dhananjaya Gowda"'
Published in:
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Author:
Othman Istaiteh, Yasmeen Kussad, Yahya Daqour, Maria Habib, Mohammad Habash, Dhananjaya Gowda
Published in:
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Published in:
IEEE Transactions on Cognitive and Developmental Systems. 13:875-884
Published in:
Interspeech 2022.
Published in:
IEEE Access, Vol 9, Pp 151631-151640 (2021)
Formant tracking is investigated in this study by using trackers based on dynamic programming (DP) and deep neural nets (DNNs). Using the DP approach, six formant estimation methods were first compared. The six methods include linear prediction (LP) …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::c25112c4d64631d6828e00d65ef65f67
Author:
Dhananjaya Gowda, Abhinav Garg, Jiyeon Kim, Mehul Kumar, Sachin Singh, Ashutosh Gupta, Ankur Kumar, Nauman Dawalatabad, Aman Maghan, Shatrughan Singh, Chanwoo Kim
Published in:
2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).
Published in:
2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).
Author:
Ashutosh Gupta, Aditya Jayasimha, Aman Maghan, Shatrughan Singh, Dhananjaya Gowda, Chanwoo Kim
Published in:
2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).
Author:
Kwangyoun Kim, Shatrughan Singh, Dhananjaya Gowda, Ankur Kumar, Sachin K. Singh, Chanwoo Kim, Ashutosh Gupta
Published in:
ICASSP
In this paper, we propose methods to compute confidence score on the predictions made by an end-to-end speech recognition model in a 2-pass framework. We use RNN-Transducer for a streaming model, and an attention-based decoder for the second pass model …
Published in:
ICASSP
In this paper, we present a streaming end-to-end speech recognition model based on Monotonic Chunkwise Attention (MoCha) jointly trained with enhancement layers. Even though the MoCha attention enables streaming speech recognition with recognition accuracy …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e231315fbf66e999f7738eaf902917b0
http://arxiv.org/abs/2105.01254