Showing 1 - 10 of 14 for search: '"Amin Fazel"'
Author:
Roland Maas, Jasha Droppo, Roberto Barra-Chicote, Yixiong Meng, Amin Fazel, Wei Yang, Yulan Liu
End-to-end (E2E) automatic speech recognition (ASR) models have recently demonstrated superior performance over the traditional hybrid ASR models. Training an E2E ASR model requires a large amount of data, which is not only expensive but may also raise…
External links:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7c7ab09241c6ea402622a7b43a7286d8
http://arxiv.org/abs/2106.07803
Published in:
INTERSPEECH
Author:
Shantanu Chakrabartty, Amin Fazel
Published in:
IEEE Transactions on Audio, Speech, and Language Processing. 20:1362-1371
In this paper, we present a novel speech feature extraction algorithm based on a hierarchical combination of auditory similarity and pooling functions. The computationally efficient features known as “Sparse Auditory Reproducing Kernel” (SPARK) c…
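The snippet above only hints at how the SPARK features are built; the sketch below illustrates the general pattern it names (frame-wise kernel similarity against a small dictionary followed by pooling over time). The dictionary atoms, kernel width, and pooling size here are hypothetical placeholders, not the published SPARK configuration.

```python
# Illustrative only: frame-wise Gaussian-kernel similarity to a small
# dictionary, followed by max-pooling over blocks of frames. Not the
# published SPARK pipeline; all parameters are placeholders.
import numpy as np

def gaussian_similarity(frames, atoms, sigma=1.0):
    """Similarity of each frame to each dictionary atom via a Gaussian kernel."""
    # frames: (n_frames, dim), atoms: (n_atoms, dim)
    d2 = ((frames[:, None, :] - atoms[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))           # (n_frames, n_atoms)

def pooled_features(frames, atoms, pool_size=10, sigma=1.0):
    """Max-pool the similarity scores over non-overlapping blocks of frames."""
    sim = gaussian_similarity(frames, atoms, sigma)
    n_blocks = sim.shape[0] // pool_size
    sim = sim[: n_blocks * pool_size].reshape(n_blocks, pool_size, -1)
    return sim.max(axis=1)                            # (n_blocks, n_atoms)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.standard_normal((200, 13))   # e.g. 13-dim spectral frames
    atoms = rng.standard_normal((16, 13))     # toy dictionary of 16 atoms
    print(pooled_features(frames, atoms).shape)  # (20, 16)
```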
Published in:
آموزش و ارزشیابی (Education and Evaluation), Vol 4, Iss 13, Pp 79-94 (2011)
The main objective of the study was to investigate the relationship between learning styles and academic achievement, with metacognitive awareness as a mediator, among university students. The statistical population included 355 students of A…
Author:
Amin Fazel, Shantanu Chakrabartty
Published in:
IEEE Circuits and Systems Magazine. 11:62-81
Even though the subject of speaker verification has been investigated for several decades, numerous challenges and new opportunities in robust recognition techniques are still being explored. In this overview paper we first provide a brief introduction…
Published in:
IEEE Transactions on Circuits and Systems I: Regular Papers. 57:783-792
Localization of acoustic sources using miniature microphone arrays poses a significant challenge due to fundamental limitations imposed by the physics of sound propagation. With sub-wavelength distances between the microphones, resolving acute localization…
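The abstract points to a physical limitation of miniature arrays: with sub-wavelength spacing, the largest possible inter-microphone delay is tiny. The back-of-the-envelope check below (assuming a 16 kHz sample rate and 343 m/s speed of sound, figures chosen for illustration rather than taken from the paper) makes this concrete.

```python
# Illustrative check: the maximum inter-microphone delay d/c, expressed in
# samples, shrinks to a small fraction of one sample for miniature arrays,
# which is why plain cross-correlation TDOA estimates become unreliable.
SPEED_OF_SOUND = 343.0   # m/s
SAMPLE_RATE = 16_000     # Hz, assumed

def max_delay_in_samples(mic_spacing_m: float) -> float:
    """Largest possible time difference of arrival, in samples."""
    return mic_spacing_m / SPEED_OF_SOUND * SAMPLE_RATE

for spacing in (0.20, 0.02, 0.002):          # 20 cm, 2 cm, 2 mm
    print(f"{spacing * 100:5.1f} cm -> {max_delay_in_samples(spacing):.3f} samples")
# 20 cm gives roughly 9.3 samples of usable delay; 2 mm gives roughly 0.09.
```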
Published in:
IEEE Transactions on Signal Processing. 58:1193-1204
Many source separation algorithms fail to deliver robust performance when applied to signals recorded using high-density sensor arrays where the distance between sensor elements is much less than the wavelength of the signals. This can be attributed…
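As a point of reference for the degradation described above, the sketch below runs a generic instantaneous-mixing ICA baseline (scikit-learn's FastICA, not the paper's algorithm) on a toy mixture whose mixing matrix is nearly singular, which is effectively what closely spaced sensors produce.

```python
# Generic ICA baseline, shown only as the kind of standard algorithm whose
# performance the abstract says degrades when sensor spacing is much smaller
# than the signal wavelength. Toy data; not the paper's method.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
sources = np.c_[np.sin(2 * np.pi * 5 * t), np.sign(np.sin(2 * np.pi * 7 * t))]

# Closely spaced sensors see nearly identical mixtures: the rows of the
# mixing matrix are almost parallel, so the matrix is close to singular.
mixing = np.array([[1.00, 0.99],
                   [0.99, 1.00]])
observations = sources @ mixing.T

estimated = FastICA(n_components=2, random_state=0).fit_transform(observations)
print(estimated.shape)  # (8000, 2): recovered (possibly permuted/scaled) sources
```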
Author:
Shantanu Chakrabartty, Amin Fazel
Published in:
ISCAS
In this paper we present a novel speech feature extraction algorithm based on sparse auditory coding and regression techniques in a reproducing kernel Hilbert space (RKHS). The features known as sparse kernel cepstral coefficients (SKCC) are extracted…
Author:
Amin Fazel, Shantanu Chakrabartty
Published in:
ISCAS
The performance of acoustic source separation algorithms significantly degrades when they are applied to signals recorded using miniature microphone arrays where the distances between the microphone elements are much smaller than the wavelength of acoustic…
Author:
Shantanu Chakrabartty, Amin Fazel
Published in:
ISCAS
In this paper, we present a non-linear filtering approach for extracting noise-robust speech features that can be used in a speaker verification task. At the core of the proposed approach is a time-series regression using Reproducing Kernel Hilbert Space (RKHS)…
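The abstract mentions time-series regression in a Reproducing Kernel Hilbert Space; the minimal sketch below uses kernel ridge regression with an RBF kernel, a standard RKHS regressor rather than the paper's exact filtering formulation, applied to a toy noisy signal.

```python
# Minimal RKHS time-series regression sketch: kernel ridge regression with an
# RBF kernel smooths a noisy 1-D signal. Standard RKHS regressor on toy data;
# not necessarily the filtering formulation used in the paper.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)[:, None]            # time axis as the regressor input
clean = np.sin(2 * np.pi * 3 * t).ravel()      # underlying smooth trend
noisy = clean + 0.3 * rng.standard_normal(t.shape[0])

model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=50.0)
smoothed = model.fit(t, noisy).predict(t)      # noise-suppressed estimate

print(f"noisy RMSE   : {np.sqrt(np.mean((noisy - clean) ** 2)):.3f}")
print(f"smoothed RMSE: {np.sqrt(np.mean((smoothed - clean) ** 2)):.3f}")
```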