Showing 1 - 10
of 62
for the search: "Tran, Co"
Author:
Yu, Mengjie, Harris, Dustin, Jones, Ian, Zhang, Ting, Liu, Yue, Sendhilnathan, Naveen, Kokhlikyan, Narine, Wang, Fulton, Tran, Co, Livingston, Jordan L., Taylor, Krista E., Hu, Zhenhong, Hood, Mary A., Benko, Hrvoje, Jonker, Tanya R.
Gaze-based interactions offer a potential way for users to naturally engage with extended reality (XR) interfaces. Black-box machine learning models have enabled higher accuracy for gaze-based interactions. However, due to the black-box nature of the model, …
External link:
http://arxiv.org/abs/2404.13777
Numerous approaches have been explored for graph clustering, including those that optimize a global criterion such as modularity. More recently, Graph Neural Networks (GNNs), which have produced state-of-the-art results in graph analysis tasks such as …
External link:
http://arxiv.org/abs/2308.09644
Author:
Krishna, Gautam, Carnahan, Mason, Shamapant, Shilpa, Surendranath, Yashitha, Jain, Saumya, Ghosh, Arundhati, Tran, Co, Millan, Jose del R, Tewfik, Ahmed H
In this paper, we propose a deep learning-based algorithm to improve the performance of automatic speech recognition (ASR) systems for aphasia, apraxia, and dysarthria speech by utilizing electroencephalography (EEG) features recorded synchronously with …
External link:
http://arxiv.org/abs/2103.00383
In this paper, we demonstrate speech recognition using electroencephalography (EEG) signals obtained using dry electrodes on a limited English vocabulary consisting of three vowels and one word using a deep learning model. We demonstrate a test accuracy …
External link:
http://arxiv.org/abs/2008.07621
In this paper we introduce a recurrent neural network (RNN) based variational autoencoder (VAE) model with a new constrained loss function that can generate more meaningful electroencephalography (EEG) features from raw EEG features to improve the performance …
External link:
http://arxiv.org/abs/2006.02902
The electroencephalography (EEG) signals recorded in parallel with speech are used to perform isolated and continuous speech recognition. During the speaking process, one also hears one's own speech, and this speech perception is also reflected in the …
External link:
http://arxiv.org/abs/2006.01261
In [1,2], the authors provided preliminary results for synthesizing speech from electroencephalography (EEG) features, where they first predict acoustic features from EEG features and then reconstruct the speech from the predicted acoustic features using …
External link:
http://arxiv.org/abs/2006.01262
In this paper we demonstrate that it is possible to generate more meaningful electroencephalography (EEG) features from raw EEG features using generative adversarial networks (GANs) to improve the performance of EEG-based continuous speech recognition …
External link:
http://arxiv.org/abs/2006.01260
In this paper we explore predicting facial or lip video features from electroencephalography (EEG) features and predicting EEG features from recorded facial or lip video frames using deep learning models. The subjects were asked to read out loud English …
External link:
http://arxiv.org/abs/2005.11235
In this paper we introduce an attention-regression model to demonstrate predicting acoustic features from electroencephalography (EEG) features recorded in parallel with spoken sentences. First we demonstrate predicting acoustic features directly from EEG …
External link:
http://arxiv.org/abs/2004.04731