Showing 1 - 10 of 109 for search: '"Krishna Gautam"'
Author:
Mohamed Jawed Ahsan, Krishna Gautam, Amena Ali, Abuzer Ali, Abdulmalik Saleh Alfawaz Altamimi, Salahuddin, Manal A. Alossaimi, S. V. V. N. S. M. Lakshmi, Md. Faiyaz Ahsan
Published in:
Molecules, Vol 28, Iss 19, p 6936 (2023)
In the current study, we described the synthesis of ten new 5-(3-Bromophenyl)-N-aryl-4H-1,2,4-triazol-3-amine analogs (4a–j), as well as their characterization, anticancer activity, molecular docking studies, ADME, and toxicity prediction. The titl…
External link:
https://doaj.org/article/2f9df8cb9f7d4b3f97a62512684dcaf8
Author:
Palaskar, Shruti, Rudovic, Oggi, Dharur, Sameer, Pesce, Florian, Krishna, Gautam, Sivaraman, Aswin, Berkowitz, Jack, Abdelaziz, Ahmed Hussen, Adya, Saurabh, Tewfik, Ahmed
Although Large Language Models (LLMs) have shown promise for human-like conversations, they are primarily pre-trained on text data. Incorporating audio or video improves performance, but collecting large-scale multimodal data and pre-training multimo…
External link:
http://arxiv.org/abs/2406.09617
Author:
Krishna, Gautam, Dharur, Sameer, Rudovic, Oggi, Dighe, Pranay, Adya, Saurabh, Abdelaziz, Ahmed Hussen, Tewfik, Ahmed H
Device-directed speech detection (DDSD) is the binary classification task of distinguishing between queries directed at a voice assistant versus side conversation or background speech. State-of-the-art DDSD systems use verbal cues, e.g., acoustic, text…
External link:
http://arxiv.org/abs/2310.15261
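The entry above frames DDSD as binary classification over fused verbal cues. A minimal toy sketch of that framing, using logistic regression on concatenated acoustic and text feature vectors (an assumption for illustration; the paper's actual system uses learned neural models, which the snippet does not detail):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_ddsd(acoustic, text, labels, lr=0.5, epochs=500):
    """Logistic regression on fused (concatenated) acoustic + text features.

    Illustrative toy fusion classifier only, NOT the system from the paper.
    """
    X = np.hstack([acoustic, text])               # late fusion by concatenation
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - labels) / len(labels)
    return w

def predict_ddsd(w, acoustic, text):
    X = np.hstack([acoustic, text])
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return (sigmoid(X @ w) > 0.5).astype(int)

# Synthetic data: device-directed queries (label 1) shifted in feature space.
n = 200
labels = rng.integers(0, 2, n)
acoustic = rng.normal(labels[:, None] * 2.0, 1.0, (n, 4))
text = rng.normal(labels[:, None] * 2.0, 1.0, (n, 3))

w = train_ddsd(acoustic, text, labels.astype(float))
acc = (predict_ddsd(w, acoustic, text) == labels).mean()
```

The point of the sketch is the fusion step: both cue streams are concatenated into one feature vector before a single decision boundary is fit.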
Author:
Krishna, Gautam, Carnahan, Mason, Shamapant, Shilpa, Surendranath, Yashitha, Jain, Saumya, Ghosh, Arundhati, Tran, Co, Millan, Jose del R, Tewfik, Ahmed H
In this paper, we propose a deep learning-based algorithm to improve the performance of automatic speech recognition (ASR) systems for aphasia, apraxia, and dysarthria speech by utilizing electroencephalography (EEG) features recorded synchronously w…
External link:
http://arxiv.org/abs/2103.00383
In this paper, we demonstrate speech recognition using electroencephalography (EEG) signals obtained using dry electrodes on a limited English vocabulary consisting of three vowels and one word using a deep learning model. We demonstrate a test accur…
External link:
http://arxiv.org/abs/2008.07621
In this paper we introduce a recurrent neural network (RNN) based variational autoencoder (VAE) model with a new constrained loss function that can generate more meaningful electroencephalography (EEG) features from raw EEG features to improve the pe…
External link:
http://arxiv.org/abs/2006.02902
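The abstract above describes a VAE with a constrained loss. A generic sketch of such an objective, reconstruction error plus the closed-form KL term plus an added penalty (the latent-norm penalty here is a placeholder assumption; the paper's actual constraint is not given in the snippet):

```python
import numpy as np

def vae_constrained_loss(x, x_recon, mu, log_var, z, lam=0.1):
    """Illustrative VAE objective: MSE + KL(q(z|x) || N(0, I)) + lam * penalty.

    The penalty (squared norm of the latent code z) is a stand-in; the
    paper's actual constrained loss is not reproduced from the snippet.
    """
    recon = np.mean((x - x_recon) ** 2)
    # Closed-form KL divergence between N(mu, exp(log_var)) and N(0, 1).
    kl = -0.5 * np.mean(1 + log_var - mu ** 2 - np.exp(log_var))
    constraint = lam * np.mean(z ** 2)
    return recon + kl + constraint

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 16))
# Perfect reconstruction with a standard-normal posterior and zero latent
# code gives all three terms equal to zero.
loss = vae_constrained_loss(x, x, np.zeros((8, 4)), np.zeros((8, 4)),
                            np.zeros((8, 4)))
```

Because the extra term is added to the usual evidence lower bound, tuning `lam` trades off latent regularity against reconstruction quality.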
The electroencephalography (EEG) signals recorded in parallel with speech are used to perform isolated and continuous speech recognition. During the speaking process, one also hears one's own speech, and this speech perception is also reflected in th…
External link:
http://arxiv.org/abs/2006.01261
In [1,2], the authors provided preliminary results for synthesizing speech from electroencephalography (EEG) features where they first predict acoustic features from EEG features and then the speech is reconstructed from the predicted acoustic features us…
External link:
http://arxiv.org/abs/2006.01262
In this paper we demonstrate that it is possible to generate more meaningful electroencephalography (EEG) features from raw EEG features using generative adversarial networks (GAN) to improve the performance of EEG based continuous speech recognition…
External link:
http://arxiv.org/abs/2006.01260
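The GAN entry above generates EEG features adversarially. A minimal sketch of the standard non-saturating GAN objective that such a setup typically minimizes (illustrative assumption; the paper's exact architecture and objective are not given in the snippet):

```python
import numpy as np

def gan_losses(d_real, d_fake):
    """Standard (non-saturating) GAN losses from discriminator probabilities.

    d_real: discriminator probabilities on real EEG feature vectors,
    d_fake: discriminator probabilities on generator outputs.
    Illustrative only; not the paper's stated objective.
    """
    eps = 1e-8  # numerical guard against log(0)
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))  # non-saturating generator loss
    return d_loss, g_loss

# A near-perfect discriminator (real -> ~1, fake -> ~0) drives its own loss
# toward 0 while making the generator's loss large.
d_loss, g_loss = gan_losses(np.array([0.99, 0.98]), np.array([0.01, 0.02]))
```

Training alternates: the discriminator descends `d_loss`, then the generator descends `g_loss`, pushing generated EEG features toward the real-feature distribution.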
In this paper we explore predicting facial or lip video features from electroencephalography (EEG) features and predicting EEG features from recorded facial or lip video frames using deep learning models. The subjects were asked to read out loud Engl…
External link:
http://arxiv.org/abs/2005.11235
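The last entry maps EEG features to video (lip) features and back. As a toy baseline for that mapping, a closed-form ridge regression from synthetic EEG frames to lip-feature frames (dimensions and noise level are assumptions; the paper itself uses deep learning models):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: map 8-dim EEG feature frames to 3-dim lip/video features.
W_true = rng.normal(size=(8, 3))
eeg = rng.normal(size=(100, 8))
video = eeg @ W_true + 0.01 * rng.normal(size=(100, 3))

# Ridge regression, closed form: W = (X^T X + lam I)^-1 X^T Y.
lam = 1e-3
W = np.linalg.solve(eeg.T @ eeg + lam * np.eye(8), eeg.T @ video)

pred = eeg @ W
mse = np.mean((pred - video) ** 2)
```

The reverse direction (video to EEG) is the same regression with the roles of `eeg` and `video` swapped; a deep model replaces the linear map when the relationship is nonlinear.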