Showing 1 - 10 of 18
for search: '"Trisha Mittal"'
Published in:
Scientific Reports, Vol 14, Iss 1, Pp 1-13 (2024)
Abstract: Increasing use of social media has resulted in many detrimental effects on youth. With very little control over multimodal content consumed on these platforms and the false narratives conveyed by these multimodal social media postings, such …
External link:
https://doaj.org/article/0c52c04d57d849d4a1f6976804eee08e
Published in:
IEEE MultiMedia. 28:67-75
We present a learning model for multimodal context-aware emotion recognition. Our approach combines multiple human co-occurring modalities (such as facial, audio, textual, and pose/gaits) and two interpretations of context. To gather and encode backg …
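The entry describes a late-fusion recipe: per-modality encoders plus a context encoder feeding a classifier. Below is a minimal, hedged sketch in PyTorch; every dimension, module name, and the plain concatenation fusion are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultimodalContextFusion(nn.Module):
    """Toy late fusion of co-occurring modalities plus a context feature."""
    def __init__(self, face_dim=128, audio_dim=64, text_dim=300,
                 pose_dim=48, context_dim=256, num_emotions=6):
        super().__init__()
        # Project each modality (and the context) into a shared space.
        self.face = nn.Linear(face_dim, 128)
        self.audio = nn.Linear(audio_dim, 128)
        self.text = nn.Linear(text_dim, 128)
        self.pose = nn.Linear(pose_dim, 128)
        self.context = nn.Linear(context_dim, 128)
        self.classifier = nn.Sequential(nn.ReLU(),
                                        nn.Linear(5 * 128, num_emotions))

    def forward(self, face, audio, text, pose, context):
        fused = torch.cat([self.face(face), self.audio(audio),
                           self.text(text), self.pose(pose),
                           self.context(context)], dim=-1)
        return self.classifier(fused)  # emotion logits
```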
Author:
Trisha Mittal, Tianrui Guan, Dinesh Manocha, Uttaran Bhattacharya, Srujan Panuganti, Rohan Chandra, Aniket Bera
Published in:
IEEE Robotics and Automation Letters. 5:4882-4890
We present a novel approach for traffic forecasting in urban traffic scenarios using a combination of spectral graph analysis and deep learning. We predict both the low-level information (future trajectories) and the high-level information (ro …
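As a rough illustration of the "spectral graph analysis + deep learning" combination, the sketch below derives spectral node features from a traffic-interaction graph; the graph itself, the feature count k, and the idea of feeding these features to a downstream sequence model are illustrative assumptions, not the paper's method.

```python
import numpy as np

def spectral_features(adjacency: np.ndarray, k: int = 2) -> np.ndarray:
    """Return the k smallest graph-Laplacian eigenvectors as node features."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    _, eigvecs = np.linalg.eigh(laplacian)  # symmetric matrix -> eigh
    return eigvecs[:, :k]                   # shape: (num_agents, k)

# Example: three vehicles on a chain-shaped proximity graph; the per-agent
# spectral embeddings would be consumed per timestep by a forecasting net.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
print(spectral_features(adj).shape)  # (3, 2)
```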
Author:
Tanmay Randhavane, Aniket Bera, Uttaran Bhattacharya, Trisha Mittal, Dinesh Manocha, Rohan Chandra
Published in:
AAAI
We present a novel classifier network called STEP to classify perceived human emotion from gaits, based on a Spatial Temporal Graph Convolutional Network (ST-GCN) architecture. Given an RGB video of an individual walking, our formulation implicitly …
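To make the ST-GCN idea concrete, here is a compact, hedged sketch: a 1x1 spatial convolution mixed through a stand-in skeleton adjacency, a temporal convolution, and a pooled classification head. The joint count, layer widths, identity adjacency, and four emotion classes are all illustrative assumptions, not STEP itself.

```python
import torch
import torch.nn as nn

class TinySTGCN(nn.Module):
    def __init__(self, joints=16, in_ch=3, hidden=32, classes=4):
        super().__init__()
        # Identity stands in for the normalized skeleton adjacency.
        self.register_buffer("adj", torch.eye(joints))
        self.spatial = nn.Conv2d(in_ch, hidden, kernel_size=1)
        self.temporal = nn.Conv2d(hidden, hidden, kernel_size=(9, 1),
                                  padding=(4, 0))
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):  # x: (batch, channels, time, joints)
        x = torch.einsum("bctj,jk->bctk", self.spatial(x), self.adj)
        x = torch.relu(self.temporal(x))
        return self.head(x.mean(dim=(2, 3)))  # pool over time and joints

logits = TinySTGCN()(torch.randn(2, 3, 30, 16))  # two 30-frame gait clips
```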
Published in:
AAAI
We present M3ER, a learning-based method for emotion recognition from multiple input modalities. Our approach combines cues from multiple co-occurring modalities (such as face, text, and speech) and is also more robust than other methods to sensor no …
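The robustness claim suggests some form of per-modality reliability check before fusion. The sketch below gates each modality by a quality score; the scores, the hard 0/1 gating, and the threshold are illustrative assumptions standing in for the paper's actual modality-check step.

```python
import numpy as np

def gated_fusion(features: dict, quality: dict, threshold: float = 0.5):
    """Concatenate modality features, zeroing out unreliable ones."""
    parts = []
    for name, feat in features.items():
        weight = 1.0 if quality[name] >= threshold else 0.0
        parts.append(weight * feat)
    return np.concatenate(parts)

feats = {"face": np.ones(4), "text": np.ones(4), "speech": np.ones(4)}
scores = {"face": 0.9, "text": 0.8, "speech": 0.2}  # noisy audio sensor
print(gated_fusion(feats, scores))  # the speech block is zeroed out
```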
Published in:
IEEE Transactions on Pattern Analysis and Machine Intelligence. 42:221-231
The ability of intelligent agents to play games in human-like fashion is popularly considered a benchmark of progress in Artificial Intelligence. In our work, we introduce the first computational model aimed at Pictionary, the popular word-guessing s …
Author:
Trisha Mittal, Vishy Swaminathan, Somdeb Sarkhel, Ritwik Sinha, David Arbour, Saayan Mitra, Dinesh Manocha
Published in:
2021 IEEE International Symposium on Multimedia (ISM).
Published in:
CVPR
We present Affect2MM, a learning method for time-series emotion prediction for multimedia content. Our goal is to automatically capture the varying emotions depicted by characters in real-life human-centric situations and behaviors. We use the ideas …
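The abstract is cut off, but the stated task, per-timestep emotion prediction over multimedia, can be illustrated with a tiny recurrent model mapping scene features to one emotion value per step. The feature size, GRU width, and single valence-style output are illustrative assumptions, not Affect2MM's actual model.

```python
import torch
import torch.nn as nn

class SceneEmotionRNN(nn.Module):
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)  # one emotion value per scene

    def forward(self, scenes):  # scenes: (batch, time, feat_dim)
        h, _ = self.rnn(scenes)
        return self.out(h).squeeze(-1)  # (batch, time)

preds = SceneEmotionRNN()(torch.randn(1, 20, 512))  # 20 scenes of a film
```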
Published in:
ICASSP
We present a new approach that we call AdaGTCN for identifying human reader intent from Electroencephalogram (EEG) and Eye Movement (EM) data in order to help differentiate between normal reading and task-oriented reading. Understanding the physiol … (a toy sketch follows the links below)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::8cbb08b529fb1a558cf3a40e25da5003
http://arxiv.org/abs/2102.11922
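As noted above, a toy sketch of the task setup: classify reading mode from parallel EEG and eye-movement streams. The 1-D temporal convolutions, channel counts, and plain concatenation fusion are illustrative assumptions, not the AdaGTCN architecture itself.

```python
import torch
import torch.nn as nn

class ReadingIntentNet(nn.Module):
    def __init__(self, eeg_ch=32, em_ch=4, hidden=16):
        super().__init__()
        self.eeg = nn.Conv1d(eeg_ch, hidden, kernel_size=5, padding=2)
        self.em = nn.Conv1d(em_ch, hidden, kernel_size=5, padding=2)
        self.head = nn.Linear(2 * hidden, 2)  # normal vs task-oriented

    def forward(self, eeg, em):  # each stream: (batch, channels, time)
        z = torch.cat([self.eeg(eeg).mean(-1),
                       self.em(em).mean(-1)], dim=-1)
        return self.head(z)

logits = ReadingIntentNet()(torch.randn(1, 32, 250),  # EEG window
                            torch.randn(1, 4, 250))   # gaze features
```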
Author:
Aniket Bera, Pooja Guhan, Niall L. Williams, Uttaran Bhattacharya, Dinesh Manocha, Nicholas Rewkowski, Trisha Mittal
Published in:
ISMAR
We present a novel autoregression network to generate virtual agents that convey various emotions through their walking styles or gaits. Given the 3D pose sequences of a gait, our network extracts pertinent movement features and affective features fr …
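The autoregressive generation loop described here can be sketched as: condition a recurrent state on an emotion label, then repeatedly predict the next pose and feed it back in. The pose dimension, GRU cell, and four emotion classes are illustrative assumptions about the setup, not the paper's network.

```python
import torch
import torch.nn as nn

class GaitGenerator(nn.Module):
    def __init__(self, pose_dim=48, emo_classes=4, hidden=64):
        super().__init__()
        self.emo = nn.Embedding(emo_classes, hidden)
        self.rnn = nn.GRUCell(pose_dim, hidden)
        self.next_pose = nn.Linear(hidden, pose_dim)

    def forward(self, seed_pose, emotion, steps=30):
        h = self.emo(emotion)            # emotion sets the initial state
        pose, out = seed_pose, []
        for _ in range(steps):           # autoregressive rollout
            h = self.rnn(pose, h)
            pose = self.next_pose(h)
            out.append(pose)
        return torch.stack(out, dim=1)   # (batch, steps, pose_dim)

gait = GaitGenerator()(torch.randn(1, 48), torch.tensor([2]), steps=30)
```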