Showing 1 - 10 of 6,406 for search: '"Alikhani A"'
This paper presents the Large Wireless Model (LWM) -- the world's first foundation model for wireless channels. Designed as a task-agnostic model, LWM generates universal, rich, contextualized channel embeddings (features) that potentially enhance…
External link: http://arxiv.org/abs/2411.08872
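The LWM entry above describes turning raw wireless channels into task-agnostic embeddings. As a rough illustration of that idea only (not the published LWM architecture), the following PyTorch sketch patchifies a complex channel matrix and pools transformer-encoder outputs into a fixed-size embedding; all dimensions and the patching scheme are assumptions.

```python
# Hypothetical sketch: embed a complex wireless channel matrix with a small
# transformer encoder. Sizes and patching are illustrative assumptions,
# not the published LWM design.
import torch
import torch.nn as nn

class ChannelEmbedder(nn.Module):
    def __init__(self, n_antennas=32, patch_size=16, d_model=64):
        super().__init__()
        # One patch covers `patch_size` subcarriers; real and imaginary parts
        # are stacked, so a patch holds 2 * n_antennas * patch_size values.
        self.proj = nn.Linear(2 * n_antennas * patch_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.patch_size = patch_size

    def forward(self, h):
        # h: complex channel, shape (batch, n_antennas, n_subcarriers)
        x = torch.cat([h.real, h.imag], dim=1)           # (B, 2*A, S)
        b, c, s = x.shape
        x = x.view(b, c, s // self.patch_size, self.patch_size)
        x = x.permute(0, 2, 1, 3).reshape(b, s // self.patch_size, -1)
        tokens = self.proj(x)                            # (B, n_patches, d_model)
        z = self.encoder(tokens)
        return z.mean(dim=1)                             # pooled channel embedding

h = torch.randn(4, 32, 64, dtype=torch.complex64)       # toy OFDM channels
emb = ChannelEmbedder()(h)
print(emb.shape)                                         # torch.Size([4, 64])
```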
Effective human-machine collaboration requires machine learning models to externalize uncertainty, so users can reflect and intervene when necessary. For language models, these representations of uncertainty may be impacted by sycophancy bias…
External link: http://arxiv.org/abs/2410.14746
Author: Sicilia, Anthony; Alikhani, Malihe
Conversation forecasting tasks a model with predicting the outcome of an unfolding conversation. For instance, it can be applied in social media moderation to predict harmful user behaviors before they occur, allowing for preventative interventions.
External link: http://arxiv.org/abs/2410.14744
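The conversation-forecasting entry above defines the task: predict a conversation's eventual outcome from the turns observed so far. A minimal sketch of that setup, using a toy TF-IDF plus logistic-regression forecaster on invented examples; the paper's actual models and datasets are not reproduced here.

```python
# Illustrative-only sketch of conversation forecasting: from the turns seen
# so far, estimate the probability of a binary harmful outcome. Data below
# is invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example: a conversation prefix (turns joined) and whether the full
# conversation eventually derailed (1) or stayed civil (0).
prefixes = [
    "A: nice post! B: thanks, glad you liked it",
    "A: this is wrong B: you clearly didn't read it A: you're an idiot",
    "A: can you clarify point 2? B: sure, here is more detail",
    "A: delete this garbage B: make me A: watch your tone",
]
derailed = [0, 1, 0, 1]

forecaster = make_pipeline(TfidfVectorizer(), LogisticRegression())
forecaster.fit(prefixes, derailed)

# Forecast the outcome of an unfolding conversation from its prefix,
# enabling a preventative intervention before harm occurs.
p = forecaster.predict_proba(["A: I disagree B: then you're a fool"])[0, 1]
print(f"P(derailment) = {p:.2f}")
```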
When assisting people in daily tasks, robots need to accurately interpret visual cues and respond effectively in diverse safety-critical situations, such as sharp objects on the floor. In this context, we present M-CoDAL, a multimodal-dialogue system…
External link: http://arxiv.org/abs/2410.14141
Author: Cheng, Qi; İnan, Mert; Mbarki, Rahma; Grmek, Grace; Choi, Theresa; Sun, Yiming; Persaud, Kimele; Wang, Jenny; Alikhani, Malihe
Understanding uncertainty plays a critical role in achieving common ground (Clark et al., 1983). This is especially important for multimodal AI systems that collaborate with users to solve a problem or guide the user through a challenging concept…
External link: http://arxiv.org/abs/2410.14050
We introduce a goal-oriented conversational AI system enhanced with American Sign Language (ASL) instructions, presenting the first implementation of such a system on a worldwide multimodal conversational AI platform. Accessible through a touch-based…
External link: http://arxiv.org/abs/2410.14026
Ensuring that Large Language Models (LLMs) generate text representative of diverse sub-populations is essential, particularly when key concepts related to under-represented groups are scarce in the training data. We address this challenge with a novel…
External link: http://arxiv.org/abs/2410.13641
Ensuring robust safety measures across a wide range of scenarios is crucial for user-facing systems. While Large Language Models (LLMs) can generate valuable data for safety measures, they often exhibit distributional biases, focusing on common scenarios…
External link: http://arxiv.org/abs/2410.11114
Ensuring that the benefits of sign language technologies are distributed equitably among all community members is crucial. Thus, it is important to address potential biases and inequities that may arise from the design or use of these resources…
External link: http://arxiv.org/abs/2410.05206
Author: Sicilia, Anthony; Alikhani, Malihe
Typically, when evaluating Theory of Mind, we consider the beliefs of others to be binary: held or not held. But what if someone is unsure about their own beliefs? How can we quantify this uncertainty? We propose a new suite of tasks, challenging language models…
External link: http://arxiv.org/abs/2409.14986
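The last entry above asks how to quantify graded, non-binary beliefs. One standard way to score probabilistic belief reports, shown here as a hedged sketch with invented numbers rather than the paper's actual task suite, is the Brier score:

```python
# Hypothetical sketch of scoring graded (non-binary) belief attributions:
# instead of "holds / does not hold", a model reports a probability that an
# agent holds a belief, and we score it against 0/1 labels. The probes and
# probabilities below are invented for illustration.
def brier_score(probs, outcomes):
    """Mean squared error between reported probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Probability the model assigns to "the agent believes X", per probe.
reported = [0.9, 0.2, 0.6, 0.75]
# Ground-truth belief labels from the task annotations (1 = held).
actual = [1, 0, 1, 1]

print(f"Brier score: {brier_score(reported, actual):.3f}")  # lower is better
```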