Showing 1 - 10 of 10 for search: '"Burachas, Giedrius"'
Self-supervised learning methods overcome the key bottleneck for building more capable AI: limited availability of labeled data. However, one of the drawbacks of self-supervised architectures is that the representations that they learn are implicit a…
External link:
http://arxiv.org/abs/2207.02972
Author:
Alipour, Kamran, Ray, Arijit, Lin, Xiao, Cogswell, Michael, Schulze, Jurgen P., Yao, Yi, Burachas, Giedrius T.
In the domain of Visual Question Answering (VQA), studies have shown improvement in users' mental model of the VQA system when they are exposed to examples of how these systems answer certain Image-Question (IQ) pairs. In this work, we show that show…
External link:
http://arxiv.org/abs/2110.06863
Author:
Ray, Arijit, Cogswell, Michael, Lin, Xiao, Alipour, Kamran, Divakaran, Ajay, Yao, Yi, Burachas, Giedrius
Attention maps, a popular heatmap-based explanation method for Visual Question Answering (VQA), are supposed to help users understand the model by highlighting portions of the image/question used by the model to infer answers. However, we see that us…
External link:
http://arxiv.org/abs/2103.14712
Few-Shot Learning (FSL) aims to improve a model's generalization capability in low data regimes. Recent FSL works have made steady progress via metric learning, meta learning, representation learning, etc. However, FSL remains challenging due to the…
External link:
http://arxiv.org/abs/2011.10082
Explainability is one of the key elements for building trust in AI systems. Among numerous attempts to make AI explainable, quantifying the effect of explanations remains a challenge in conducting human-AI collaborative tasks. Aside from the ability…
External link:
http://arxiv.org/abs/2007.00900
Published in:
Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020) co-located with 34th AAAI Conference on Artificial Intelligence (AAAI 2020), New York, USA, Feb 7, 2020
Explainability and interpretability of AI models is an essential factor affecting the safety of AI. While various explainable AI (XAI) approaches aim at mitigating the lack of transparency in deep networks, the evidence of the effectiveness of these…
External link:
http://arxiv.org/abs/2003.00431
While models for Visual Question Answering (VQA) have steadily improved over the years, interacting with one quickly reveals that these models lack consistency. For instance, if a model answers "red" to "What color is the balloon?", it might answer "…
External link:
http://arxiv.org/abs/1909.04696
Published in:
2019 AAAI Conference on Human Computation and Crowdsourcing
While there have been many proposals on making AI algorithms explainable, few have attempted to evaluate the impact of AI-generated explanations on human performance in conducting human-AI collaborative tasks. To bridge the gap, we propose a Twenty-Q…
External link:
http://arxiv.org/abs/1904.03285
In this paper, we present a novel approach for the task of eXplainable Question Answering (XQA), i.e., generating natural language (NL) explanations for the Visual Question Answering (VQA) problem. We generate NL explanations comprising the eviden…
External link:
http://arxiv.org/abs/1902.05715
Published in:
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
Few-Shot Learning (FSL) aims to improve a model's generalization capability in low data regimes. Recent FSL works have made steady progress via metric learning, meta learning, representation learning, etc. However, FSL remains challenging due to the…