Showing 1 - 10 of 13 for search: '"Aishwarya Agrawal"'
Published in:
Nanomedicine (London, England). 17(20)
Nanourchins are multibranched nanoparticles with unique optical properties and surface spikes. Owing to these properties, gold nanourchins have advantages over gold nanoparticles. The most commonly used nanourchins are gold, tungsten, carbon, vanadium…
Published in:
Current drug delivery.
Abstract: In the current era, transdermal delivery of bioactive molecules has become an area of research interest. The transdermal route of administration enables direct entry of bioactive molecules into the systemic circulation with better and e…
Published in:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts.
Published in:
International Journal of Computer Vision. 127:398-414
The problem of visual question answering (VQA) is of significant importance both as a challenging research question and for the rich set of applications it enables. In this context, however, inherent structure in our world and bias in our language te…
Author:
Aishwarya Agrawal, Yash Goyal, Gordon Christie, Dhruv Batra, Ankit Laddha, Stanislaw Antol, Kevin Kochersberger
Published in:
Computer Vision and Image Understanding. 163:101-112
We present an approach to simultaneously perform semantic segmentation and prepositional phrase attachment resolution for captioned images. Some ambiguities in language cannot be resolved without simultaneously reasoning about an associated image. If…
Author:
Jiasen Lu, Margaret Mitchell, C. Lawrence Zitnick, Stanislaw Antol, Dhruv Batra, Aishwarya Agrawal, Devi Parikh
Published in:
International Journal of Computer Vision. 123:4-31
We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping…
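To make the task definition in this abstract concrete (an image plus a free-form question in, a natural language answer out), here is a minimal illustrative sketch. It is not the authors' model; it assumes the Hugging Face transformers visual-question-answering pipeline and the publicly available dandelin/vilt-b32-finetuned-vqa checkpoint, and the image path and question are placeholders.

```python
# Illustrative VQA inference sketch (not the paper's method).
# Assumes `transformers` and `Pillow` are installed.
from transformers import pipeline
from PIL import Image

# Load an off-the-shelf VQA model via the visual-question-answering pipeline.
vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

image = Image.open("kitchen.jpg")        # placeholder: any local image file
question = "What is on the table?"       # free-form natural language question

# The pipeline returns candidate answers with confidence scores.
for prediction in vqa(image=image, question=question, top_k=3):
    print(f"{prediction['answer']}: {prediction['score']:.3f}")
```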
Published in:
CVPR
A number of studies have found that today's Visual Question Answering (VQA) models are heavily driven by superficial correlations in the training data and lack sufficient image grounding. To encourage development of models geared towards the latter,…
Author:
Siddharth S. Padhi, A. Sarkar, Ankur Mandal, Pratap K.J. Mohapatra, Anurag Chaudhary, Aishwarya Agrawal
Published in:
Technology Operation Management. 3:17-31
Consider a situation where a buyer has to procure an item from outside suppliers and is faced with the decision whether to procure the item from a single supplier or from multiple suppliers. Supply risk has become, in recent years, a key consideration…
Author:
Aishwarya Agrawal, Ting-Hao Kenneth Huang, Pushmeet Kohli, C. Lawrence Zitnick, Nasrin Mostafazadeh, Ross Girshick, Jacob Devlin, Dhruv Batra, Lucy Vanderwende, Xiaodong He, Devi Parikh, Francis Ferraro, Margaret Mitchell, Ishan Misra, Michel Galley
Published in:
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
We introduce the first dataset for sequential vision-to-language, and explore how this data may be used for the task of visual storytelling. The first release of this dataset, SIND v.1, includes 81,743 unique photos in 20,211 sequences, aligned to b…
Published in:
EMNLP
Recently, a number of deep-learning-based models have been proposed for the task of Visual Question Answering (VQA). The performance of most models is clustered around 60-70%. In this paper, we propose systematic methods to analyze the behavior of the…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::a212e99db2a14b6edfef47ab8bedb1d5