Showing 1 - 10 of 36
for search: '"Yvette Graham"'
Published in:
Entropy, Vol 24, Iss 11, p 1514 (2022)
Question Generation (QG) aims to automate the task of composing questions for a passage with a set of chosen answers found within the passage. In recent years, the introduction of neural generation models has resulted in substantial improvements of a…
External link:
https://doaj.org/article/701bc11e08014af79e40975db486076f
Published in:
PLoS ONE, Vol 13, Iss 9, p e0202789 (2018)
We present Direct Assessment, a method for manually assessing the quality of automatically-generated captions for video. Evaluating the accuracy of video captions is particularly difficult because for any given video clip there is no definitive ground…
External link:
https://doaj.org/article/dfb055f536f947ea96873e4f936889a9
Author:
Naushad Alam, Yvette Graham
Published in:
Multimedia Tools and Applications.
In this extended paper, we describe our lifelog retrieval system called Memento which participated in the 2021 Lifelog Search Challenge in detail. Memento leverages semantic representations of images and textual queries projected into a common latent…
Published in:
2022 IEEE International Conference on Image Processing (ICIP).
Author:
Ly-Duyen Tran, Naushad Alam, Yvette Graham, Linh Khanh Vo, Nghiem Tuong Diep, Binh Nguyen, Liting Zhou, Cathal Gurrin
Published in:
Tran, Ly Duyen ORCID: 0000-0002-9597-1832 , Alam, Naushad ORCID: 0000-0002-3144-5622 , Vo, Linh Khanh, Diep, Nghiem Tuong, Nguyen, Binh ORCID: 0000-0001-5249-9702 , Graham, Yvette ORCID: 0000-0001-6741-4855 , Zhou, Liting ORCID: 0000-0002-7778-8743 and Gurrin, Cathal ORCID: 0000-0003-2903-3968 (2022) An Exploration into the Benefits of the CLIP model for Lifelog Retrieval. In: International Conference on Content-Based Multimedia Indexing, 14–16 Sept 2022, Graz, Austria. ISBN 978-1-4503-9720-9
In this paper, we attempt to fine-tune the CLIP (Contrastive Language-Image Pre-Training) model on the Lifelog Question Answering dataset (LLQA) to investigate retrieval performance of the fine-tuned model over the zero-shot baseline model. We train…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::c5565ced15fb07aa8c03d09f741d144c
http://doras.dcu.ie/27842/
Published in:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Evaluation of open-domain dialogue systems is highly challenging and development of better techniques is highlighted time and again as desperately needed. Despite substantial efforts to carry out reliable live evaluation of systems in recent competitions…
Author:
Yvette Graham
Published in:
AI and the Future of Skills, Volume 1 ISBN: 9789264485303
This chapter details evaluation techniques in Natural Language Processing, a challenging sub-discipline of artificial intelligence (AI). It highlights proven methods to provide both fair and replicable results for evaluation of system performance, as…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::2b9c062b5fb7eda171e528031e490e55
https://doi.org/10.1787/fcd5e244-en
Published in:
COLING
Lyu, Chenyang, Foster, Jennifer ORCID: 0000-0002-7789-4853 and Graham, Yvette (2020) Improving document-level sentiment analysis with user and product context. In: Proceedings of the 28th International Conference on Computational Linguistics, 8-13 Dec 20, Barcelona, Spain (Online).
Past work that improves document-level sentiment analysis by encoding user and product information has been limited to considering only the text of the current review. We investigate incorporating additional review text available at the time of sentiment…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::a1533e69465e6fdd246d0002a4d90bca
http://arxiv.org/abs/2011.09210
Published in:
CHIIR
Evaluation in non-factoid question answering tasks generally takes the form of computation of automatic metric scores for systems on a sample test set of questions against human-generated reference answers. Conclusions drawn from the scores produced…
Published in:
EMNLP (1)
The term translationese has been used to describe features of translated text, and in this paper, we provide detailed analysis of potential adverse effects of translationese on machine translation evaluation. Our analysis shows differences in conclusions…