Showing 1 - 5
of 5
for the search: '"Mattia Soldan"'
Author:
Mattia Soldan, Alejandro Pardo, Juan Leon Alcazar, Fabian Caba Heilbron, Chen Zhao, Silvio Giancola, Bernard Ghanem
The recent and increasing interest in video-language research has driven the development of large-scale datasets that enable data-intensive machine learning techniques. In comparison, limited effort has been made at assessing the fitness of these dat…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::40520fe8719498d199407300a2c863f2
http://arxiv.org/abs/2112.00431
Published in:
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW).
Grounding language queries in videos aims at identifying the time interval (or moment) semantically relevant to a language query. The solution to this challenging task demands understanding videos' and queries' semantic content and the fine-grained r…
Author:
Mauro Valorani, Riccardo Malpica Galassi, Lorenzo Angelilli, Pietro Paolo Ciottoli, Mattia Soldan, Zhen Lu, Hong G. Im, Francisco E. Hernández Pérez
Published in:
AIAA Scitech 2021 Forum.
The authors acknowledge the support of King Abdullah University of Science and Technology (KAUST). Computational resources were provided by the KAUST Supercomputing Laboratory (KSL). This project has received funding from the European Research Counci…
Author:
Marco Romagnoni, Vincenzo Guidi, Laura Bandiera, Davide De Salvador, Andrea Mazzolari, Francesco Sgarbossa, Mattia Soldani, Alexei Sytov, Melissa Tamisari
Published in:
Crystals, Vol 12, Iss 9, p 1263 (2022)
Bent crystals are widely used as optics for X-rays, but via the phenomenon of planar channeling they may also act as waveguides for relativistic charged-particle beams, outperforming some of the traditional technologies currently employed. A physica…
External link:
https://doaj.org/article/0e0c2d6ddfd04b5bbef93f9d51272f57
Author:
Kevin Qinghong Lin, Alex Jinpeng Wang, Mattia Soldan, Michael Wray, Rui Yan, Eric Zhongcong Xu, Difei Gao, Dima Damen, Bernard Ghanem, Wei Liu, Mike Zheng Shou
Published in:
University of Bristol-PURE
Lin, K Q, Wang, A J, Soldan, M, Wray, M, Yan, R, Xu, E Z, Gao, D, Damen, D, Ghanem, B, Liu, W & Shou, M Z 2022, ' Egocentric Video-Language Pretraining ', Paper presented at Neural Information Processing Systems (NeurIPS), 6/12/20-12/12/20 .
Video-Language Pretraining (VLP), which aims to learn transferable representations to advance a wide range of video-text downstream tasks, has recently received increasing attention. Best performing works rely on large-scale, 3rd-person video-text da…
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::07c33b57554daf50f41f4a928623fb78
https://research-information.bris.ac.uk/en/publications/8739b625-0e5c-4e5e-9e14-de7b1f1aa12e