Insight Centre for Data Analytics (DCU) at TRECVid 2014: instance search and semantic indexing tasks

Author: McGuinness, K., Mohedano, E., Zhang, Z., Hu, Feiyan, Albatal, R., Gurrin, C., O'Connor, N. E., Smeaton, A. F., Salvador, A., Giró-i-Nieto, X., Ventura, C.
Contributors: Universitat Politècnica de Catalunya. Departament de Teoria del Senyal i Comunicacions, Universitat Politècnica de Catalunya. GPI - Grup de Processament d'Imatge i Vídeo
Language: English
Year of publication: 2014
Subject:
Source: Scopus-Elsevier
UPCommons. Portal del coneixement obert de la UPC
Universitat Politècnica de Catalunya (UPC)
Recercat. Dipósit de la Recerca de Catalunya
Universitat Jaume I
McGuinness, Kevin ORCID: 0000-0003-1336-6477, Mohedano, Eva, Zhang, Zhenxing, Hu, Feiyan ORCID: 0000-0001-7451-6438, Albatal, Rami ORCID: 0000-0002-9269-8578, Gurrin, Cathal ORCID: 0000-0003-2903-3968, O'Connor, Noel E. ORCID: 0000-0002-4033-9135, Smeaton, Alan F. ORCID: 0000-0003-1028-8389, Salvador, Amaia, Giró-i-Nieto, Xavier ORCID: 0000-0002-9935-5332 and Ventura, Carles (2014) Insight Centre for Data Analytics (DCU) at TRECVid 2014: instance search and semantic indexing tasks. In: TRECVid 2014, 8-12 Nov 2014, Orlando, FL.
Description: Insight-DCU participated in the instance search (INS) and semantic indexing (SIN) tasks in 2014. Two very different approaches were submitted for instance search: one based on features extracted using pre-trained deep convolutional neural networks (CNNs), and another based on local SIFT features, large-vocabulary visual bag-of-words aggregation, inverted-index lookup, and geometric verification on the top-N retrieved results. Two interactive runs and two automatic runs were submitted; the best interactive run achieved a mAP of 0.135 and the best automatic run 0.12. Our semantic indexing runs were also based on convolutional neural network features, combined with Support Vector Machine classifiers with linear and RBF kernels. One run was submitted to the main task, two to the no-annotation task, and one to the progress task. Data for the no-annotation task was gathered from Google Images and ImageNet. The main-task run achieved a mAP of 0.086, the best no-annotation run performed close to it with a mAP of 0.080, and the progress run achieved 0.043.
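The semantic indexing setup described above (pre-extracted CNN features scored by SVM classifiers with linear and RBF kernels) can be sketched as follows. This is an illustrative example only, not the authors' code: the synthetic feature vectors, dimensions, and scikit-learn classifier stand in for the actual CNN features and training data used in the runs.

```python
# Hedged sketch: per-concept SVM classifiers (linear vs. RBF kernel)
# over precomputed image feature vectors, in the spirit of the SIN runs.
# All data here is synthetic; the real runs used CNN features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for CNN feature vectors of positive and negative keyframes
# for one semantic concept (16-dim features, 50 examples per class).
pos = rng.normal(loc=1.0, scale=0.5, size=(50, 16))
neg = rng.normal(loc=-1.0, scale=0.5, size=(50, 16))
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [0] * 50)

# One classifier per kernel, mirroring the linear vs. RBF comparison.
linear_svm = SVC(kernel="linear").fit(X, y)
rbf_svm = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Score unseen shots; ranking by decision score is what a mAP-style
# evaluation would consume.
test = rng.normal(loc=1.0, scale=0.5, size=(5, 16))
linear_scores = linear_svm.decision_function(test)
rbf_scores = rbf_svm.decision_function(test)
```

In practice one such classifier would be trained per concept, and shots ranked by `decision_function` score before computing mean average precision.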
Database: OpenAIRE