Insight Centre for Data Analytics (DCU) at TRECVid 2014: instance search and semantic indexing tasks
Author: | McGuinness, K., Mohedano, E., Zhang, Z., Hu, F., Albatal, R., Gurrin, C., O'Connor, N. E., Smeaton, A. F., Salvador, A., Giró-i-Nieto, X., Ventura, C. |
Contributors: | Universitat Politècnica de Catalunya. Departament de Teoria del Senyal i Comunicacions, Universitat Politècnica de Catalunya. GPI - Grup de Processament d'Imatge i Vídeo |
Language: | English |
Year of publication: | 2014 |
Subject: | Signal processing; Digital video; Pattern recognition systems; Image processing; Machine learning; Information retrieval; Semantic computing; Multimedia systems; Semantic Web; Enginyeria de la telecomunicació::Processament del senyal::Processament de la imatge i del senyal vídeo [Àrees temàtiques de la UPC]; Imatge, so i multimèdia::Creació multimèdia::Vídeo digital [Àrees temàtiques de la UPC] |
Source: | Scopus-Elsevier; UPCommons. Portal del coneixement obert de la UPC; Universitat Politècnica de Catalunya (UPC); Recercat. Dipòsit de la Recerca de Catalunya; Universitat Jaume I; McGuinness, Kevin ORCID: 0000-0003-1336-6477 |
Description: | Insight-DCU participated in the instance search (INS) and semantic indexing (SIN) tasks in 2014. Two very different approaches were submitted for instance search: one based on features extracted using pre-trained deep convolutional neural networks (CNNs), and another based on local SIFT features, large-vocabulary visual bag-of-words aggregation, inverted index-based lookup, and geometric verification on the top-N retrieved results. Two interactive runs and two automatic runs were submitted; the best interactive run achieved a mAP of 0.135 and the best automatic run a mAP of 0.12. The semantic indexing runs were also based on convolutional neural network features, combined with Support Vector Machine (SVM) classifiers using linear and RBF kernels. One run was submitted to the main task, two to the no-annotation task, and one to the progress task. Training data for the no-annotation task was gathered from Google Images and ImageNet. The main task run achieved a mAP of 0.086, the best no-annotation run performed close to the main run with a mAP of 0.080, and the progress run achieved a mAP of 0.043. |
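The SIN setup described above (per-concept SVM classifiers with linear and RBF kernels trained on CNN features) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pre-trained CNN feature extractor and TRECVid keyframes are assumed to exist, so randomly generated vectors stand in for real descriptors, and scikit-learn's `SVC` stands in for whatever SVM library was actually used.

```python
# Hedged sketch of concept classification with linear and RBF-kernel SVMs.
# Hypothetical stand-in data: random 128-D vectors in place of CNN features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))          # 200 "keyframes", 128-D descriptors
w = rng.normal(size=128)
y = (X @ w > 0).astype(int)              # synthetic binary concept labels

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel)
    clf.fit(X[:150], y[:150])            # train on the first 150 samples
    acc = clf.score(X[150:], y[150:])    # evaluate on the held-out 50
    print(f"{kernel} SVM held-out accuracy: {acc:.2f}")
```

In practice one such binary classifier would be trained per semantic concept, and shots ranked by the classifier's decision score to produce the submitted run.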
Database: | OpenAIRE |
External link: |