Learning Event Representations for Temporal Segmentation of Image Sequences by Dynamic Graph Embedding

Author: Herwig Wendt, Mariella Dimiccoli
Contributors: Institut de Robòtica i Informàtica Industrial, Institut de Robòtica i Informàtica Industrial (IRI), Consejo Superior de Investigaciones Científicas [Madrid] (CSIC)-Universitat Politècnica de Catalunya [Barcelona] (UPC), CoMputational imagINg anD viSion (IRIT-MINDS), Institut de recherche en informatique de Toulouse (IRIT), Université Toulouse 1 Capitole (UT1), Université Fédérale Toulouse Midi-Pyrénées-Université Fédérale Toulouse Midi-Pyrénées-Université Toulouse - Jean Jaurès (UT2J)-Université Toulouse III - Paul Sabatier (UT3), Université Fédérale Toulouse Midi-Pyrénées-Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique (Toulouse) (Toulouse INP), Université Fédérale Toulouse Midi-Pyrénées-Université Toulouse 1 Capitole (UT1), Université Fédérale Toulouse Midi-Pyrénées, Centre National de la Recherche Scientifique (CNRS), Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (MINECO/ERDF, EU) through the program Ramón y Cajal, national Spanish projects PID2019-110977GA-I00 and RED2018-102511-T, and 2017 SGR1785
Year of publication: 2020
Subject:
FOS: Computer and information sciences
Computer Science - Machine Learning
Computer science::Automation and control [UPC subject areas]
Graph embedding
Computer science
Computer Vision and Pattern Recognition (cs.CV)
Computer Science - Computer Vision and Pattern Recognition
Machine Learning (stat.ML)
02 engineering and technology
External Data Representation
Clustering
Machine Learning (cs.LG)
Pattern recognition [INSPEC Classification]
[INFO.INFO-TS]Computer Science [cs]/Signal and Image Processing
[INFO.INFO-LG]Computer Science [cs]/Machine Learning [cs.LG]
Semantic similarity
Statistics - Machine Learning
Pattern recognition
0202 electrical engineering, electronic engineering, information engineering
Segmentation
Cluster analysis
graph embedding
Training set
business.industry
Temporal context prediction
Event representations
event representations
Image segmentation
Real image
Temporal segmentation
Computer Graphics and Computer-Aided Design
temporal segmentation
Graph
geometric learning
Geometric learning
Graph (abstract data type)
Embedding
Computer vision
020201 artificial intelligence & image processing
Artificial intelligence
business
temporal context prediction
Software
clustering
Source: IEEE Transactions on Image Processing
IEEE Transactions on Image Processing, Institute of Electrical and Electronics Engineers, 2021, 30, pp.1476-1486. ⟨10.1109/TIP.2020.3044448⟩
Digital.CSIC. Institutional Repository of the CSIC
UPCommons. Open Knowledge Portal of the UPC
Universitat Politècnica de Catalunya (UPC)
ISSN: 1941-0042 (electronic), 1057-7149 (print)
Description: Recently, self-supervised learning has proved effective for learning representations of events suitable for temporal segmentation of image sequences, where events are understood as sets of temporally adjacent images that are semantically perceived as a whole. However, although this approach does not require expensive manual annotations, it is data-hungry and suffers from domain-adaptation problems. As an alternative, in this work we propose a novel approach to learning event representations named Dynamic Graph Embedding (DGE). The assumption underlying our model is that a sequence of images can be represented by a graph that encodes both semantic and temporal similarity. The key novelty of DGE is to jointly learn the graph and its embedding. At its core, DGE works by iterating over two steps: 1) updating the graph representing the semantic and temporal similarity of the data based on the current data representation, and 2) updating the data representation to take into account the current graph structure. The main advantage of DGE over state-of-the-art self-supervised approaches is that it does not require any training set, but instead iteratively learns from the data itself a low-dimensional embedding that reflects its temporal and semantic similarity. Experimental results on two benchmark datasets of real image sequences captured at regular time intervals demonstrate that the proposed DGE leads to event representations that are effective for temporal segmentation. In particular, it achieves robust temporal segmentation on the EDUBSeg and EDUBSeg-Desc benchmark datasets, outperforming the state of the art. Additional experiments on two Human Motion Segmentation benchmark datasets demonstrate the generalization capabilities of the proposed DGE.
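The description above outlines an alternating two-step scheme. The snippet below is a minimal sketch of that idea, not the authors' implementation: the graph construction (cosine similarity between current embeddings, k-nearest-neighbour sparsification, and a fixed weight on temporally adjacent frames), the embedding update (one step of neighbourhood averaging, i.e. Laplacian-style smoothing), and all function names such as build_graph and dynamic_graph_embedding are illustrative assumptions.

```python
# Minimal sketch of the alternating scheme described above; NOT the authors' code.
# Assumed details: cosine-similarity kNN graph, extra weight on temporal neighbours,
# and embedding refinement by neighbourhood averaging (Laplacian-style smoothing).
import numpy as np


def build_graph(Z, k=5, temporal_weight=0.5):
    """Step 1: adjacency encoding semantic (cosine) and temporal similarity."""
    n = Z.shape[0]
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-8)
    S = Zn @ Zn.T                              # pairwise cosine similarity
    np.fill_diagonal(S, -np.inf)               # no self-loops
    A = np.zeros((n, n))
    for i in range(n):                         # keep the k most similar frames
        nbrs = np.argsort(S[i])[-k:]
        A[i, nbrs] = np.maximum(S[i, nbrs], 0.0)
    A = np.maximum(A, A.T)                     # symmetrize
    idx = np.arange(n - 1)                     # temporally adjacent frames stay connected
    A[idx, idx + 1] = A[idx + 1, idx] = np.maximum(A[idx, idx + 1], temporal_weight)
    return A


def update_embedding(Z, A, alpha=0.5):
    """Step 2: pull each frame towards the weighted mean of its graph neighbours."""
    W = A / (A.sum(axis=1, keepdims=True) + 1e-8)
    return (1.0 - alpha) * Z + alpha * (W @ Z)


def dynamic_graph_embedding(X, n_iters=20):
    """Alternate graph construction and representation refinement."""
    Z = X.copy()
    for _ in range(n_iters):
        A = build_graph(Z)           # graph from current representation
        Z = update_embedding(Z, A)   # representation from current graph
    return Z, A


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "image sequence": three consecutive events with different feature means.
    X = np.vstack([rng.normal(m, 1.0, size=(30, 16)) for m in (0.0, 3.0, -3.0)])
    Z, A = dynamic_graph_embedding(X)
    print(Z.shape, A.shape)          # (90, 16) (90, 90)
```

The loop mirrors the two steps quoted in the description; on such toy data, frames belonging to the same event should drift towards similar representations, after which a clustering or change-point step can recover the event boundaries. The actual method learns a low-dimensional embedding rather than smoothing the input features directly.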
Accepted in IEEE Transactions on Image Processing, 2020. To appear
Database: OpenAIRE