Next-active-object prediction from egocentric videos

Author: Sebastiano Battiato, Kristen Grauman, Antonino Furnari, Giovanni Maria Farinella
Year of publication: 2017
Subject:
FOS: Computer and information sciences
Computer Vision and Pattern Recognition (cs.CV)
Artificial Intelligence (cs.AI)
Machine Learning (cs.LG)
Computer vision
Egocentric vision
First person
Next-active-object
Object interaction
Atomic actions
Forecasting
Wearable systems
Artificial intelligence
Signal Processing
Media Technology
Electrical and Electronic Engineering
Source: Journal of Visual Communication and Image Representation. 49:401-411
ISSN: 1047-3203
DOI: 10.1016/j.jvcir.2017.10.004
Description: Although First Person Vision systems can sense the environment from the user's perspective, they are generally unable to predict the user's intentions and goals. Since human activities can be decomposed into atomic actions and interactions with objects, intelligent wearable systems would benefit from the ability to anticipate user-object interactions. Although this task is not trivial, the First Person Vision paradigm can provide important cues to address the challenge. We propose to exploit the dynamics of the scene to recognize next-active-objects before an object interaction begins. We train a classifier to discriminate trajectories leading to an object activation from all the others, and forecast next-active-objects by analyzing fixed-length trajectory segments within a temporal sliding window. The proposed method compares favorably with respect to several baselines on the Activities of Daily Living (ADL) egocentric dataset, which comprises 10 hours of video acquired by 20 subjects while performing unconstrained interactions with several objects.
Database: OpenAIRE
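
The description above outlines the core idea: a classifier scores fixed-length trajectory segments, extracted with a temporal sliding window, to decide whether an object's trajectory is leading toward an activation. The following is a minimal sketch of that idea only; the segment length, displacement-based features, and linear SVM classifier are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of sliding-window trajectory classification for
# next-active-object forecasting. Segment length, feature layout, and
# classifier choice are assumptions for illustration.
import numpy as np
from sklearn.svm import LinearSVC

SEGMENT_LEN = 15  # assumed fixed segment length, in frames


def segments(trajectory, stride=1):
    """Yield fixed-length segments from a (T, 2) array of object positions."""
    for start in range(0, len(trajectory) - SEGMENT_LEN + 1, stride):
        yield trajectory[start:start + SEGMENT_LEN]


def featurize(segment):
    """Flatten per-frame displacements into a feature vector (assumed representation)."""
    return np.diff(segment, axis=0).ravel()


def train(positive_trajs, negative_trajs):
    """Fit a classifier that separates segments preceding an object activation
    (positive) from all other segments (negative)."""
    X, y = [], []
    labeled = [(t, 1) for t in positive_trajs] + [(t, 0) for t in negative_trajs]
    for traj, label in labeled:
        for seg in segments(traj):
            X.append(featurize(seg))
            y.append(label)
    clf = LinearSVC()
    clf.fit(np.asarray(X), np.asarray(y))
    return clf


def predict_next_active(clf, trajectory, threshold=0.0):
    """Slide over an incoming trajectory and flag it as a likely
    next-active-object if any segment scores above the threshold."""
    scores = [clf.decision_function(featurize(seg)[None])[0]
              for seg in segments(trajectory)]
    return any(s > threshold for s in scores)
```

In this sketch, a trajectory is simply a (T, 2) array of per-frame object positions; in practice the paper's features and decision rule may differ, but the structure, per-segment scoring inside a temporal sliding window, matches the approach described in the abstract.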