Multimodal Semantics Extraction from User-Generated Videos
Author: | Kostadin Dabov, Igor Danilo Diego Curcio, Mikko J. Roininen, Sujeet Shyamsundar Mate, Francesco Cricri, Moncef Gabbouj |
Language: | English |
Year of publication: | 2012 |
Subject: | Video production, computer vision, semantics, content creation, artificial intelligence, computer science, ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, lcsh:QA75.5-76.95 (Electronic computers. Computer science) |
Source: | Advances in Multimedia, Vol. 2012 (2012) |
ISSN: | 1687-5680 |
DOI: | 10.1155/2012/292064 |
Description: | User-generated video content has grown so rapidly that it now outpaces professional content creation. In this work we develop methods that analyze contextual information from multiple user-generated videos in order to obtain semantic information about the public happenings (e.g., sport and live music events) recorded in these videos. A key contribution of this work is the joint utilization of different data modalities, including those captured by auxiliary sensors during each user's video recording. In particular, we analyze GPS data, magnetometer data, accelerometer data, and video and audio content. We use these modalities to infer information about the recorded event: its layout (e.g., stadium), its genre, whether the scene is indoor or outdoor, and the main area of interest. Furthermore, we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users who fall within the field of view of other cameras recording at the same public happening (a geometric sketch of this idea follows the record below). We show that the proposed multimodal analysis methods perform well on various recordings obtained at real sport events and live music performances. |
Database: | OpenAIRE |
External link: |
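The last contribution mentioned in the abstract, detecting camera users who fall within the field of view of other cameras, lends itself to a simple geometric illustration. The sketch below is not the authors' algorithm; it only assumes that each recording device reports a GPS position and a magnetometer-derived compass heading, and it models a camera's horizontal field of view as a fixed angular sector. The `Camera` class, the `fov` width, and the example coordinates are all hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class Camera:
    lat: float         # GPS latitude in degrees
    lon: float         # GPS longitude in degrees
    heading: float     # magnetometer-derived compass bearing in degrees (0 = north)
    fov: float = 60.0  # assumed horizontal field-of-view width in degrees

def bearing_deg(a: Camera, b: Camera) -> float:
    """Initial great-circle bearing from camera a to camera b, in degrees from north."""
    phi1, phi2 = math.radians(a.lat), math.radians(b.lat)
    dlon = math.radians(b.lon - a.lon)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def sees(a: Camera, b: Camera) -> bool:
    """True if camera b's position lies within camera a's horizontal field of view.

    The bearing difference is wrapped into [-180, 180) before comparing it
    against half the field-of-view width.
    """
    diff = (bearing_deg(a, b) - a.heading + 180.0) % 360.0 - 180.0
    return abs(diff) <= a.fov / 2.0

# Hypothetical example: two recorders at the same event; does cam1 see cam2?
cam1 = Camera(lat=61.4978, lon=23.7610, heading=45.0)
cam2 = Camera(lat=61.4981, lon=23.7618, heading=270.0)
print(sees(cam1, cam2))  # True: cam2 lies roughly north-east of cam1, inside its sector
```

In practice such a test would need to tolerate GPS and compass noise (e.g., by widening the sector or requiring agreement over several consecutive samples), and visual confirmation from the video content itself would be needed to handle occlusion; the paper's multimodal approach combines exactly these kinds of sensor and content cues.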