Deep Cross-Modal Correlation Learning for Audio and Lyrics in Music Retrieval
Author: Suhua Tang, Yi Yu, Francisco Raposo, Lei Chen
Year of publication: 2019
Subject: FOS: Computer and information sciences; Sound (cs.SD); Computer Networks and Communications; Computer science; Speech recognition; Feature extraction; 02 engineering and technology; Music knowledge discovery; Convolutional neural network; Computer Science - Sound; Computer Science - Information Retrieval; Audio and Speech Processing (eess.AS); ComputerApplications_MISCELLANEOUS; FOS: Electrical engineering, electronic engineering, information engineering; 0202 electrical engineering, electronic engineering, information engineering; Feature (machine learning); Deep cross-modal models; Audio signal; Modality (human–computer interaction); Cross-modal music retrieval; Correlation learning between audio and lyrics; 020206 networking & telecommunications; Lyrics; Recurrent neural network; Hardware and Architecture; Convolutional neural networks; 020201 artificial intelligence & image processing; Joint (audio engineering); Information Retrieval (cs.IR); Electrical Engineering and Systems Science - Audio and Speech Processing
Source: ACM Transactions on Multimedia Computing, Communications, and Applications, 15:1-16
ISSN: 1551-6865, 1551-6857
DOI: 10.1145/3281746
Description: Deep cross-modal learning has demonstrated excellent performance in cross-modal multimedia retrieval, where the aim is to learn joint representations across different data modalities. However, little research has focused on cross-modal correlation learning that takes the temporal structures of different data modalities, such as audio and lyrics, into account. Motivated by the inherently temporal structure of music, we set out to learn deep sequential correlations between audio and lyrics. In this work, we propose a deep cross-modal correlation learning architecture built on two-branch deep neural networks, one for the audio modality and one for the text modality (lyrics). Data from the two modalities are projected into a shared canonical space, where intermodal canonical correlation analysis is used as the objective function to measure the similarity of their temporal structures. This is the first study to use deep architectures for learning the temporal correlation between audio and lyrics. Lyrics are represented by a pretrained Doc2Vec model followed by fully connected layers. The audio branch makes two significant contributions: (i) we propose an end-to-end network that learns the cross-modal correlation between audio and lyrics, in which feature extraction and correlation learning are performed simultaneously and the joint representation is learned with temporal structures taken into account; and (ii) for feature extraction, we represent an audio signal as a short sequence of local summaries (VGG16 features) and apply a recurrent neural network to compute a compact feature that better captures the temporal structure of music audio. Experimental results on retrieving lyrics from audio and retrieving audio from lyrics verify the effectiveness of the proposed deep correlation learning architecture in cross-modal music retrieval. (A minimal sketch of the described architecture is given after this record.)
Database: OpenAIRE
External link:
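As a concrete illustration of the architecture outlined in the description, below is a minimal, hedged sketch in PyTorch: a lyrics branch that applies fully connected layers to precomputed Doc2Vec vectors, an audio branch that runs a recurrent network over a short sequence of VGG16 feature summaries, and a CCA-style correlation objective computed between the two projections. All layer sizes, the choice of a GRU as the recurrent unit, the regularization constant, and the simplified total-correlation loss are assumptions made for illustration, not the authors' exact configuration.

```python
# Hypothetical minimal sketch (PyTorch) of a two-branch audio/lyrics network
# trained with a CCA-style correlation objective. Layer sizes, the GRU choice,
# and the simplified loss are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn


class LyricsBranch(nn.Module):
    """Fully connected layers on top of precomputed Doc2Vec lyric vectors."""
    def __init__(self, doc2vec_dim=300, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(doc2vec_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, d2v):               # d2v: (batch, doc2vec_dim)
        return self.net(d2v)              # (batch, out_dim)


class AudioBranch(nn.Module):
    """Recurrent network over a short sequence of per-segment VGG16 summaries."""
    def __init__(self, vgg_dim=4096, hidden=256, out_dim=32):
        super().__init__()
        self.rnn = nn.GRU(vgg_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, out_dim)

    def forward(self, vgg_seq):           # vgg_seq: (batch, seq_len, vgg_dim)
        _, h = self.rnn(vgg_seq)          # final hidden state: (1, batch, hidden)
        return self.fc(h.squeeze(0))      # compact audio feature: (batch, out_dim)


def cca_correlation_loss(x, y, eps=1e-4):
    """Negative total canonical correlation between two views (deep-CCA style).

    x, y: (batch, dim) projections; the batch size should comfortably exceed dim.
    """
    n, d = x.size(0), x.size(1)
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    sxx = x.t() @ x / (n - 1) + eps * torch.eye(d, device=x.device)
    syy = y.t() @ y / (n - 1) + eps * torch.eye(d, device=y.device)
    sxy = x.t() @ y / (n - 1)

    def inv_sqrt(m):                      # inverse matrix square root via eigendecomposition
        w, v = torch.linalg.eigh(m)
        return v @ torch.diag(w.clamp_min(eps).rsqrt()) @ v.t()

    t = inv_sqrt(sxx) @ sxy @ inv_sqrt(syy)
    return -torch.linalg.svdvals(t).sum()  # maximize the sum of canonical correlations


# Usage: one training step on random stand-in data.
audio_net, lyrics_net = AudioBranch(), LyricsBranch()
params = list(audio_net.parameters()) + list(lyrics_net.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
vgg_seq = torch.randn(64, 10, 4096)       # 64 tracks, 10 VGG16 feature summaries each
d2v = torch.randn(64, 300)                # matching Doc2Vec lyric vectors
loss = cca_correlation_loss(audio_net(vgg_seq), lyrics_net(d2v))
opt.zero_grad(); loss.backward(); opt.step()
```

Maximizing the sum of singular values of the whitened cross-covariance matrix is the standard deep-CCA surrogate for the canonical correlation objective; the published system may differ in how this objective is regularized, batched, and optimized.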