Deep learning for musical scenario inference and prediction: Application to structured co-improvisation
Author: Bazin, Théis
Contributors: Représentations musicales (RepMus), Sciences et Technologies de la Musique et du Son (STMS), Institut de Recherche et Coordination Acoustique/Musique (IRCAM)-Université Pierre et Marie Curie - Paris 6 (UPMC)-Centre National de la Recherche Scientifique (CNRS), École normale supérieure - Cachan (ENS Cachan), Ircam UMR STMS 9912, ANR-14-CE24-0002, DYCI2, Dynamiques créatives de l'interaction improvisée (2014)
Language: English
Year of publication: 2016
Subject: machine learning; recurrent networks; style modeling; co-improvisation; musical scenario inference; deep learning; neural networks; [SHS.MUSIQ] Humanities and Social Sciences/Musicology and performing arts; [INFO.INFO-SD] Computer Science [cs]/Sound [cs.SD]; [INFO.INFO-DS] Computer Science [cs]/Data Structures and Algorithms [cs.DS]; [INFO.INFO-OH] Computer Science [cs]/Other [cs.OH]; [INFO.INFO-MM] Computer Science [cs]/Multimedia [cs.MM]; [INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI]
Source: [Internship report] Ircam UMR STMS 9912, 2016
Description: The field of musical scenario inference aims to develop systems and algorithms that automatically extract abstract temporal scenarios from music. We call a scenario any underlying symbolic sequence that constitutes a higher-level abstraction of an original input sequence. Such an underlying sequence implicitly encodes the temporal relations between events in a musical piece as an ordered series of symbols. Musical works exhibit temporal dependencies at multiple time scales, from local melodic events to long-term harmonic progressions. Several systems have been introduced to capture short- or long-term dependencies between musical events; nonetheless, existing systems fail to account for the interactions between these various time scales. In this research project, we propose a method to tackle this issue and infer abstract scenarios using deep recurrent neural networks. We introduce a system that extracts an abstract sequence of symbols from an input musical sequence and predicts probable continuations of this sequence. A theoretical application to the co-improvisation problem is introduced. Co-improvisation engines seek to generate new sequences resembling an example input sequence. A crucial aspect of such co-improvisation systems is the ability to introduce anticipations, so as to generate transitions between different parts; this requires knowledge of a scenario underlying the generation. Existing systems that offer prediction capacities rely on a pre-defined abstract scenario. The architecture we propose would improve on this by replacing the pre-defined scenario with one inferred automatically in real time by our scenario inference and prediction tool, incorporating dynamically refined short-term predictions of the future. Through dynamic adversarial training, the system can furthermore improve the accuracy of its predictions in real time (a minimal illustrative sketch follows this record).
Database: OpenAIRE
External link:
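Below is a minimal, hypothetical sketch of the kind of architecture the abstract describes: a recurrent network that maps a sequence of integer-coded musical events to a sequence of abstract scenario symbols, then rolls forward to predict likely continuations. It is not the report's implementation; the class name `ScenarioRNN`, the event and symbol vocabularies, the dimensions, and the greedy continuation loop are all illustrative assumptions, and the adversarial-training component is omitted.

```python
# Illustrative sketch only: maps musical events to abstract scenario symbols
# and predicts a short continuation. All names and sizes are assumptions.
import torch
import torch.nn as nn

class ScenarioRNN(nn.Module):
    def __init__(self, n_events, n_symbols, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_events, embed_dim)      # musical events -> vectors
        self.rnn = nn.LSTM(embed_dim, hidden_dim, num_layers=2, batch_first=True)
        self.to_symbol = nn.Linear(hidden_dim, n_symbols)   # hidden state -> scenario symbol logits

    def forward(self, events, state=None):
        # events: (batch, time) integer-coded musical events
        h, state = self.rnn(self.embed(events), state)
        return self.to_symbol(h), state                     # per-step symbol logits + recurrent state

    @torch.no_grad()
    def predict_continuation(self, events, steps=8):
        # Infer the scenario for the observed events, then roll the network
        # forward on its own most likely symbols as a crude continuation.
        logits, state = self.forward(events)
        scenario = logits.argmax(dim=-1)                    # inferred scenario so far
        last = events[:, -1:]
        future = []
        for _ in range(steps):
            logits, state = self.forward(last, state)
            nxt = logits.argmax(dim=-1)
            future.append(nxt)
            # Toy choice: reuse the predicted symbol index as the next input event.
            last = nxt % self.embed.num_embeddings
        return scenario, torch.cat(future, dim=1)

# Toy usage: 100 event types, 24 scenario symbols (e.g. chord labels).
model = ScenarioRNN(n_events=100, n_symbols=24)
events = torch.randint(0, 100, (1, 16))
scenario, continuation = model.predict_continuation(events)
print(scenario.shape, continuation.shape)  # (1, 16) inferred scenario, (1, 8) predicted continuation
```

In an actual system along the lines of the abstract, the greedy roll-out would be replaced by predictions refined in real time as new events arrive, with an adversarial objective used to sharpen those predictions during performance.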