A Machine Learning Approach to Discover Rules for Expressive Performance Actions in Jazz Guitar Music
Author: | Rafael Ramirez, Sergio Giraldo |
---|---|
Year of publication: | 2016 |
Subject: | expressive music performance; jazz guitar music; ornamentation; machine learning; knowledge discovery methods |
Source: | Frontiers in Psychology, Vol 7 (2016); Recercat. Dipósit de la Recerca de Catalunya |
Description: | Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data-driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the musical features extracted contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found. Differences may be due to the fact that most previously studied performance data has consisted of classical music recordings. Finally, the rules' performer specificity/generality is assessed by applying the induced rules to performances of the same pieces performed by two other professional jazz guitar players. Results show a consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator of the generality of the ornamentation rules. This work has been partly sponsored by the Spanish TIN project TIMUL (TIN2013-48152-C2-2-R), and the European Union Horizon 2020 research and innovation programme under grant agreement No. 688269 (TELMI project). An illustrative rule-induction sketch is given after this record. |
Database: | OpenAIRE |
External link: |
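
As a loose illustration of the rule-induction workflow summarized in the description, the sketch below trains a small decision-tree classifier to predict an ornamentation label from note-level descriptors (duration, pitch, metrical strength, phrase position) and compares its cross-validated accuracy against a majority-class baseline. The synthetic data, feature set, and model choice are assumptions for demonstration only; they are not the authors' implementation or dataset.

```python
# Illustrative sketch only: induce an interpretable rule model that predicts
# whether a score note is ornamented from simple note-level descriptors.
# Data and feature names are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.dummy import DummyClassifier

rng = np.random.default_rng(0)
n_notes = 500

# Hypothetical note-level features: duration (beats), MIDI pitch,
# metrical strength (0-1), and relative position within the phrase (0-1).
X = np.column_stack([
    rng.choice([0.25, 0.5, 1.0, 2.0], n_notes),   # note duration
    rng.integers(48, 84, n_notes),                # pitch
    rng.choice([0.25, 0.5, 1.0], n_notes),        # metrical strength
    rng.random(n_notes),                          # phrase position
])
# Toy target: pretend long notes on strong beats tend to be ornamented.
y = ((X[:, 0] >= 1.0) & (X[:, 2] >= 0.5)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
baseline = DummyClassifier(strategy="most_frequent")

print("tree accuracy:    ", cross_val_score(tree, X, y, cv=5).mean())
print("baseline accuracy:", cross_val_score(baseline, X, y, cv=5).mean())

# The fitted tree can be read back as human-interpretable performance rules.
tree.fit(X, y)
print(export_text(tree, feature_names=[
    "duration", "pitch", "metrical_strength", "phrase_pos"]))
```

Printing the fitted tree with `export_text` mirrors, in spirit, the step of extracting human-readable expressive performance rules and checking that model accuracy exceeds the baseline before interpreting them.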