Description: |
Recent advances in automatic music transcription have facilitated the creation of large databases of symbolic transcriptions of improvised music forms, including jazz, for which traditional notated scores are not normally available. In conjunction with music source separation models that enable audio to be “demixed” into separate signals for multiple instrument classes, these algorithms can also be applied to generate annotations for every musician in a performance. This enables the analysis of performer-level and ensemble-level features that have previously been difficult to explore. To this end, we introduce the Jazz Trio Database (JTD), a dataset of 44.5 h of jazz piano solos accompanied by bass and drums, with automatically generated annotations for each performer. These annotations consist of onset, beat, and downbeat timestamps, alongside MIDI transcriptions for the piano soloist. Suitable recordings, broadly representative of the “straight-ahead” jazz style, were identified by scraping user-based listening and discographic data; source separation models were applied to isolate audio for each performer in the trio; and annotations were generated by applying appropriate algorithms to both the separated and the mixed audio sources. Onset annotations generated by the pipeline achieved a mean F-measure of 0.94 when compared with ground-truth annotations. We conduct several analyses of JTD, including analyses of swing and of inter-performer synchronization. We anticipate that JTD will be useful in a variety of music information retrieval tasks, including artist identification and expressive performance modeling. We have made JTD, including the annotations and associated source code, available at https://github.com/HuwCheston/Jazz-Trio-Database.
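
  For illustration, an onset F-measure like the 0.94 reported above can be computed with the widely used mir_eval library. The snippet below is a minimal sketch, not the authors' exact evaluation code: the input file names are hypothetical, and mir_eval's default 50 ms matching window is assumed.

  ```python
  import numpy as np
  import mir_eval

  # Ground-truth and pipeline-estimated onset times, in seconds,
  # one onset per line (file names are hypothetical).
  reference_onsets = np.loadtxt("ground_truth_onsets.txt")
  estimated_onsets = np.loadtxt("estimated_onsets.txt")

  # mir_eval matches each estimated onset to a reference onset within a
  # tolerance window (50 ms by default) and returns F-measure,
  # precision, and recall for that matching.
  f_measure, precision, recall = mir_eval.onset.f_measure(
      reference_onsets, estimated_onsets, window=0.05
  )
  print(f"F-measure: {f_measure:.2f} "
        f"(precision {precision:.2f}, recall {recall:.2f})")
  ```

  Averaging this F-measure over a set of annotated excerpts would yield a mean score comparable in form to the dataset-level figure quoted in the description.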