Showing 1 - 10 of 233
for search: '"Chacón Carlos"'
Automatic music transcription (AMT) for musical performances is a long-standing problem in the field of Music Information Retrieval (MIR). Visual piano transcription (VPT) is a multimodal subproblem of AMT which focuses on extracting a symbolic representation …
External link:
http://arxiv.org/abs/2411.09037
Author:
Zhang, Huan; Chowdhury, Shreyan; Cancino-Chacón, Carlos Eduardo; Liang, Jinhua; Dixon, Simon; Widmer, Gerhard
In the pursuit of developing expressive music performance models using artificial intelligence, this paper introduces DExter, a new approach leveraging diffusion probabilistic models to render Western classical piano performances. In this approach, …
External link:
http://arxiv.org/abs/2406.14850
Automatic piano transcription models are typically evaluated using simple frame- or note-wise information retrieval (IR) metrics. Such benchmark metrics do not provide insights into the transcription quality of specific musical aspects such as articulation …
External link:
http://arxiv.org/abs/2406.08454
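As context for the frame- and note-wise IR metrics mentioned above, the sketch below shows how such scores are commonly computed with the mir_eval library. The note data is made up for illustration; this is not the paper's benchmark, and the paper's point is precisely that these scores say little about aspects like articulation.

    # Illustrative only: standard note-wise transcription metrics via mir_eval.
    # Intervals are (onset, offset) in seconds, pitches are in Hz.
    import numpy as np
    import mir_eval

    ref_intervals = np.array([[0.0, 0.5], [0.5, 1.0], [1.0, 1.5]])
    ref_pitches = np.array([440.0, 493.88, 523.25])
    est_intervals = np.array([[0.02, 0.48], [0.5, 1.1]])
    est_pitches = np.array([440.0, 493.88])

    precision, recall, f1, avg_overlap = mir_eval.transcription.precision_recall_f1_overlap(
        ref_intervals, ref_pitches, est_intervals, est_pitches, onset_tolerance=0.05)
    print(precision, recall, f1)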
Published in:
Proceedings of the Forum for Information Retrieval Evaluation, FIRE, 2023, Panjim, India
Semantic embeddings play a crucial role in natural language-based information retrieval. Embedding models represent words and contexts as vectors whose spatial configuration is derived from the distribution of words in large text corpora. While such …
External link:
http://arxiv.org/abs/2401.02979
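For readers unfamiliar with embedding-based retrieval, the generic sketch below ranks candidate texts by cosine similarity of their embedding vectors; the random vectors stand in for the output of an embedding model and are unrelated to the models studied in the paper.

    # Generic sketch: cosine-similarity ranking over embedding vectors.
    import numpy as np

    rng = np.random.default_rng(0)
    query_vec = rng.normal(size=300)       # placeholder query embedding
    doc_vecs = rng.normal(size=(5, 300))   # placeholder document embeddings

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = np.array([cosine(query_vec, d) for d in doc_vecs])
    ranking = np.argsort(scores)[::-1]     # most similar documents first
    print(ranking, scores[ranking])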
Author:
Peter, Silvan David; Cancino-Chacón, Carlos Eduardo; Karystinaios, Emmanouil; Widmer, Gerhard
Published in:
10th International Conference on Digital Libraries for Musicology, November 10, 2023, Milan, Italy
Generative models of expressive piano performance are usually assessed by comparing their predictions to a reference human performance. A generative algorithm is taken to be better than competing ones if it produces performances that are closer to a …
External link:
http://arxiv.org/abs/2401.00471
Author:
Zhang, Huan; Karystinaios, Emmanouil; Dixon, Simon; Widmer, Gerhard; Cancino-Chacón, Carlos Eduardo
Published in:
Proceedings of the 24th International Society for Music Information Retrieval Conference (ISMIR 2023), Milan, Italy
Music Information Retrieval (MIR) has seen a recent surge in deep learning-based approaches, which often involve encoding symbolic music (i.e., music represented in terms of discrete note events) in an image-like or language-like fashion. However, symbolic …
External link:
http://arxiv.org/abs/2309.02567
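One common image-like encoding of note events is a binary piano roll. The sketch below builds such a matrix from a handful of made-up notes; the time resolution and pitch range are illustrative assumptions, not the encoding proposed or critiqued in the paper.

    # Illustrative sketch: note events -> binary piano-roll matrix (pitch x time).
    import numpy as np

    notes = [(0.0, 0.5, 60), (0.5, 0.5, 64), (1.0, 1.0, 67)]  # (onset s, duration s, MIDI pitch)
    fps = 20                                                   # frames per second
    n_frames = int(max(on + dur for on, dur, _ in notes) * fps)
    piano_roll = np.zeros((128, n_frames), dtype=np.uint8)

    for onset, duration, pitch in notes:
        piano_roll[pitch, int(onset * fps):int((onset + duration) * fps)] = 1

    print(piano_roll.shape)  # (128, 40)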
Author:
Cancino-Chacón, Carlos; Peter, Silvan; Hu, Patricia; Karystinaios, Emmanouil; Henkel, Florian; Foscarin, Francesco; Varga, Nimrod; Widmer, Gerhard
This paper introduces the ACCompanion, an expressive accompaniment system. Similarly to a musician who accompanies a soloist playing a given musical piece, our system can produce a human-like rendition of the accompaniment part that follows the soloist …
External link:
http://arxiv.org/abs/2304.12939
Author:
Foscarin, Francesco; Karystinaios, Emmanouil; Peter, Silvan David; Cancino-Chacón, Carlos; Grachten, Maarten; Widmer, Gerhard
Published in:
Proceedings of the Music Encoding Conference (MEC), 2022, Halifax, Canada
This paper presents the specifications of match: a file format that extends a MIDI human performance with note-, beat-, and downbeat-level alignments to a corresponding musical score. This enables advanced analyses of the performance that are relevant …
External link:
http://arxiv.org/abs/2206.01104
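Match files can be read with the partitura package described in the next entry. The sketch below assumes a recent partitura API (load_match returning the performance and a note-level alignment) and uses a placeholder file name; signatures vary between versions, so consult the current documentation.

    # Hedged sketch: loading a match file with partitura (API may differ by version).
    import partitura as pt

    performance, alignment = pt.load_match("example.match")  # placeholder path

    # The alignment links performed notes to score notes (plus insertions/deletions),
    # which is what enables note-, beat-, and downbeat-level analyses.
    for entry in alignment[:5]:
        print(entry)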
Author:
Cancino-Chacón, Carlos; Peter, Silvan David; Karystinaios, Emmanouil; Foscarin, Francesco; Grachten, Maarten; Widmer, Gerhard
Published in:
Proceedings of the Music Encoding Conference (MEC), 2022, Halifax, Canada
Partitura is a lightweight Python package for handling symbolic musical information. It provides easy access to features commonly used in music information retrieval tasks, like note arrays (lists of timed pitched events) and 2D piano roll matrices …
External link:
http://arxiv.org/abs/2206.01071
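A minimal sketch of typical partitura usage is shown below, assuming the 1.x API (load_score, note_array, compute_pianoroll, and the bundled EXAMPLE_MUSICXML file); treat the exact names and note-array fields as assumptions and check the package documentation.

    # Hedged sketch: score -> note array -> piano roll with partitura (1.x API assumed).
    import partitura as pt

    score = pt.load_score(pt.EXAMPLE_MUSICXML)      # bundled example score
    note_array = score.note_array()                 # structured numpy array of notes
    print(note_array[["onset_beat", "duration_beat", "pitch"]][:5])

    piano_roll = pt.utils.compute_pianoroll(note_array)  # sparse pitch x time matrix
    print(piano_roll.shape)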
In this chapter, we focus on two main categories of visual interaction: body gestures and gaze direction. Our focus on body gestures is motivated by research showing that gesture patterns often change during joint action tasks to become more predictable …
External link:
http://arxiv.org/abs/2201.13297