Showing 1 - 10 of 17 for search: '"Stylianos Ioannis Mimilakis"'
Published in:
Electronics, Vol 10, Iss 7, p 851 (2021)
Electronics, Volume 10, Issue 7
In this work, we propose considering the information from a polyphony for multi-pitch estimation (MPE) in piano music recordings. To that aim, we propose a method for local polyphony estimation (LPE), which is based on convolutional neural networks (CNNs) …
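The abstract describes a CNN-based classifier for local polyphony estimation. As a rough illustration of that idea (not the authors' actual architecture; the layer sizes and the maximum polyphony class count are assumptions), a patch-wise polyphony classifier in PyTorch might look like this:

import torch
import torch.nn as nn

class LocalPolyphonyCNN(nn.Module):
    """Hypothetical CNN that classifies the local polyphony degree
    (number of simultaneous notes) of a short spectrogram patch."""
    def __init__(self, max_polyphony=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, max_polyphony + 1),  # classes 0..max_polyphony
        )
    def forward(self, patch):  # patch: (batch, 1, freq, time)
        return self.classifier(self.features(patch))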
Published in:
ICASSP
Audio production is a difficult process for many people, and properly manipulating sound to achieve a certain effect is non-trivial. In this paper, we present a method that facilitates this process by inferring appropriate audio effect parameters …
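Inferring effect parameters from audio can be framed as regression from signal features to normalized parameter values. A minimal sketch of that framing, with hypothetical feature and parameter dimensions (the paper's actual method is not reproduced here):

import torch
import torch.nn as nn

class EffectParamRegressor(nn.Module):
    """Hypothetical regressor: maps features of a raw recording to
    normalized parameters of an audio effect (e.g. EQ band gains)."""
    def __init__(self, n_features=128, n_params=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),
            nn.Linear(256, n_params), nn.Sigmoid(),  # params scaled to [0, 1]
        )
    def forward(self, feats):  # feats: (batch, n_features)
        return self.net(feats)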
Published in:
EUSIPCO
In this work, we present a method for learning interpretable music signal representations directly from waveform signals. Our method can be trained using unsupervised objectives and relies on the denoising auto-encoder model that uses a simple sinusoidal …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d9a86b7f724f8f4f0de8ced4ad7ca5ce
http://arxiv.org/abs/2003.01567
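The idea of a denoising auto-encoder whose decoder is a sinusoidal model can be sketched in a few lines: the encoder predicts per-frame amplitudes and frequencies, and the decoder synthesizes the waveform as their sum. This is a toy reading of the abstract, not the paper's model; the frame size and number of sinusoids are assumptions.

import torch
import torch.nn as nn

class SinusoidalDAE(nn.Module):
    """Toy denoising auto-encoder with a sinusoidal decoder: the encoder
    predicts per-frame amplitudes and frequencies, and the decoder
    synthesizes each frame as a sum of sinusoids (illustrative only)."""
    def __init__(self, n_sines=32, frame=512):
        super().__init__()
        self.frame = frame
        self.enc = nn.Conv1d(1, 2 * n_sines, frame, stride=frame)
    def forward(self, x):             # x: (batch, 1, samples)
        z = self.enc(x)               # (batch, 2*n_sines, frames)
        amp, freq = z.chunk(2, dim=1)
        amp = torch.relu(amp)
        freq = torch.sigmoid(freq) * 0.5       # cycles per sample, < Nyquist
        t = torch.arange(self.frame, device=x.device)
        phase = 2 * torch.pi * freq.unsqueeze(-1) * t       # (b, n, F, frame)
        y = (amp.unsqueeze(-1) * torch.sin(phase)).sum(1)   # (b, F, frame)
        return y.reshape(x.shape[0], 1, -1)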
Author:
Christof Weiss, Meinard Müller, Jakob Abeßer, Vlora Arifi-Müller, Stylianos Ioannis Mimilakis
Published in:
Machine Learning and Knowledge Discovery in Databases ISBN: 9783030438869
PKDD/ECML Workshops (2)
In this paper, we approach the problem of detecting segments of singing voice activity in opera recordings. We consider three state-of-the-art methods for singing voice detection based on supervised deep learning. We train and test these models on a …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::4ce4b44425fbd6eb346e640f710bb233
https://doi.org/10.1007/978-3-030-43887-6_35
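Frame-wise singing voice detection is typically posed as binary classification over spectrogram frames. A generic sketch of such a detector (the three methods compared in the paper are not reproduced here; the mel resolution and layer sizes are assumptions):

import torch
import torch.nn as nn

class VoiceActivityNet(nn.Module):
    """Hypothetical frame-wise singing voice detector: CNN front end over
    a mel spectrogram, sigmoid output per frame (voice present or not)."""
    def __init__(self, n_mels=80):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),            # pool frequency, keep time
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        self.out = nn.Linear(32 * (n_mels // 4), 1)
    def forward(self, mel):                   # mel: (batch, 1, n_mels, time)
        h = self.conv(mel)                    # (batch, 32, n_mels//4, time)
        h = h.permute(0, 3, 1, 2).flatten(2)  # (batch, time, 32*n_mels//4)
        return torch.sigmoid(self.out(h))     # per-frame voice probability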
Author:
Tuomas Virtanen, Shayan Gharib, Konstantinos Drossos, Stylianos Ioannis Mimilakis, Yanxiong Li
Published in:
IJCNN
State-of-the-art sound event detection (SED) methods usually employ a series of convolutional neural networks (CNNs) to extract useful features from the input audio signal, and then recurrent neural networks (RNNs) to model longer temporal context in …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::1039af525877d143f4d8e25862606900
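The CNN-plus-RNN pipeline described in the abstract is commonly realized as a CRNN: convolutional blocks extract local time-frequency features and a recurrent layer models the longer temporal context. A minimal sketch, with assumed input resolution and class count (not the exact model evaluated in the paper):

import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Minimal CRNN for sound event detection: CNN blocks extract local
    features, a GRU models longer temporal context, and a sigmoid layer
    gives per-frame, per-class event activity."""
    def __init__(self, n_mels=64, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((4, 1)),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((4, 1)),
        )
        self.rnn = nn.GRU(64 * (n_mels // 16), 128,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(256, n_classes)
    def forward(self, mel):                   # mel: (batch, 1, n_mels, time)
        h = self.cnn(mel)                     # (batch, 64, n_mels//16, time)
        h = h.permute(0, 3, 1, 2).flatten(2)  # (batch, time, features)
        h, _ = self.rnn(h)
        return torch.sigmoid(self.out(h))     # per-frame event activities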
The goal of this article is to investigate what singing voice separation approaches based on neural networks learn from the data. We examine the mapping functions of neural networks based on the denoising autoencoder (DAE) model that are conditioned …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b8de8b8fa19b861f7159df54180ee59f
http://arxiv.org/abs/1904.06157
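One simple way to inspect what a magnitude-domain separation network has learned is to express its input-output mapping as an element-wise mask. This probe illustrates the kind of analysis the article pursues; it is not the article's exact methodology:

def effective_mask(model, mix_mag, eps=1e-8):
    """Express a trained magnitude-domain model's input-output mapping as
    an element-wise mask: values near 1 pass energy, values near 0
    suppress it (an illustrative probe over spectrogram arrays)."""
    est = model(mix_mag)          # estimated source magnitude spectrogram
    return est / (mix_mag + eps)  # implied time-frequency mask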
Author:
Christian Kühn, Tobias Clauß, Marco Götze, Hanna Lukashevich, Jakob Abeßer, Stylianos Ioannis Mimilakis, Dominik Zapf, Stephanie Kühnlenz
Published in:
Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019).
Author:
Derry Fitzgerald, Gerald Schuller, Stylianos Ioannis Mimilakis, Konstantinos Drossos, Estefanía Cano
Published in:
ACSSC
In this study, we examine the effect of various objective functions used to optimize the recently proposed deep learning architecture for singing voice separation MaD - Masker and Denoiser. The parameters of the MaD architecture are optimized using a …
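Two objectives of the kind such a comparison typically covers are the squared error and the generalized Kullback-Leibler divergence on magnitude spectrograms; whether these exact losses were among those evaluated in the study is an assumption here:

import torch

def mse_loss(est, ref):
    """Mean squared error between estimated and reference magnitudes."""
    return torch.mean((est - ref) ** 2)

def gkl_loss(est, ref, eps=1e-7):
    """Generalized Kullback-Leibler divergence, a common alternative
    objective for non-negative magnitude estimates."""
    est, ref = est + eps, ref + eps
    return torch.mean(ref * torch.log(ref / est) - ref + est)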
Published in:
International Workshop on Acoustic Signal Enhancement
International Workshop on Acoustic Signal Enhancement, Sep 2018, Tokyo, Japan
IWAENC
2018 16th International Workshop on Acoustic Signal Enhancement (IWAENC)
Harmonic/percussive source separation (HPSS) consists in separating the pitched instruments from the percussive parts in a music mixture. In this paper, we propose to apply the recently introduced Masker-Denoiser with twin networks …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::dfc4b662cd810553dbef4e2e8141c807
https://hal.archives-ouvertes.fr/hal-01812225v2/document
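The Masker-Denoiser idea is a two-stage estimate: a recurrent masker predicts a time-frequency mask for the mixture magnitude, and a denoiser refines the masked result. A heavily simplified sketch (layer types and sizes are assumptions, and the twin-network regularization is omitted):

import torch
import torch.nn as nn

class MaskerDenoiser(nn.Module):
    """Schematic two-stage Masker-Denoiser: a recurrent masker predicts a
    time-frequency mask applied to the mixture magnitude, and a denoiser
    refines the masked estimate (a simplified sketch of the MaD idea)."""
    def __init__(self, n_bins=1025, hidden=256):
        super().__init__()
        self.masker = nn.GRU(n_bins, hidden,
                             batch_first=True, bidirectional=True)
        self.mask_out = nn.Linear(2 * hidden, n_bins)
        self.denoiser = nn.Sequential(
            nn.Linear(n_bins, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bins), nn.ReLU(),
        )
    def forward(self, mix_mag):                # (batch, time, n_bins)
        h, _ = self.masker(mix_mag)
        mask = torch.sigmoid(self.mask_out(h))
        masked = mask * mix_mag                # stage 1: masked estimate
        return self.denoiser(masked)           # stage 2: denoised estimate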
Published in:
INTERSPEECH
Tampere University
Interspeech 2018
Interspeech
Interspeech, Sep 2018, Hyderabad, India
State-of-the-art methods for monaural singing voice separation consist in estimating the magnitude spectrum of the voice in the short-term Fourier transform (STFT) domain by means of deep neural networks (DNNs). The resulting …
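The standard pipeline behind such methods estimates the voice magnitude with a DNN and reuses the mixture phase for synthesis. A minimal sketch of that common baseline in PyTorch (FFT sizes are assumptions; the paper's contribution beyond this baseline is not shown):

import torch

def separate_voice(model, mixture, n_fft=2048, hop=512):
    """Generic STFT-domain separation: estimate the voice magnitude with
    a DNN and reuse the mixture phase for inverse-STFT synthesis."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(mixture, n_fft, hop, window=window,
                      return_complex=True)
    mag, phase = spec.abs(), torch.angle(spec)
    voice_mag = model(mag)                        # DNN magnitude estimate
    voice_spec = voice_mag * torch.exp(1j * phase)  # mixture phase reused
    return torch.istft(voice_spec, n_fft, hop, window=window)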