Showing 1 - 10 of 11 results for search: '"Xavier Favory"'
Published in:
Proceedings of the XXth Conference of Open Innovations Association FRUCT, Vol 602, Iss 23, Pp 447-451 (2018)
Properly annotated multimedia content is crucial for supporting advances in many Information Retrieval applications. It enables, for instance, the development of automatic tools for the annotation of large and diverse multimedia collections. In the …
External link:
https://doaj.org/article/f6c8fa4f62104c49b2bf184c66e0153f
Most existing datasets for sound event recognition (SER) are relatively small and/or domain-specific, with the exception of AudioSet, based on over 2M tracks from YouTube videos and encompassing over 500 sound classes. However, AudioSet is not an open …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::146e9b2ab6d908432ec3f12a83082072
http://arxiv.org/abs/2010.00475
Published in:
Tampere University
Audio representation learning based on deep neural networks (DNNs) has emerged as an alternative approach to hand-crafted features. For achieving high performance, DNNs often need a large amount of annotated data, which can be difficult and costly to obtain …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::781283319cd4d6e57fae6c51e831da0b
http://arxiv.org/abs/2006.08386
Published in:
ICASSP
Self-supervised audio representation learning offers an attractive alternative for obtaining generic audio embeddings that can be employed in various downstream tasks. Published approaches that consider both audio and words/tags associated with …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::ae993df56280be4c4b939d15ae69be05
Published in:
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
ICASSP
Communication presented at: ICASSP 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, held online from 4 to 8 May 2020. We present a deep neural network-based methodology for synthesising percussive sounds with …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::c3b632fca229a4ed3af3c12f15075812
http://arxiv.org/abs/1911.11853
Author:
Daniel P. W. Ellis, Manoj Plakal, Xavier Favory, Xavier Serra, Frederic Font, Eduardo Fonseca
Published in:
ICASSP
Recercat. Dipósit de la Recerca de Catalunya
As sound event classification moves towards larger datasets, issues of label noise become inevitable. Web sites can supply large volumes of user-contributed audio and metadata, but inferring labels from this metadata introduces errors due to unreliable …
Published in:
CHI EA '16: ACM Extended Abstracts on Human Factors in Computing Systems.
CHI EA '16: ACM Extended Abstracts on Human Factors in Computing Systems., May 2016, San Jose, United States. pp.4, ⟨10.1145/2851581.2890246⟩
CHI Extended Abstracts
Trajectoires is a mobile application that lets composers draw trajectories of sound sources to remotely control any spatial audio renderer using the Open Sound Control protocol. Interviews and collaborations with contemporary …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::59aca3bf9676d73fe0ffca8cbda67295
https://hal.inria.fr/hal-01285852/document
Published in:
Actes de la 27ème conférence francophone sur l'Interaction Homme-Machine.
27ème conférence francophone sur l'Interaction Homme-Machine.
27ème conférence francophone sur l'Interaction Homme-Machine., Oct 2015, Toulouse, France. pp.a5, ⟨10.1145/2820619.2820624⟩
IHM
In this paper, we explore the potential of mobile devices for the control and composition of sound spatialization. We conducted interviews with four composers to understand their needs and guide the design of a new mobile …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::55fc78b4578d87426313f88b30e69099
https://hal.archives-ouvertes.fr/hal-01218595/document
Published in:
International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Apr 2015, Brisbane, Australia
ICASSP
The intuitive control of voice transformation (e.g., age/sex, emotions) is useful to extend the expressive repertoire of a voice. This paper explores the role of glottal source parameters for the control of voice transformation …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::555ff1470d1e96358d09cd5d35ea595f
https://hal.archives-ouvertes.fr/hal-01164562/file/index.pdf
Published in:
inSONIC2015, Aesthetics of Spatial Audio in Sound, Music and Sound Art
inSONIC2015, Aesthetics of Spatial Audio in Sound, Music and Sound Art, 2015, Karlsruhe, Germany
HAL
We present recent work carried out in the OpenMusic computer-aided composition environment for combining compositional processes with spatial audio rendering. We consider new modalities for manipulating sound spatialization …
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::64150998317f584325e3a4702aa0a4ec
https://hal.archives-ouvertes.fr/hal-01226263/file/inSonic2015-140.pdf