The SONICOM Project: artificial intelligence-driven immersive audio, from personalization to modeling
Author: Picinali, L.; Katz, B. F. G.; Geronazzo, M.; Majdak, P.; Reyes-Lecuona, A.; Vinciarelli, A.
Contributors: Commission of the European Communities; Imperial College London; Lutheries - Acoustique - Musique (IJLRDA-LAM), Institut Jean Le Rond d'Alembert (DALEMBERT), Sorbonne Université (SU), Centre National de la Recherche Scientifique (CNRS); Austrian Academy of Sciences (OeAW); Universidad de Málaga; University of Glasgow; European Project: 101017743, SONICOM
Year of publication: 2022
Source: IEEE Signal Processing Magazine, 2022, 39 (6), pp. 85-88. ⟨10.1109/MSP.2022.3182929⟩
ISSN: 1053-5888
DOI: 10.1109/MSP.2022.3182929
Description: Every individual perceives spatial audio differently, due in large part to the unique and complex shape of the ears and head. Therefore, high-quality, headphone-based spatial audio should be tailored to each listener in an effective and efficient manner. Artificial intelligence (AI) is a powerful tool that can be used to drive forward research in spatial audio personalization. The SONICOM project aims to employ a data-driven approach that links physiological characteristics of the ear to the individual acoustic filters which allow us to localize sound sources and perceive them as being located around us. A small amount of data acquired from users could enable personalized audio experiences, and AI can facilitate this by offering a new perspective on the problem. A Bayesian approach to computational neuroscience and binaural sound reproduction will be linked to create a metric for AI-based algorithms that predicts realistic spatial audio quality. Being able to consistently and repeatedly evaluate and quantify the improvements brought by technological advancements, as well as the impact these have on complex interactions in virtual environments, will be key for the development of new techniques and for unlocking new approaches to understanding the mechanisms of human spatial hearing and communication.
Database: OpenAIRE
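The description above outlines a data-driven mapping from physiological (anthropometric) characteristics of the ear and head to a listener's individual acoustic filters, i.e. head-related transfer functions (HRTFs). As a minimal illustrative sketch of that idea, the snippet below fits a small multi-output regressor from a hypothetical set of anthropometric features to per-frequency HRTF magnitudes. The feature set, model choice, and data are entirely synthetic and assumed here for illustration; they are not SONICOM's actual pipeline.

```python
# Hypothetical sketch: predicting a listener's HRTF magnitude response from
# anthropometric features. All feature choices, dimensions, and data below are
# invented for illustration; the record does not specify SONICOM's method.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

N_SUBJECTS = 200    # synthetic "measured" listeners standing in for an HRTF database
N_FEATURES = 10     # e.g. pinna height/width, concha depth, head width (assumed)
N_FREQ_BINS = 64    # magnitude response sampled at 64 frequencies

# Synthetic training data: anthropometric features X and HRTF magnitudes Y (in dB).
X = rng.normal(size=(N_SUBJECTS, N_FEATURES))
true_map = rng.normal(scale=0.5, size=(N_FEATURES, N_FREQ_BINS))
Y = X @ true_map + 0.1 * rng.normal(size=(N_SUBJECTS, N_FREQ_BINS))

# Standardise features, then fit a small network mapping anthropometry to
# per-frequency HRTF magnitude (one regression output per frequency bin).
scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(scaler.transform(X), Y)

# "Personalise" for a new listener from a few quick measurements.
new_listener = rng.normal(size=(1, N_FEATURES))
predicted_hrtf_db = model.predict(scaler.transform(new_listener))
print(predicted_hrtf_db.shape)  # (1, 64): predicted magnitude at each frequency bin
```

In practice the predicted filters would be evaluated perceptually, for example against the kind of Bayesian, perception-based quality metric the description mentions, rather than only by regression error.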