Speech Tracking in Complex Auditory Scenes with Differentiated In- and Out-Field-Of-View Processing in Hearing Aids

Authors: Adrian Mai, Maja Serman, Sebastian Best, Niels S. Jensen, Jurek Foellmer, Andreas Schroeer, Christine Welsch, Daniel J. Strauss, Farah I. Corona-Strauss
Publication year: 2022
Source: 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC).
DOI: 10.1109/embc48229.2022.9870826
Description: In naturalistic auditory scenes, relevant information is rarely concentrated at a single location; rather, it is unpredictably scattered both in and out of the field of view (in-/out-FOV). Although parsing a complex auditory scene is a fairly simple task for a healthy human auditory system, this uncertainty poses a major challenge for the development of effective hearing aid (HA) processing strategies. Whereas traditional omnidirectional microphones (OM) amplify the complete auditory scene without enhancing the signal-to-noise ratio (SNR) between in- and out-FOV streams, directional microphones (DM) may greatly increase SNR at the cost of preventing HA users from perceiving out-FOV information. The present study compares the conventional OM and DM HA settings to a split-processing (SP) scheme that differentiates between in- and out-FOV processing. We recorded electroencephalographic data from ten young, normal-hearing listeners who performed a cocktail-party paradigm with continuous auditory streams, and analyzed neural tracking of speech with a stimulus reconstruction (SR) approach. While all settings yielded significantly higher SR accuracies for attended in-FOV than for unattended out-FOV streams, there were distinct differences between settings: in-FOV SR performance was dominated by DM and SP, and out-FOV SR accuracies were significantly higher for SP than for OM and DM. Our results demonstrate the potential of an SP approach to combine the advantages of the traditional OM and DM settings without introducing significant compromises.
Database: OpenAIRE
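The stimulus-reconstruction (SR) analysis mentioned in the description is, in broad strokes, a backward model: a ridge-regression decoder maps time-lagged EEG channels onto the speech envelope, and SR accuracy is the correlation between the reconstructed and the actual envelope. The record does not give the paper's implementation details; the following is a minimal sketch in which the lag range, regularization strength, and synthetic "EEG" are illustrative assumptions, not the authors' pipeline.

```python
# Minimal stimulus-reconstruction sketch (backward model): decode the speech
# envelope from time-lagged EEG via closed-form ridge regression, and score
# SR accuracy as the Pearson correlation between reconstructed and actual
# envelopes. All parameters and the synthetic data are assumptions.
import numpy as np

def lagged_design(eeg, max_lag):
    """Stack EEG shifted 0..max_lag samples into the future, since the
    neural response trails the stimulus it encodes."""
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[:n_samples - lag, lag * n_channels:(lag + 1) * n_channels] = eeg[lag:]
    return X

def train_decoder(eeg, envelope, max_lag=16, lam=10.0):
    """Closed-form ridge regression: g = (X'X + lam*I)^-1 X'y."""
    X = lagged_design(eeg, max_lag)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

def sr_accuracy(eeg, envelope, decoder, max_lag=16):
    """Pearson r between the reconstructed and the actual envelope."""
    rec = lagged_design(eeg, max_lag) @ decoder
    rec, env = rec - rec.mean(), envelope - envelope.mean()
    return float(rec @ env / (np.linalg.norm(rec) * np.linalg.norm(env)))

# Synthetic demo: "EEG" channels are delayed, noisy copies of the envelope.
rng = np.random.default_rng(0)
env = rng.standard_normal(2000)
eeg = np.stack([np.roll(env, c + 1) for c in range(4)], axis=1)
eeg += 0.5 * rng.standard_normal(eeg.shape)

g = train_decoder(eeg[:1200], env[:1200])
attended = sr_accuracy(eeg[1200:], env[1200:], g)
unattended = sr_accuracy(eeg[1200:], rng.standard_normal(800), g)
```

The attended envelope (the one actually encoded in the synthetic EEG) reconstructs with high correlation, while an unrelated "unattended" envelope yields a correlation near zero, mirroring the attended/unattended SR-accuracy contrast reported in the abstract.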