gms | German Medical Science

25. Jahrestagung der Deutschen Gesellschaft für Audiologie

Deutsche Gesellschaft für Audiologie e. V.

01.03. - 03.03.2023, Köln

Assessing daily-life hearing device experience and individual noise reduction benefit in the lab

Meeting Abstract

  • Hendrik Kayser (presenting/speaker) - Hörzentrum Oldenburg gGmbH, Oldenburg, DE
  • Theresa Jansen - Hörzentrum Oldenburg gGmbH, Oldenburg, DE
  • Laura Hartog - Hörzentrum Oldenburg gGmbH, Oldenburg, DE
  • Dirk Oetting - Hörzentrum Oldenburg gGmbH, Oldenburg, DE
  • Volker Hohmann - Universität Oldenburg, Oldenburg, DE

Deutsche Gesellschaft für Audiologie e.V. 25. Jahrestagung der Deutschen Gesellschaft für Audiologie. Köln, 01.-03.03.2023. Düsseldorf: German Medical Science GMS Publishing House; 2023. Doc103

doi: 10.3205/23dga103, urn:nbn:de:0183-23dga1034

Published: March 1, 2023

© 2023 Kayser et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.


Text

The goal of hearing devices is to provide the user with a benefit in speech reception in everyday listening situations. Predicting this benefit from measurements carried out in a laboratory environment remains difficult: on the one hand, reproducible measurement setups are required; on the other hand, realistic, sufficiently complex listening situations must be provided that capture a hearing-aid user’s daily-life experience. A key to reproducible complex listening scenarios is the use of virtual acoustics. In the current study, a set of virtual acoustic scenes of varying complexity was used to investigate the effect of different signal-processing methods on speech recognition and listening effort in 20 hearing-impaired subjects. The acoustic scenes were presented via a 16-loudspeaker setup and rendered with the Toolbox for Acoustic Scene Creation and Rendering (TASCAR) [1]. All participants completed the measurements with their own hearing devices and with a hearing-device research platform, the Portable Hearing Laboratory (PHL) [2], which was fitted with individual earmolds and a gain prescription according to a loudness-based fitting rule. In addition to amplification and dynamic range compression, three signal-enhancement methods were implemented on the PHL using the open Master Hearing Aid (openMHA) software [3]: binaural coherence filtering, aiming at spectral signal enhancement; adaptive differential microphones, aiming at suppression of noise from the rear hemisphere; and binaural minimum variance distortionless response (MVDR) beamforming, aiming at amplification of signal components from the fixed frontal direction while suppressing noise from all other directions. To link the outcomes of the laboratory measurements to daily life, each participant completed a questionnaire on speech reception in quiet and noisy conditions, spatial sound reception, and loudness perception with their own hearing devices. We compared speech reception measured in the laboratory with the participants’ own hearing devices to the questionnaire results and found significant correlations, showing the relevance of the laboratory scenes for daily-life hearing-aid experience. The analysis of the hearing-aid benefit in speech reception with the PHL, at a moderate speech level of 65 dB in different background-noise configurations, showed large inter-individual differences and different effects of the signal-enhancement algorithms. These data indicate that the use of spatial signal enhancement in addition to amplification does not guarantee a further benefit in speech reception; in some cases, we even found a detrimental effect. Conversely, some participants did not benefit from amplification but did benefit from the signal-enhancement algorithms. This suggests an interplay between the amount of spatial information about the acoustic scene that remains after signal processing and a subject’s ability to exploit this information.
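To make the third enhancement method more concrete, the Python sketch below computes MVDR beamformer weights for a single frequency bin using the standard closed-form rule w = R^-1 d / (d^H R^-1 d). It is a minimal illustration under simplified free-field assumptions; the microphone geometry, the example noise covariance, and all variable names are assumptions made for this sketch and do not reflect the authors' openMHA implementation.

    # Illustrative sketch only -- not the authors' openMHA implementation.
    # MVDR weights for one frequency bin: w = R^-1 d / (d^H R^-1 d).
    import numpy as np

    def mvdr_weights(noise_cov, steering):
        """noise_cov: (M, M) Hermitian noise covariance R,
        steering: (M,) complex steering vector d toward the look direction."""
        r_inv_d = np.linalg.solve(noise_cov, steering)   # R^-1 d
        return r_inv_d / (steering.conj() @ r_inv_d)     # enforce distortionless response

    # Example: 4 microphones on the left-right axis, frontal (0 deg) look direction
    f = 1000.0                                    # frequency of the bin in Hz
    c = 343.0                                     # speed of sound in m/s
    mic_y = np.array([-0.06, -0.02, 0.02, 0.06])  # assumed mic positions in m
    azimuth = 0.0                                 # frontal target direction
    delays = mic_y * np.sin(azimuth) / c          # plane-wave delays; zero for frontal
    d = np.exp(-2j * np.pi * f * delays)          # steering vector

    R = np.eye(4) + 0.1 * np.ones((4, 4))         # diffuse-like noise covariance, illustrative
    w = mvdr_weights(R, d)
    print("w^H d =", np.vdot(w, d))               # ~1: target direction passes undistorted

The printed check confirms the distortionless constraint: the beamformer passes the frontal signal with unit gain while minimizing output power from the assumed noise field.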


References

1. Grimm G, Luberadzka J, Hohmann V. A toolbox for rendering virtual acoustic environments in the context of audiology. Acta Acustica united with Acustica. 2019;105(3):566–78. DOI: 10.3813/AAA.919337
2. Pavlovic C, Kassayan R, Prakash SR, Kayser H, Hohmann V, Atamaniuk A. A high-fidelity multi-channel portable platform for development of novel algorithms for assistive listening wearables. The Journal of the Acoustical Society of America. 2019;146(4):2878.
3. Kayser H, Herzke T, Maanen P, Zimmermann M, Grimm G, Hohmann V. Open community platform for hearing aid algorithm research: open Master Hearing Aid (openMHA). SoftwareX. 2022;17:100953. DOI: 10.1016/j.softx.2021.100953