Simultaneous multiple beam steering in hearing aids for optimal speech enhancement in multi-talker communication environments
Published: September 12, 2022
Everyday communication situations include scenes with multiple speakers talking from different directions, which can be challenging for hearing-impaired people. To improve speech intelligibility in such scenarios, hearing aid signal processing could include steerable beamforming based on direction-of-arrival (DOA) estimation 1. One way to take a user's listening intention into account is to amplify speakers with a steerable beamformer controlled by eye movements 2. Furthermore, multiple steerable beams, with one beam directed at each target talker, could be advantageous and reduce the detrimental effect of overly slow adaptation after fast turn-taking between target talkers 3.

In this work, simulations of multiple beams in hearing aids were used to simultaneously enhance target talker signals from different directions. Different simulation levels were applied, ranging from pure simulation in the virtual acoustic rendering system to real hearing aid algorithms. Additionally, head movements were taken into account to keep the beam directions constantly pointed towards target talkers at fixed locations, and the SNR benefit of compensating the head movements by adapting the beam direction was investigated. Hence, the following research questions were examined:
1. Can multiple-beam steering provide an SNR benefit larger than single-beam algorithms in multi-talker turn-taking communication situations?
2. How does the simulation level of multiple-beam steering interact with the SNR benefit?
3. To what extent can SNR benefit and audio quality be preserved in steerable beamforming when head movement is present and compensated?
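The head-movement compensation addressed in question 3 amounts to re-steering each beam so that it stays locked on a talker at a fixed world position while the head rotates. A minimal sketch of that geometric step is shown below; the function name, angle convention (degrees, 0° straight ahead, positive to the left), and example angles are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def compensated_beam_directions(talker_azimuths_deg, head_yaw_deg):
    """Beam steering angles in the head-related frame for talkers at
    fixed world azimuths, given the current head yaw (all in degrees)."""
    # Subtract the head yaw, then wrap the result to (-180, 180].
    relative = np.asarray(talker_azimuths_deg, dtype=float) - head_yaw_deg
    return (relative + 180.0) % 360.0 - 180.0

# Two talkers at -30 deg and +30 deg; head turned 20 deg to the left:
# the beams must be re-steered to -50 deg and +10 deg relative to the head.
print(compensated_beam_directions([-30.0, 30.0], 20.0))
```

In a real hearing aid the yaw input would come from a head tracker or inertial sensor, and the steering vectors of the beamformers would be updated every frame from these relative angles.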
The beamformer benefit was evaluated by estimating the intelligibility-weighted SNR (iSNR) improvement and by distortion measures. The acoustic scenes ranged from simple anechoic scenes with 2-4 target talkers in diffuse noise to scenes with multiple target and interfering talkers in a noisy reverberant environment. In the rendering system, head movements were simulated both by a parametric head rotation from left to right and by using measured real movements of subjects. Multiple instances of a binaural Minimum Variance Distortionless Response (MVDR) beamformer were combined to form multiple beams. An adaptive differential microphone (ADM) served as a reference condition for a prevalent hearing aid algorithm, and simulation of multiple beams in the rendering system served as a reference condition with optimal directional gain. For the hearing aid beamforming algorithms, head-related transfer functions (HRTFs) were either simulated by continuous parametric filtering or obtained by convolving with measured head-related impulse responses (HRIRs). In this study, virtual acoustics, real-time hearing device processing, and realistic head motion data were combined to investigate the potential of advanced user-behavior-driven signal processing for hearing devices.
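The multiple-beam approach combines several MVDR instances, one per target direction. As a rough illustration of the standard narrowband MVDR weight formula (w = R⁻¹d / (dᴴR⁻¹d), with R the noise covariance and d the steering vector) and of summing per-beam outputs, consider the following sketch; the toy covariance and steering vectors are invented for the example and do not reflect the study's binaural implementation.

```python
import numpy as np

def mvdr_weights(R, d):
    """Narrowband MVDR weights w = R^{-1} d / (d^H R^{-1} d) for noise
    covariance R and steering vector d (one frequency bin)."""
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

def multi_beam_output(R, steering_vectors, x):
    """Sum of MVDR beam outputs for microphone snapshot x, with one
    beam steered to each target direction."""
    return sum(mvdr_weights(R, d).conj() @ x for d in steering_vectors)

# Toy 4-microphone example: the distortionless constraint means each
# beam passes its own target direction with unit gain (w^H d = 1).
R = np.eye(4) + 0.1 * np.ones((4, 4))      # invented noise covariance
d = np.exp(1j * np.linspace(0, np.pi, 4))  # invented steering vector
w = mvdr_weights(R, d)
print(np.allclose(w.conj() @ d, 1.0))      # True
```

Summing the beam outputs is only one possible combination rule; maintaining separate output streams per beam, or selecting the strongest beam, are equally plausible designs that the abstract does not specify.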