gms | German Medical Science

23. Jahrestagung der Deutschen Gesellschaft für Audiologie

Deutsche Gesellschaft für Audiologie e. V.

03.09. - 04.09.2020, Cologne (online conference)

Listening and multitasking in ecologically relevant environments, realized in AVATAR

Meeting Abstract

  • presenting/speaker Annelies Devesse - KU Leuven, Leuven, Belgium
  • Astrid van Wieringen - KU Leuven, Leuven, Belgium
  • Jan Wouters - KU Leuven, Leuven, Belgium

Deutsche Gesellschaft für Audiologie e. V. 23. Jahrestagung der Deutschen Gesellschaft für Audiologie. Köln, 03.-04.09.2020. Düsseldorf: German Medical Science GMS Publishing House; 2020. Doc014

doi: 10.3205/20dga014, urn:nbn:de:0183-20dga0141

Published: September 3, 2020

© 2020 Devesse et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.


Text

Question: Having a successful conversation is challenging. Not only do listeners have to process speech auditory-visually, but they are also likely to encounter various environmental demands that can hinder speech understanding, such as background noise, visual distractors, or having to multitask while listening. However, most behavioral measures of auditory functioning are a simplification of daily life and might therefore underestimate the amount of listening effort individuals typically exert. To address the need for an ecologically relevant measure of auditory functioning, we developed and evaluated the Audio-Visual True-to-Life Assessment of Auditory Rehabilitation (AVATAR) paradigm. AVATAR minimizes differences between real-world environments and laboratory test conditions by presenting speech auditory-visually via virtual talkers and by reconstructing various external demands of everyday listening situations in a quantitative and controlled way, including auditory-visual environments, auditory spatial complexity, and multitasking.

Methods: First, we validated the quality of the virtual talker's visual speech information in a speech-reading task and a speech intelligibility task. In the speech-reading task, skilled speech readers were asked to identify as many speech stimuli as possible from lists of Dutch words and sentences, uttered visual-only by the virtual human talker. In the speech intelligibility task, sentences in noise were presented either auditory-only or auditory-visually to a group of 35 young normal-hearing participants, to quantify the intelligibility benefit of providing visual speech cues. Second, we implemented the virtual talker in the AVATAR paradigm, which combined an auditory-visual speech-in-noise test with three secondary tasks on auditory localization and visual short-term memory. Secondary-task performance was taken as an estimate of the amount of cognitive resources allocated during listening; hence, lower secondary-task performance was thought to reflect increased listening effort. AVATAR was administered to a group of 35 young normal-hearing adults.
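For illustration only (the abstract does not specify the exact formula used in AVATAR), multitask or dual-task costs of this kind are commonly expressed as the relative drop in secondary-task performance when the task is combined with listening:

\[ \mathrm{DTC}\,(\%) = 100 \times \frac{P_{\text{single}} - P_{\text{dual}}}{P_{\text{single}}} \]

where \(P_{\text{single}}\) denotes secondary-task performance in isolation and \(P_{\text{dual}}\) denotes performance while simultaneously performing the speech-in-noise task; larger values indicate that more processing resources are diverted to listening.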

Results: First, skilled speech readers correctly identified up to 67% of the words and sentences uttered by the virtual human talker. Furthermore, visual speech cues improved the intelligibility of sentences in noise by 1.5 to 2 dB in young normal-hearing listeners. Second, whereas performance on the speech-in-noise task remained stable across all AVATAR conditions, multitask costs on one of the secondary tasks became significantly larger with increasing task complexity, i.e., when more secondary tasks were added.

Conclusion: Taken together, these results support the applicability of our virtual talker in auditory-visual speech assessment paradigms such as AVATAR. AVATAR itself proved to be a promising model for assessing speech intelligibility and for gauging the amount of processing resources allocated during effortful listening in ecologically relevant situations.