gms | German Medical Science

25. Jahrestagung der Deutschen Gesellschaft für Audiologie

Deutsche Gesellschaft für Audiologie e. V.

01.03. - 03.03.2023, Köln

Listening and multitasking using an ecologically relevant paradigm: AVATAR

Meeting Abstract

  • presenting/speaker Astrid van Wieringen - KU Leuven, Leuven, BE
  • Annelies Devesse - KU Leuven, Leuven, BE
  • Lyan Porto - KU Leuven, Leuven, BE
  • Mira Van Wilderode - KU Leuven, Leuven, BE
  • Jan Wouters - KU Leuven, Leuven, BE

Deutsche Gesellschaft für Audiologie e.V. 25. Jahrestagung der Deutschen Gesellschaft für Audiologie. Köln, 01.-03.03.2023. Düsseldorf: German Medical Science GMS Publishing House; 2023. Doc013

doi: 10.3205/23dga013, urn:nbn:de:0183-23dga0134

Published: March 1, 2023

© 2023 van Wieringen et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 License. For license details see http://creativecommons.org/licenses/by/4.0/.


Outline

Text

Hearing ability and the potential benefit of hearing amplification are traditionally evaluated using single outcome measures (e.g. speech in quiet with a single speaker). These outcomes often fail to reflect everyday hearing difficulties. Various environmental demands hinder speech understanding, including background noise, visual distractors, and the need to multitask while listening. To address the need for an ecologically relevant measure of auditory functioning, we developed and evaluated the Audio-Visual True-to-Life Assessment of Auditory Rehabilitation (AVATAR) paradigm. AVATAR minimizes differences between real-world environments and laboratory test conditions by presenting speech audio-visually via virtual talkers and by reconstructing external demands of everyday listening situations in a quantitative and controlled way. A listener is immersed in a complex and challenging audio-visual environment: life-like virtual figures are projected from different directions within different scenarios (e.g. a restaurant, a train). The complexity of a listening condition is varied by combining different cues, such as auditory and/or visual cues presented separately or together, the presence or absence of noise, and static or moving sound sources. I will present highlights of several studies aimed at capturing speech-in-noise understanding under multitasking conditions. In a first series of studies [1], [2], [3], a virtual talker in the AVATAR paradigm was presented together with a visual short-term memory task and dynamic cues. Data were collected for young and middle-aged persons with and without hearing impairment. Subsequently, we started to investigate how changes in posture affect speech understanding in noise. In addition, the listening paradigm was extended to multiple speakers, and conditions with competing talkers will be presented. In summary, AVATAR is a promising model for assessing speech intelligibility in noise and for gauging the processing resources allocated during effortful listening in ecologically relevant situations.
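As a minimal illustration of how such a factorial condition space could be specified (a sketch only; the factor names below are hypothetical and do not reflect the authors' actual implementation), the cue dimensions described above can be enumerated programmatically, for example in Python:

    from itertools import product

    # Hypothetical cue dimensions of a listening condition, following the
    # description above: talker modality, background noise, and source movement.
    modalities = ["auditory-only", "visual-only", "audio-visual"]
    noise_levels = ["quiet", "background noise"]
    movement = ["static sources", "moving sources"]

    # Each combination defines one controlled listening condition.
    conditions = [
        {"modality": m, "noise": n, "movement": mv}
        for m, n, mv in product(modalities, noise_levels, movement)
    ]

    for i, cond in enumerate(conditions, start=1):
        print(f"Condition {i:2d}: {cond['modality']}, {cond['noise']}, {cond['movement']}")

Enumerating conditions in this way yields, in this hypothetical case, twelve controlled listening conditions whose difficulty can be manipulated systematically rather than ad hoc.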


References

1. Devesse A, Wouters J, van Wieringen A. Age affects speech understanding and multitask costs. Ear Hear. 2020 Sep/Oct;41(5):1412–5. DOI: 10.1097/AUD.0000000000000848
2. Devesse A, van Wieringen A, Wouters J. AVATAR assesses speech understanding and multitask costs in ecologically relevant listening situations. Ear Hear. 2020 May/Jun;41(3):521–31. DOI: 10.1097/AUD.0000000000000778
3. Devesse A, van Wieringen A, Wouters J. The cost of intrinsic and extrinsic cognitive demands on auditory functioning in older adults with normal hearing or using hearing aids. Ear Hear. 2021 May/Jun;42(3):615–28. DOI: 10.1097/AUD.0000000000000963