gms | German Medical Science

26. Jahrestagung der Deutschen Gesellschaft für Audiologie

Deutsche Gesellschaft für Audiologie e. V.

06.03. - 08.03.2024, Aalen

Listening 2.0 – beyond passiveness in standard hearing experiments

Meeting Abstract

  • presenting/speaker Martin Orf - Universität zu Lübeck, Institut für Psychologie I, Lübeck, Germany
  • Franziska Scharata - Universität zu Lübeck, Institut für Psychologie I, Lübeck, Germany
  • Sarah Tune - Universität zu Lübeck, Institut für Psychologie I, Lübeck, Germany
  • Kaja Strobel - WS Audiology-Sivantos GmbH, Audiological Research Unit, Erlangen, Germany
  • Ronny Hannemann - WS Audiology-Sivantos GmbH, Audiological Research Unit, Erlangen, Germany
  • Jonas Obleser - Universität zu Lübeck, Institut für Psychologie I, Lübeck, Germany

Deutsche Gesellschaft für Audiologie e. V. 26. Jahrestagung der Deutschen Gesellschaft für Audiologie. Aalen, 06.-08.03.2024. Düsseldorf: German Medical Science GMS Publishing House; 2024. Doc018

doi: 10.3205/24dga018, urn:nbn:de:0183-24dga0180

Published: March 5, 2024

© 2024 Orf et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 License. For license information, see http://creativecommons.org/licenses/by/4.0/.


Outline

Text

Contemporary audiology increasingly aims to view age-related hearing loss in the context of real-life communication behaviour. Not least, presbycusis is considered a risk factor for long-term consequences such as social isolation, cognitive decline, and depression. Of particular concern, hearing loss in midlife is the single largest modifiable risk factor for later dementia. Hearing aids, the most common conservative treatment for hearing loss, have seen numerous technological advances, including features such as dynamic compression, to enhance hearing in challenging environments. Additionally, recent innovations such as motion sensors in hearing aids offer valuable data for understanding the simultaneous demands of verbal communication and unrestricted movement, as seen in cognitive-motor interference, shedding light on the long-term impacts of hearing loss. In this study, we are primarily interested in motor data, such as step length, and in how gait parameters vary with terrain and conversational complexity. In the current project, we also aim to capture the participant's active engagement in the conversation (turn-taking). It is therefore essential to record the spoken audio signals produced by both the participant and the experimenter. This enables the analysis of turn-taking (speech latencies, speech pauses, and speech tempo) and the exploration of autocorrelative patterns in gait parameters, EEG parameters, and speech audio within a multivariate framework using regularised linear models. Here we will present the analysis pipeline, which uses state-of-the-art machine learning algorithms, and results from the first pilot subjects.
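As a minimal sketch of how such a multivariate analysis could look, the Python snippet below computes turn-taking latencies from annotated speech segments and relates per-turn gait and EEG features to those latencies with a cross-validated ridge regression. All variable names, feature choices, and the simulated data are illustrative assumptions for this sketch, not the authors' actual pipeline; ridge regression stands in here as one example of a regularised linear model.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# --- Illustrative turn-taking metric --------------------------------------
# Each turn is (speaker, onset_s, offset_s); the latency is the gap between
# the experimenter's speech offset and the participant's next speech onset.
def turn_latencies(turns):
    latencies = []
    for prev, nxt in zip(turns, turns[1:]):
        if prev[0] == "experimenter" and nxt[0] == "participant":
            latencies.append(nxt[1] - prev[2])
    return np.array(latencies)

# --- Illustrative regularised linear model --------------------------------
# X: per-turn predictors (e.g. step length, gait variability, EEG alpha power);
# y: per-turn response latency. Simulated here purely for demonstration.
rng = np.random.default_rng(0)
n_turns = 120
X = rng.normal(size=(n_turns, 3))          # [step_length, gait_var, alpha_power]
y = 0.3 * X[:, 0] - 0.2 * X[:, 2] + rng.normal(scale=0.5, size=n_turns)

# Standardise predictors, then let RidgeCV pick the regularisation strength.
model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))
model.fit(X, y)
print("standardised coefficients:", model[-1].coef_)
```

In a pipeline of this kind, the standardised coefficients indicate how strongly each gait or EEG feature covaries with turn-taking latency, while the cross-validated penalty guards against overfitting the comparatively small pilot samples.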