
27. Jahrestagung der Deutschen Gesellschaft für Audiologie
und Arbeitstagung der Arbeitsgemeinschaft Deutschsprachiger Audiologen, Neurootologen und Otologen

Deutsche Gesellschaft für Audiologie e. V. und ADANO

19. - 21.03.2025, Göttingen

Neural modulations of speech in the context of spoken and sung sentences in cochlear implant patients

Meeting Abstract

  • Isabella Thurner (presenting author) - Medizinische Universität Innsbruck, Innsbruck Cognitive Neuroscience (ICONE), Innsbruck, Österreich; Medizinische Universität Innsbruck, Univ.-Klinik für Hör-, Stimm- und Sprachstörungen, Innsbruck, Österreich
  • Josef Seebacher - Medizinische Universität Innsbruck, Innsbruck Cognitive Neuroscience (ICONE), Innsbruck, Österreich; Medizinische Universität Innsbruck, Univ.-Klinik für Hör-, Stimm- und Sprachstörungen, Innsbruck, Österreich
  • Annika Rackl - Medizinische Universität Innsbruck, Innsbruck Cognitive Neuroscience (ICONE), Innsbruck, Österreich; Medizinische Universität Innsbruck, Univ.-Klinik für Hör-, Stimm- und Sprachstörungen, Innsbruck, Österreich
  • Philipp Zelger - Medizinische Universität Innsbruck, Innsbruck Cognitive Neuroscience (ICONE), Innsbruck, Österreich; Medizinische Universität Innsbruck, Univ.-Klinik für Hör-, Stimm- und Sprachstörungen, Innsbruck, Österreich
  • Markus Runger - Medizinische Universität Innsbruck, Univ.-Klinik für Hör-, Stimm- und Sprachstörungen, Innsbruck, Österreich
  • Patrick Zorowka - Medizinische Universität Innsbruck, Univ.-Klinik für Hör-, Stimm- und Sprachstörungen, Innsbruck, Österreich
  • Simone Graf - Medizinische Universität Innsbruck, Univ.-Klinik für Hör-, Stimm- und Sprachstörungen, Innsbruck, Österreich
  • Sonja Rossi - Medizinische Universität Innsbruck, Innsbruck Cognitive Neuroscience (ICONE), Innsbruck, Österreich; Medizinische Universität Innsbruck, Univ.-Klinik für Hör-, Stimm- und Sprachstörungen, Innsbruck, Österreich

Deutsche Gesellschaft für Audiologie e. V. und ADANO. 27. Jahrestagung der Deutschen Gesellschaft für Audiologie und Arbeitstagung der Arbeitsgemeinschaft Deutschsprachiger Audiologen, Neurootologen und Otologen. Göttingen, 19.-21.03.2025. Düsseldorf: German Medical Science GMS Publishing House; 2025. Doc174

doi: 10.3205/25dga174, urn:nbn:de:0183-25dga1744

Published: March 18, 2025

© 2025 Thurner et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 License. For license information see http://creativecommons.org/licenses/by/4.0/.


Text

Research aims: Profound hearing loss strongly affects not only speech comprehension but also the perception of music. When meaningful speech and music are combined, as in songs, this can be a particularly challenging listening situation for hearing-impaired patients. Provision of a cochlear implant (CI) can at least partially restore hearing. Hearing with a CI leads to reorganization processes in the brain. In the present study, we investigate which neural mechanisms support this reorganization and which neuroplasticity effects emerge, particularly when patients have to extract meaning from sung rather than spoken sentences.

Design and methods: Patients with profound hearing loss in both ears, fitted with either one CI and one hearing aid or with two CIs, were recruited and compared with age-matched normal-hearing (NH) subjects. During the experiment, semantically correct and incorrect spoken and sung sentences were presented acoustically via loudspeakers in front of the subjects. Temporally fine-grained neural processes and the underlying brain areas were assessed simultaneously with electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS).
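
As an illustration only, a minimal sketch of how such a paradigm could be prepared for ERP analysis is given below, using the open-source MNE-Python toolbox; the file name, trigger channel, and event codes are hypothetical placeholders and do not describe the pipeline actually used in the study.

    # Minimal illustrative sketch (not the authors' pipeline): epoching EEG
    # around the critical word of each sentence for the four conditions
    # (spoken/sung x semantically correct/incorrect) using MNE-Python.
    import mne

    # Hypothetical recording; file name, trigger channel and codes are placeholders
    raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)
    raw.filter(l_freq=0.1, h_freq=30.0)  # typical ERP band-pass filter

    events = mne.find_events(raw, stim_channel="STI 014")
    event_id = {
        "spoken/correct": 11, "spoken/incorrect": 12,
        "sung/correct": 21, "sung/incorrect": 22,
    }

    # Epochs time-locked to the critical word, with a pre-stimulus baseline
    epochs = mne.Epochs(raw, events, event_id=event_id,
                        tmin=-0.2, tmax=1.0, baseline=(None, 0), preload=True)
    epochs.save("subject01-epo.fif", overwrite=True)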

Results and conclusion: As expected, preliminary EEG results in NH subjects show an increased N400 for semantically incorrect compared with correct sentences, for both spoken and sung sentences. This component reflects successful semantic integration at the sentence level. Preliminary EEG results in CI patients show no N400 component but instead a more frontally distributed P300 component, with larger amplitudes for semantically incorrect than for correct sentences, present for both spoken and sung sentences. The P300 component usually reflects attentional orienting processes, with increased amplitudes for stimuli requiring more attention. CI patients therefore seem to have greater difficulty with semantically incorrect sentences and to require more processing resources.
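
Purely to illustrate the kind of contrast reported here, the sketch below computes incorrect-minus-correct difference waves from epochs such as those in the previous sketch; the epochs file, electrode selection, and time window are conventional, hypothetical choices rather than parameters of the study.

    # Minimal illustrative sketch: incorrect-minus-correct difference waves.
    # A centro-parietal negativity around 300-500 ms would correspond to an
    # N400-like effect; a more frontal positivity to a P300-like effect.
    import mne

    # Hypothetical epochs file with conditions named as in the sketch above
    epochs = mne.read_epochs("subject01-epo.fif")

    evoked_correct = epochs["correct"].average()      # spoken/correct + sung/correct
    evoked_incorrect = epochs["incorrect"].average()  # spoken/incorrect + sung/incorrect
    diff = mne.combine_evoked([evoked_incorrect, evoked_correct], weights=[1, -1])

    # Mean difference amplitude in a conventional N400 window at centro-parietal sites
    n400 = diff.copy().pick(["Cz", "CPz", "Pz"]).crop(tmin=0.3, tmax=0.5)
    print(f"incorrect - correct, 300-500 ms: {n400.data.mean() * 1e6:.2f} µV")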

Preliminary fNIRS results in NH subjects revealed widespread larger activations for correct compared with incorrect sentences, for both spoken and sung sentences. In addition, larger activations were found for spoken compared with sung sentences in predominantly left-hemispheric prefrontal and temporo-parietal areas, suggesting a stronger focus on the linguistic aspects of spoken sentences. In contrast, preliminary results in CI patients show larger activations for semantically incorrect compared with correct sentences in bilateral prefrontal areas, but only for spoken sentences, indicating increased difficulty in comprehending sung speech. Prefrontal areas typically belong to the attention network and are often recruited as a compensatory mechanism when temporal areas cannot fully support processing.
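
Analogously, and again only as a hypothetical sketch rather than the authors' analysis, the fNIRS signal could be converted to haemoglobin concentrations and block-averaged per condition in MNE-Python; the file name and annotation labels are assumed placeholders.

    # Minimal illustrative sketch (not the authors' pipeline): converting raw
    # fNIRS recordings to haemoglobin concentration and block-averaging the
    # oxygenated-haemoglobin (HbO) response per condition.
    import mne

    raw = mne.io.read_raw_snirf("subject01.snirf", preload=True)  # hypothetical file
    raw_od = mne.preprocessing.nirs.optical_density(raw)
    raw_haemo = mne.preprocessing.nirs.beer_lambert_law(raw_od)   # yields HbO/HbR channels

    # Assumed condition annotations in the recording (e.g. "correct", "incorrect")
    events, event_id = mne.events_from_annotations(raw_haemo)
    epochs = mne.Epochs(raw_haemo, events, event_id=event_id,
                        tmin=-2.0, tmax=15.0, baseline=(None, 0), preload=True)

    # Mean HbO response per condition as a simple activation estimate
    hbo_correct = epochs["correct"].average(picks="hbo")
    hbo_incorrect = epochs["incorrect"].average(picks="hbo")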

In summary, neural plasticity in CI patients appears to recruit attention-related brain networks rather than classical speech-related areas.