
27th Annual Meeting of the Deutsche Gesellschaft für Audiologie
and Working Meeting of the Arbeitsgemeinschaft Deutschsprachiger Audiologen, Neurootologen und Otologen

Deutsche Gesellschaft für Audiologie e. V. and ADANO

19–21 March 2025, Göttingen

Perception of emotional expression in cochlear implant users

Meeting Abstract


  • Presenting author/speaker: Celina Isabelle von Eiff - Friedrich-Schiller-Universität Jena, Jena, Germany

Deutsche Gesellschaft für Audiologie e. V. und ADANO. 27. Jahrestagung der Deutschen Gesellschaft für Audiologie und Arbeitstagung der Arbeitsgemeinschaft Deutschsprachiger Audiologen, Neurootologen und Otologen. Göttingen, 19.-21.03.2025. Düsseldorf: German Medical Science GMS Publishing House; 2025. Doc217

doi: 10.3205/25dga217, urn:nbn:de:0183-25dga2178

Published: March 18, 2025

© 2025 von Eiff.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 License. For license information, see http://creativecommons.org/licenses/by/4.0/.



Text

Speech comprehension is considered the benchmark outcome of cochlear implants (CIs), but this focus disregards the importance of non-verbal social-communicative vocal signals. Accordingly, CI users’ ability to recognize vocal emotions remains strikingly understudied, even though it is essential for communication and closely connected to quality of life in CI users. To fill this knowledge gap, we investigated vocal emotion perception in CI users and the effects of facial information on this ability. In all experiments, we used state-of-the-art voice morphing methods to precisely control acoustic parameters in voice recordings.
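Technically, voice morphing of this kind amounts to interpolating (and, for the caricatures discussed below, extrapolating) between time-aligned acoustic parameter tracks of two recordings. The abstract does not name the morphing software or parameterization, so the following Python sketch only illustrates the principle with hypothetical F0 contours: morph levels between 0 and 1 yield intermediate emotional intensities, and levels above 1 exaggerate the emotional features into a caricature.

```python
import numpy as np

def morph_voice_params(neutral: np.ndarray, emotional: np.ndarray,
                       level: float) -> np.ndarray:
    """Linearly interpolate/extrapolate between time-aligned acoustic
    parameter tracks (e.g., an F0 contour or spectral-envelope frames).

    level = 0.0  -> neutral original
    level = 0.5  -> 50% morph (intermediate emotional intensity)
    level = 1.0  -> emotional original
    level > 1.0  -> caricature (emotional features exaggerated)
    """
    return neutral + level * (emotional - neutral)

# Hypothetical F0 contours (Hz), sampled at matching time points.
f0_neutral = np.array([210.0, 212.0, 208.0, 205.0])
f0_happy   = np.array([250.0, 270.0, 240.0, 230.0])

print("50% morph F0: ", morph_voice_params(f0_neutral, f0_happy, 0.5))
print("150% caricature F0:", morph_voice_params(f0_neutral, f0_happy, 1.5))
```

In practice, such interpolation would be applied jointly to fundamental frequency and timbre parameters, which is what allows these two cue classes to be manipulated independently of each other.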

Across experiments, CI users performed worse than normal-hearing (NH) individuals in vocal emotion perception overall, with or without facial information. Importantly, there were large interindividual differences among CI users, with low performers responding close to chance level. Whereas NH individuals used timbre and fundamental frequency information to equivalent degrees when recognizing vocal emotions, CI users used timbre information more efficiently than fundamental frequency information for the same task. Some CI users exploited timbre information remarkably well, demonstrating that CI devices can transmit timbre signals efficiently.

Crucially, because emotion perception with a CI can be improved by vocal caricatures, we developed and tested a perceptual training program with caricatures as training stimuli, with promising results. We also created a substantial audiovisual (AV) database of emotional voice and dynamic face stimuli, with voices varying in emotional intensity via different morph levels to allow adaptive testing and calibration of task difficulty, to study AV emotion perception in CI users. Compared to NH individuals, CI users showed stronger benefits for vocal emotion perception when time-synchronized congruent facial emotional information was available, and these larger crossmodal benefits persisted even at equal auditory-only performance levels. Importantly, this suggests that the benefits reflect deafness-related compensation rather than merely degraded acoustic representations. Finally, the findings confirmed the positive relationship between vocal emotion recognition abilities and quality-of-life ratings in CI users.
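The graded morph levels in the AV database are what make adaptive calibration of task difficulty straightforward. The abstract does not specify the adaptive procedure, so the sketch below shows one standard psychophysical option, a hypothetical 1-up/2-down staircase on morph level (emotional intensity): two consecutive correct responses lower the intensity (harder), one error raises it (easier), converging on the intensity yielding roughly 70.7% correct.

```python
def staircase_next_level(level: float, correct: bool, state: dict,
                         step: float = 0.1,
                         lo: float = 0.1, hi: float = 1.0) -> float:
    """One step of a 1-up/2-down staircase on morph level.

    Two consecutive correct responses -> lower emotional intensity
    (harder trial); a single error -> higher intensity (easier trial).
    """
    if correct:
        state["streak"] = state.get("streak", 0) + 1
        if state["streak"] == 2:        # two in a row: make it harder
            state["streak"] = 0
            level -= step
    else:                               # error: make it easier
        state["streak"] = 0
        level += step
    return min(max(level, lo), hi)      # clamp to the available morph range

# Simulated run: a listener who is correct whenever intensity >= 0.4.
state, level = {}, 1.0
for trial in range(20):
    response_correct = level >= 0.4
    level = staircase_next_level(level, response_correct, state)
print(f"staircase converged near morph level {level:.2f}")
```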

Overall, the current studies suggest that AV stimuli are beneficial during CI rehabilitation. Moreover, they demonstrate that morphing, and specifically caricaturing, offers novel perspectives not only for assessing the sensory determinants of human communication but also for improving the perception of emotional expression and, ultimately, quality of life in CI users.