
26th Annual Meeting of the Deutsche Gesellschaft für Audiologie

Deutsche Gesellschaft für Audiologie e. V.

6–8 March 2024, Aalen

Decoding of selective attention to speech in CI patients using linear and non-linear methods

Meeting Abstract

  • Constantin Jehn (presenting author) - FAU Erlangen, Chair of Sensory Neuroengineering, Erlangen, Germany
  • Adrian Kossmann - UKD, Sächsisches Cochlear Implant Centrum (SCIC), Dresden, Germany
  • Anja Hahne - UKD, Sächsisches Cochlear Implant Centrum (SCIC), Dresden, Germany
  • Niki Katerina Vavatzanidis - UKD, Sächsisches Cochlear Implant Centrum (SCIC), Dresden, Germany
  • Tobias Reichenbach - FAU Erlangen, Chair of Sensory Neuroengineering, Erlangen, Germany

Deutsche Gesellschaft für Audiologie e.V. 26. Jahrestagung der Deutschen Gesellschaft für Audiologie. Aalen, 06.-08.03.2024. Düsseldorf: German Medical Science GMS Publishing House; 2024. Doc149

doi: 10.3205/24dga149, urn:nbn:de:0183-24dga1499

Published: March 5, 2024

© 2024 Jehn et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 License (attribution required). For license details see http://creativecommons.org/licenses/by/4.0/.


Outline

Text

Research question: Recent research has shown that selective attention to speech can be decoded from non-invasive EEG recordings [1]. Such attention decoding could be applied in neuro-steered cochlear implants (CIs) to guide their signal processing and thus help the wearer understand speech in noisy environments. To pave the way for such devices, decoding strategies need to be developed and validated specifically for hearing-impaired patients. Here we examine linear and non-linear methods for decoding selective attention in a competing-speaker scenario and investigate their efficacy for bimodal CI users.

Methods: EEG data were collected from 15 bimodal cochlear implant patients listening to two competing speech streams presented from spatially separated speakers, in segments lasting two minutes each. Patients were instructed to attend to one of the speech streams during each segment, and attention was assessed through comprehension questions after each segment.

For the data analysis, we first explored a linear forward model in which we computed temporal response functions (TRFs) to estimate the EEG signal from the speech envelope. We further employed independent component analysis (ICA) to identify and remove electrical artifacts from the CI. Decoding of selective attention was then achieved through a regularized linear backward model, sketched in code below. Second, we explored a convolutional neural network (CNN) as a non-linear model that reconstructs the speech envelope from the EEG recordings and thereby infers the focus of attention.
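To make the backward-model approach concrete, the following Python sketch illustrates stimulus reconstruction with ridge regression over time-lagged EEG, followed by a correlation-based attention decision. The lag window, regularization strength, and function names are illustrative assumptions, not the exact pipeline used in this study.

```python
# Minimal sketch of regularized backward-model attention decoding.
# Lag window, ridge strength, and function names are illustrative
# assumptions, not the exact pipeline used in this study.
import numpy as np
from sklearn.linear_model import Ridge

def lag_matrix(eeg, max_lag):
    """Stack time-shifted copies of the EEG (samples x channels).
    Since the neural response follows the stimulus, the envelope at
    time t is reconstructed from EEG samples at t .. t+max_lag-1."""
    n_samples, n_channels = eeg.shape
    lagged = np.zeros((n_samples, n_channels * max_lag))
    for lag in range(max_lag):
        lagged[:n_samples - lag, lag * n_channels:(lag + 1) * n_channels] = eeg[lag:]
    return lagged

def train_decoder(eeg, attended_envelope, max_lag=32, alpha=1e3):
    """Fit a ridge-regularized linear map from lagged EEG to the
    attended speech envelope (the 'backward' direction)."""
    model = Ridge(alpha=alpha)
    model.fit(lag_matrix(eeg, max_lag), attended_envelope)
    return model

def decode_attention(model, eeg, env_a, env_b, max_lag=32):
    """Reconstruct the envelope from held-out EEG and attribute
    attention to the speaker whose true envelope correlates better
    with the reconstruction."""
    reconstruction = model.predict(lag_matrix(eeg, max_lag))
    r_a = np.corrcoef(reconstruction, env_a)[0, 1]
    r_b = np.corrcoef(reconstruction, env_b)[0, 1]
    return "A" if r_a > r_b else "B"
```

The decision rule follows the standard stimulus-reconstruction logic [1]: attention is attributed to the speaker whose acoustic envelope correlates more strongly with the envelope reconstructed from the EEG.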

Results: The ICA-cleaned EEG data revealed significant peaks in the TRF at approximately 90 ms and 180 ms, consistent with findings in normal-hearing participants [2]. The CI stimulation artifacts identified by ICA were substantial, exceeding the neural response by an order of magnitude. The focus of attention could be decoded successfully based on the performance of the linear backward model, and the decoding accuracy increased with longer data durations. Moreover, the CNN achieved even higher decoding accuracies, mirroring outcomes observed in studies with normal-hearing participants [2].
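As an illustration of the non-linear approach, the sketch below shows a hypothetical minimal CNN for reconstructing the speech envelope from multichannel EEG, loosely in the spirit of the deep networks benchmarked in [2]. The depth, kernel widths, and channel counts are assumptions; the actual architecture used in the study may differ.

```python
# Hypothetical minimal CNN for non-linear envelope reconstruction.
# All layer sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class EnvelopeCNN(nn.Module):
    def __init__(self, n_eeg_channels=64, hidden=16, kernel=9):
        super().__init__()
        self.net = nn.Sequential(
            # temporal convolution mixing all EEG channels
            nn.Conv1d(n_eeg_channels, hidden, kernel, padding=kernel // 2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel, padding=kernel // 2),
            nn.ReLU(),
            # collapse to one output channel: the envelope estimate
            nn.Conv1d(hidden, 1, 1),
        )

    def forward(self, eeg):              # eeg: (batch, channels, time)
        return self.net(eeg).squeeze(1)  # (batch, time) envelope estimate
```

Training such a network typically minimizes the mean-squared error or the negative Pearson correlation between the reconstructed and the attended envelope; the attention decision itself proceeds as in the linear case, by comparing the correlations of the reconstruction with each speaker's envelope.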

Conclusion: Our results demonstrate the advantage of non-linear methods for decoding selective attention in bimodal cochlear implant users. However, notable challenges remain on the path towards neuro-steered cochlear implants, as highlighted by the moderate mean decoding accuracies and the substantial variability among participants.


References

1. O'Sullivan JA, Power AJ, Mesgarani N, Rajaram S, Foxe JJ, Shinn-Cunningham BG, Slaney M, Shamma SA, Lalor EC. Attentional Selection in a Cocktail Party Environment Can Be Decoded from Single-Trial EEG. Cereb Cortex. 2015 Jul;25(7):1697-706. DOI: 10.1093/cercor/bht355
2. Thornton M, Mandic D, Reichenbach T. Robust decoding of the speech envelope from EEG recordings through deep neural networks. J Neural Eng. 2022 Jul 6;19(4). DOI: 10.1088/1741-2552/ac7976