gms | German Medical Science

24. Jahrestagung der Deutschen Gesellschaft für Audiologie

Deutsche Gesellschaft für Audiologie e. V.

14.09. - 17.09.2022, Erfurt

Deep learning for the automatic classification of ECAP recordings

Meeting Abstract


  • presenting/speaker Joachim Thiemann - Advanced Bionics GmbH, Hannover, DE
  • Gunnar Geißler - Advanced Bionics, Hannover, DE
  • Raphael Koning - Advanced Bionics, Hannover, DE

Deutsche Gesellschaft für Audiologie e.V. 24. Jahrestagung der Deutschen Gesellschaft für Audiologie. Erfurt, 14.-17.09.2022. Düsseldorf: German Medical Science GMS Publishing House; 2022. Doc122

doi: 10.3205/22dga122, urn:nbn:de:0183-22dga1228

Published: September 12, 2022

© 2022 Thiemann et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 License (attribution required). For license details see http://creativecommons.org/licenses/by/4.0/.



Text

The measurement of electrically evoked compound action potentials (ECAP) in cochlear implants (CI) is a useful tool for diagnostic purposes, both intra- and postoperatively, and for assisting in CI fitting. From the ECAP recordings, diagnostic or fitting software usually derives the ECAP amplitude as the peak-to-peak difference between two key points in the response: the negative peak N1 and the positive peak P2. Usually, for a given channel on a CI electrode, multiple ECAPs are measured at different stimulation current levels, and the measurements are combined to determine the minimum current level at which the auditory nerve responds to electrical stimulation. For this purpose, it is necessary to distinguish recordings that show an evoked response from those that do not.
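The peak-to-peak derivation described above can be sketched as follows. This is an illustrative simplification only, assuming the recording is a NumPy array of voltage samples; clinical software additionally constrains N1 and P2 to plausible latency windows and applies noise checks.

```python
import numpy as np

def ecap_amplitude(recording):
    """Estimate the ECAP peak-to-peak amplitude of a single recording.

    Simplified sketch: N1 is taken as the global minimum of the trace,
    and P2 as the largest sample after N1. Real fitting software uses
    latency windows and further plausibility checks (assumption).
    """
    n1_idx = int(np.argmin(recording))                     # negative peak N1
    p2_idx = n1_idx + int(np.argmax(recording[n1_idx:]))   # positive peak P2, after N1
    return recording[p2_idx] - recording[n1_idx]

# Example on a synthetic trace with a dip followed by a peak:
trace = np.array([0.0, -1.0, 0.5, 2.0, 0.2])
amp = ecap_amplitude(trace)  # → 3.0 (P2 at 2.0 minus N1 at -1.0)
```

Measurements of `ecap_amplitude` at increasing stimulation levels would then be combined (e.g. by regression against stimulation current) to estimate the response threshold.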

However, there are conditions under which the measured response does not fit the usual pattern: the negative peak N1 and the positive peak P2 may not be clearly identifiable, or the peak levels may be distorted to a degree that the ECAP cannot be accurately measured from the recording. In addition to random noise, these distortions can be systematic and therefore cannot be mitigated by averaging. To derive clinically useful information from ECAP recordings, it is necessary to identify unusable responses and exclude them from further processing steps.

In the present contribution, we describe a deep neural network (DNN) based system that classifies ECAP recordings into one of three classes (Response, NoResponse, Artefact) and has been trained on responses classified by experienced audiologists. We compare the performance of the DNN-based classifier to a heuristic classifier and a PCA-based linear classifier, and show that the DNN-based approach more closely matches human decision making and significantly affects the threshold estimates derived from the ECAP recordings.
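The three-class setup described above can be illustrated with a minimal forward pass. The architecture below (a single-hidden-layer MLP with random weights, written in NumPy) is a hypothetical stand-in, not the authors' actual network; in practice, the weights would be learned from audiologist-labelled recordings.

```python
import numpy as np

CLASSES = ("Response", "NoResponse", "Artefact")

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

class TinyECAPClassifier:
    """Hypothetical one-hidden-layer MLP over raw ECAP samples.

    Illustrative only: weights are random here; a trained model would
    fit them to audiologist-labelled recordings (assumption).
    """
    def __init__(self, n_samples=32, n_hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_samples))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(len(CLASSES), n_hidden))
        self.b2 = np.zeros(len(CLASSES))

    def predict(self, recording):
        h = np.maximum(0.0, self.W1 @ recording + self.b1)  # ReLU hidden layer
        probs = softmax(self.W2 @ h + self.b2)              # class probabilities
        return CLASSES[int(np.argmax(probs))], probs
```

Recordings labelled Artefact by such a classifier would be dropped before threshold estimation, while Response/NoResponse labels feed the threshold search described above.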

Funding: This research was supported through KI-SIGS (project 01MK20012S, AP380).

Keywords: cochlear implants, ECAP, deep learning