gms | German Medical Science

26. Jahrestagung der Deutschen Gesellschaft für Audiologie

Deutsche Gesellschaft für Audiologie e. V.

06.03. - 08.03.2024, Aalen

Integrating audiological databases via Auditory Profile generation

Meeting Abstract


  • presenting/speaker Samira Saak - Carl von Ossietzky Universität Oldenburg, Medizinische Physik, Oldenburg, Germany; Carl von Ossietzky Universität Oldenburg, Cluster of Excellence “Hearing4all”, Oldenburg, Germany
  • Mareike Buhl - Institut de l’Audition, Institut Pasteur, Centre de Recherche et d’Innovation en Audiologie Humaine, Paris, France
  • Birger Kollmeier - Carl von Ossietzky Universität Oldenburg, Medizinische Physik, Oldenburg, Germany; Carl von Ossietzky Universität Oldenburg, Cluster of Excellence “Hearing4all”, Oldenburg, Germany

Deutsche Gesellschaft für Audiologie e.V. 26. Jahrestagung der Deutschen Gesellschaft für Audiologie. Aalen, 06.-08.03.2024. Düsseldorf: German Medical Science GMS Publishing House; 2024. Doc186

doi: 10.3205/24dga186, urn:nbn:de:0183-24dga1864

Published: March 5, 2024

© 2024 Saak et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 License. For license details, see http://creativecommons.org/licenses/by/4.0/.



Text

Audiological databases contain valuable knowledge about hearing loss patients that can be exploited to uncover patterns in the data, e.g., to identify patient groups that exhibit similar combinations of audiological test outcomes and may therefore benefit from similar treatment. In a previous study, we developed an approach to summarize patient information from one audiological database into distinct Auditory Profiles (APs) [1]. To cover the complete audiological patient population, however, patient patterns need to be analyzed across multiple larger datasets and, finally, integrated into an adequately combined set of APs.

This study aimed at extending the existing profile generation pipeline [1] with an AP merging step, which allows APs generated from different datasets to be combined based on their similarity across audiological measures. The 13 previously generated APs (N=595) were combined with 31 newly generated APs from a second dataset (N=1272). Overlapping densities of the features common to both datasets (speech test, loudness scaling, audiogram, age) were used to calculate a similarity score, and each profile was then merged with its most similar counterpart. To ascertain the applicability of the profile information in clinical practice, random forest classification models were built that classify patients into the generated APs under different scenarios. These scenarios cover different measurement combinations, e.g., all features, the features typically available to hearing aid acousticians, and single measures such as a speech test alone.
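The merging step described above can be illustrated with a minimal sketch. The following code is not the authors' implementation: the function names (`density_overlap`, `profile_similarity`, `merge_assignments`), the use of Gaussian kernel density estimates, and the averaging of per-feature overlaps into one score are all illustrative assumptions about how an overlap-based similarity between profiles might be computed.

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_overlap(samples_a, samples_b, grid_size=512):
    """Approximate the overlap of two 1-D feature densities.

    Integrates the pointwise minimum of the two kernel density
    estimates over a shared grid: ~1.0 for identical distributions,
    ~0.0 for disjoint ones.
    """
    lo = min(samples_a.min(), samples_b.min())
    hi = max(samples_a.max(), samples_b.max())
    grid = np.linspace(lo, hi, grid_size)
    kde_a = gaussian_kde(samples_a)(grid)
    kde_b = gaussian_kde(samples_b)(grid)
    # Riemann-sum approximation of the overlapping area
    return float(np.minimum(kde_a, kde_b).sum() * (grid[1] - grid[0]))

def profile_similarity(profile_a, profile_b, features):
    """Mean density overlap across the common audiological features."""
    return float(np.mean([density_overlap(profile_a[f], profile_b[f])
                          for f in features]))

def merge_assignments(new_profiles, reference_profiles, features):
    """Assign each new profile to its most similar reference profile."""
    assignments = {}
    for name, prof in new_profiles.items():
        scores = {ref: profile_similarity(prof, ref_prof, features)
                  for ref, ref_prof in reference_profiles.items()}
        assignments[name] = max(scores, key=scores.get)
    return assignments
```

In this sketch each profile is a dict mapping a feature name to the raw per-patient values of that feature; a new profile is merged with whichever reference profile maximizes the mean overlap.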

A new set of 13 combined APs is proposed, as these yielded well-separable profiles that still capture detailed patient information. The classification performance across these profiles is high for most feature sets; the best performance was achieved by combining information from loudness scaling with the SRT. Feature importance analysis reveals the most important features for patient characterization within the current profile set, which relate to speech testing, loudness scaling, and the audiogram.
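A feature importance analysis of the kind reported above can be sketched with scikit-learn's built-in impurity-based importances. The data below is synthetic and the feature names (`srt_db`, `loudness_slope`, `pta_db`, `age`) and label construction are hypothetical stand-ins, not the study's actual measures or profiles.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for audiological measures: speech reception
# threshold, loudness-scaling slope, pure-tone average, and age.
rng = np.random.default_rng(42)
n = 300
X = np.column_stack([
    rng.normal(-4.0, 3.0, n),    # srt_db
    rng.normal(1.0, 0.3, n),     # loudness_slope
    rng.normal(35.0, 15.0, n),   # pta_db
    rng.integers(18, 90, n),     # age
])
# Hypothetical profile labels driven by SRT and pure-tone average only,
# so those two features should dominate the importance ranking.
y = (X[:, 0] > -4.0).astype(int) + 2 * (X[:, 2] > 35.0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["srt_db", "loudness_slope", "pta_db", "age"],
                     clf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

Restricting the columns of `X` to a subset of features mimics the different clinical scenarios (all features, acoustician-available features, a single measure) and lets the per-scenario classification performance be compared.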

The enhanced auditory profile generation pipeline demonstrates the feasibility of combining auditory profiles across datasets, which should generalize to all datasets and could allow for an interpretable, population-based profile set in the future. The classification models maintain clinical applicability across a variety of settings, such as when all features are available, when only features potentially measurable on smartphones are available, or when only the features accessible to hearing aid acousticians are available. Hence, finding the appropriate auditory profile for a given patient becomes possible even when no clinical measures are available and only smartphone-based self-assessed measures can be used.


References

1.
Saak S, Huelsmeier D, Kollmeier B, Buhl M. A flexible data-driven audiological patient stratification method for deriving auditory profiles. Front Neurol. 2022 Sep 15;13:959582. DOI: 10.3389/fneur.2022.959582