Integrating audiological databases via Auditory Profile generation
Published: March 5, 2024
Audiological databases contain valuable knowledge about hearing loss patients that can be exploited to learn about patterns in the data, e.g., for identifying patient groups that exhibit similar combinations of audiological test outcomes and may therefore benefit from similar treatment. To cover the complete audiological patient population, however, patient patterns need to be analyzed across multiple larger datasets and finally integrated into an adequately combined set of Auditory Profiles (APs). In a previous study, we developed an approach to summarize patient information from one audiological database into distinct APs [1].
This study aimed to extend the existing profile generation pipeline [1] with an AP merging step that combines APs generated from different datasets based on their similarity across audiological measures. The 13 previously generated APs (N=595) were combined with 31 newly generated APs from a second dataset (N=1272). Overlapping densities of common features across the two datasets (speech test, loudness scaling, audiogram, age) were used to calculate a similarity score, and each profile was merged with its most similar counterpart. To ascertain applicability of the profile information in clinical practice, random forest classification models were built that assign patients to the generated APs under different scenarios, i.e., different measurement combinations: using all features, using features generally available to hearing aid acousticians, or using single measures, such as only a speech test.
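The merging step can be illustrated with a short sketch. The following Python example is a hypothetical illustration, not the authors' implementation: it estimates the overlap of two profiles' feature densities with kernel density estimates, averages the overlap across common features into a similarity score, and assigns each profile of the second dataset to its most similar profile of the first. Data structures and feature names are assumptions.

```python
# Hypothetical sketch of a density-overlap similarity for merging
# Auditory Profiles (APs); profiles are assumed to be dicts mapping
# feature names to arrays of per-patient values.
import numpy as np
from scipy.stats import gaussian_kde

def overlap_coefficient(x_a, x_b, n_grid=512):
    """Overlap of two estimated feature densities, in [0, 1]."""
    grid = np.linspace(min(x_a.min(), x_b.min()),
                       max(x_a.max(), x_b.max()), n_grid)
    dens_a = gaussian_kde(x_a)(grid)
    dens_b = gaussian_kde(x_b)(grid)
    # Integrate the pointwise minimum of both densities over the grid.
    return float(np.sum(np.minimum(dens_a, dens_b)) * (grid[1] - grid[0]))

def profile_similarity(profile_a, profile_b, features):
    """Average density overlap across the common features of two APs."""
    return float(np.mean([overlap_coefficient(profile_a[f], profile_b[f])
                          for f in features]))

def merge_profiles(aps_1, aps_2, features):
    """Assign each AP of dataset 2 to its most similar AP of dataset 1."""
    assignment = {}
    for name_b, profile_b in aps_2.items():
        scores = {name_a: profile_similarity(profile_a, profile_b, features)
                  for name_a, profile_a in aps_1.items()}
        assignment[name_b] = max(scores, key=scores.get)
    return assignment
```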
A new set of 13 combined APs is proposed, as these resulted in well-separable profiles that still capture detailed patient information. Classification performance across these profiles is high for most feature sets; the best performance was achieved by combining information from loudness scaling with the SRT. Feature importance analysis reveals the most important features for patient characterization within the current profile set, which are related to speech testing, loudness scaling, and the audiogram.
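As a rough illustration of the classification scenarios, the sketch below trains a random forest on one hypothetical feature subset and reads out impurity-based feature importances. Column names, the label column, and the example feature subset are placeholders, not the study's actual variables or model settings.

```python
# Hypothetical sketch of AP classification for one measurement scenario
# using scikit-learn; feature columns are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def classify_scenario(data: pd.DataFrame, feature_cols, label_col="AP"):
    X_train, X_test, y_train, y_test = train_test_split(
        data[feature_cols], data[label_col],
        stratify=data[label_col], random_state=0)
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)
    importances = dict(zip(feature_cols, model.feature_importances_))
    return accuracy, importances

# Example scenario: only speech test (SRT) and loudness scaling features.
# acc, imp = classify_scenario(df, ["srt", "loudness_slope", "loudness_l50"])
```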
The enhanced auditory profile generation pipeline demonstrates the feasibility of combining auditory profiles across datasets, which should generalize to further datasets and could allow for an interpretable, population-based profile set in the future. The classification models maintain clinical applicability for a variety of settings, such as availability of all features, features potentially available on smartphones, and features available at hearing aid acousticians. Hence, it becomes possible to find the appropriate auditory profile for a given patient even when no clinical measures are available and only smartphone-based self-assessed measures can be used.
References
1. Saak S, Huelsmeier D, Kollmeier B, Buhl M. A flexible data-driven audiological patient stratification method for deriving auditory profiles. Front Neurol. 2022 Sep 15;13:959582. DOI: 10.3389/fneur.2022.959582