Federated learning in audiology: Supervised and unsupervised applications on audiological databases
Published: March 18, 2025
A large variety of data is collected in clinical as well as research settings, including studies with large numbers of patients. These data are typically stored in local, decentralized databases that differ in data structure, choice of tests, and data quality. If these data were analyzed together, knowledge about many patients worldwide could be exploited to obtain a representative overview of existing patient patterns, and classifications could provide probabilities for hearing loss or hearing device categories to support clinical decision-making. However, the decentralized databases are often subject to data privacy restrictions, and data from different sources cannot be directly combined, for example for training machine learning models.
Federated learning aims to overcome these restrictions by training a model (e.g., supervised classification or unsupervised clustering) on each local database separately – one could say, the “model comes to the data”. The learned parameters are then combined into a global model that captures the properties of all local databases and is applied to perform the classification or clustering task on all data. Two studies will be presented that introduce federated learning to audiology, for unsupervised clustering and supervised classification.
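To make the “model comes to the data” idea concrete, the following is a minimal sketch of one federated-averaging round in Python. It assumes a simple parameter vector and sample-size-weighted aggregation; the function names, the toy local update, and the synthetic data are illustrative assumptions and not taken from the studies described here.

```python
import numpy as np

def train_local_model(data, global_params):
    # Hypothetical local update: nudge the parameters toward the local
    # feature means; a stand-in for any real local training step.
    local_params = global_params + 0.5 * (data.mean(axis=0) - global_params)
    return local_params, len(data)

def federated_round(local_datasets, global_params):
    # One round: the model "comes to the data", is updated locally,
    # and only the parameters (plus sample sizes) are sent back.
    updates, sizes = [], []
    for data in local_datasets:
        params, n = train_local_model(data, global_params)
        updates.append(params)
        sizes.append(n)
    weights = np.array(sizes) / sum(sizes)
    # Sample-size-weighted average of the local parameters -> new global model
    return np.average(np.stack(updates), axis=0, weights=weights)

# Two synthetic "local databases" whose raw data never leave their sites
rng = np.random.default_rng(0)
sites = [rng.normal(0.0, 1.0, size=(200, 3)),
         rng.normal(1.0, 1.0, size=(50, 3))]
global_params = np.zeros(3)
for _ in range(10):
    global_params = federated_round(sites, global_params)
print(global_params)
```

Only model parameters and local sample sizes are exchanged in this scheme, which is what allows the global model to reflect all databases without pooling patient records.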
Auditory profiles are a data-driven, unsupervised approach to characterize patients in audiological databases by the combination of different audiological test results, such as audiogram, speech test, and loudness scaling [1]. The generation of profiles is based on model-based clustering, resulting in maximally distinct groups of patients according to their data. Saak et al. (under review) [2] developed an approach for merging the profiles generated from several databases, while only using information about the distributions of the features within the profiles. It was shown that the resulting profiles from two databases plausibly extend the previous set of profiles, e.g., by capturing more severe hearing loss patterns in additional profiles. The merging of auditory profiles exhibits several properties that are crucial for federated merging of audiological databases, providing an example of an unsupervised federated learning approach in audiology.
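The exact merging algorithm is described in [2]. As a rough, hypothetical illustration of merging based only on distributional summaries, the sketch below assumes each profile is represented by a feature mean, covariance, and sample size, and merges profiles across databases when a Bhattacharyya distance falls below a threshold. The Gaussian summaries, the distance measure, the threshold, and the pooling rule are assumptions made for illustration, not the published method.

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    # Bhattacharyya distance between two Gaussian profile summaries
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def merge_profiles(profiles_a, profiles_b, threshold=0.5):
    # Merge two sets of profiles (dicts with 'mean', 'cov', 'n') using only
    # their distributional summaries, never the raw patient data.
    merged = list(profiles_a)
    for p in profiles_b:
        dists = [bhattacharyya(p["mean"], p["cov"], q["mean"], q["cov"])
                 for q in merged]
        j = int(np.argmin(dists))
        if dists[j] < threshold:
            # Pool the two summaries weighted by sample size
            # (simplified: ignores the between-profile mean shift)
            q = merged[j]
            n = p["n"] + q["n"]
            mean = (p["n"] * p["mean"] + q["n"] * q["mean"]) / n
            cov = (p["n"] * p["cov"] + q["n"] * q["cov"]) / n
            merged[j] = {"mean": mean, "cov": cov, "n": n}
        else:
            merged.append(p)  # keep as an additional, e.g. more severe, profile
    return merged

# Toy example: profile summaries from two hypothetical databases
A = [{"mean": np.array([30.0, 50.0]), "cov": np.eye(2) * 25.0, "n": 120}]
B = [{"mean": np.array([32.0, 52.0]), "cov": np.eye(2) * 30.0, "n": 80},
     {"mean": np.array([80.0, 95.0]), "cov": np.eye(2) * 40.0, "n": 40}]
print(len(merge_profiles(A, B)))  # -> 2: one merged profile, one new severe profile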
For a supervised classification based on several audiological databases, i.e., classification into hearing loss or hearing device categories, a federated learning approach is required that captures the properties of the databases and includes a suitable and interpretable design of features. We propose a starting point for a federated learning framework for classification and show first results of its application to distributed subsets of a large hearing aid acoustician database.
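As a hedged sketch of what such a federated classification framework could look like in its simplest form, the example below trains a shared logistic-regression model across distributed data subsets by exchanging only coefficient vectors, following the averaging scheme sketched above. The model choice, the features, the binary label, and the synthetic site data are placeholders and not the proposed framework or the hearing aid acoustician database referred to here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_logreg_update(X, y, w, lr=0.1, epochs=5):
    # Local gradient steps for a logistic-regression classifier,
    # run entirely on one site's data.
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w = w - lr * grad
    return w, len(y)

def federated_classification(site_data, n_features, rounds=20):
    # Federated training of a shared classifier across sites:
    # only coefficient vectors are exchanged, never patient records.
    w = np.zeros(n_features)
    for _ in range(rounds):
        updates, sizes = zip(*(local_logreg_update(X, y, w) for X, y in site_data))
        weights = np.array(sizes) / sum(sizes)
        w = np.average(np.stack(updates), axis=0, weights=weights)
    return w

# Synthetic stand-in for distributed subsets of an audiological database:
# two sites with different size and class balance, hypothetical binary label.
rng = np.random.default_rng(1)
def make_site(n, shift):
    X = rng.normal(shift, 1.0, size=(n, 4))
    y = (X.sum(axis=1) + rng.normal(0, 1, n) > 4 * shift).astype(float)
    return X, y

sites = [make_site(300, 0.0), make_site(120, 0.5)]
w = federated_classification(sites, n_features=4)
print("Global coefficients:", w)
```

Interpretable feature design, as called for above, would enter such a sketch through the choice and scaling of the input features rather than through the aggregation scheme itself.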
Both approaches contribute to characterizing the hearing-impaired population based on decentralized audiological databases, which contain important statistical knowledge that can be exploited to support audiological diagnostic decisions.
References
1. Saak S, Huelsmeier D, Kollmeier B, Buhl M. A flexible data-driven audiological patient stratification method for deriving auditory profiles. Front Neurol. 2022 Sep 15;13:959582. DOI: 10.3389/fneur.2022.959582
2. Saak S, Oetting D, Kollmeier B, Buhl M. Integrating audiological datasets via federated merging of Auditory Profiles [preprint]. arXiv. 2024. DOI: 10.48550/arXiv.2407.20765