Article
Digital assistants for hearing aid wearers based on cloud-based artificial intelligence
Published: 11 June 2024
Abstract
In recent years, hearing aids have undergone enormous development: innovative signal processing and precise speech recognition have led to considerably improved speech understanding. At the same time, the fitting process has not changed significantly: a recommendation for suitable gain settings is mostly determined using prescriptive fitting formulae based on the wearers’ audiogram. However, this approach neglects differences in loudness perception, noise tolerance, and individual sound preferences. These issues can be addressed when end users fine-tune their hearing aids via smartphone app-based digital assistants, which offer several advantages over fine-tuning by hearing care professionals. First, digital assistants allow highly individualized adaptations provided by artificial intelligence (AI). Second, the impact of memory bias is reduced, as they can be used directly in the acoustically challenging situation. Finally, the applied setting updates can be evaluated immediately, and hearing aid wearers may accept or reject them. In this short report, we discuss opportunities and challenges of such a digital assistant. We focus on the question of how hearing aid wearers prefer to use the digital assistant: directly in the problematic situation or afterwards. To this end, we analyze large-scale user data, which shows that both modes of use, in the problematic situation and afterwards, are popular. To meet these user expectations, we show how both modes of operation can be implemented in the digital assistant. Our findings highlight the need to validate app design in the field in order to maximize the usefulness of digital assistance systems.
Artificial intelligence (AI)-based hearing care support
Hearing aids are usually fitted and optionally fine-tuned in a quiet office or lab environment. However, problems with audibility or handling of the hearing aids usually occur in more challenging daily-life situations and listening environments. To allow reasonable fine-tuning of the hearing aid settings, a wearer must remember specific problems and the corresponding situations and report them accurately during a follow-up visit to the hearing care professional. This gap can be bridged by smartphone-based digital assistants, which empower the wearer to fine-tune their hearing aids directly in the situation as required, providing instant improvements that both match the acoustic context and are customized to the wearer’s preferences. Hearing care professionals thereby save time through fewer follow-up visits, while each remaining visit gains value because it can focus on the personal aspects of hearing care.
The digital assistant presented here is integrated into a smartphone app which also serves as a remote control for the hearing aids. A simple user interface (Figure 1) ensures easy accessibility for wearers of all ages. With the help of a chatbot, the wearer can, for example, describe a problem with the listening experience in a particular situation or a handling problem. The solution-finding process starts with a predefined list of issues and two to three follow-up questions to narrow down the exact problem. Depending on the nature of the problem, the app then either assists the wearer with relevant information in the form of text hints and short video clips (for handling problems) or uses a cloud-based AI system to suggest updated settings that improve the listening experience. These updates are applied to the hearing aids immediately, so that the wearer can evaluate them directly in the respective situation. The wearer is then asked whether the new settings should be retained or discarded. Where technically possible and desired, the suggested solution can be applied again, or an alternative solution is offered. At the next follow-up visit, a list of encountered problems as well as the changes made by the assistant is visible to the hearing care professional when reading out the hearing aids via the fitting software (Figure 2). This enables a more detailed understanding of customer needs and thus creates additional value to support the fitting and fine-tuning process.
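A minimal sketch of this propose/evaluate/accept-or-reject loop is shown below. All class and function names are illustrative assumptions for this report, not the app’s actual API:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ProblemKind(Enum):
    HANDLING = auto()   # solved with text hints and short video clips
    LISTENING = auto()  # solved with AI-suggested setting updates

@dataclass
class Problem:
    kind: ProblemKind
    description: str    # narrowed down via two to three follow-up questions

def resolve(problem, suggest, apply, wearer_accepts, max_alternatives=3):
    """Accept/reject loop: apply a suggestion, let the wearer judge it."""
    if problem.kind is ProblemKind.HANDLING:
        return "show text hints and video clips"
    for _ in range(max_alternatives):     # offer alternatives while available
        settings = suggest(problem)       # cloud AI proposes new settings
        apply(settings)                   # applied to the aids immediately
        if wearer_accepts(settings):      # evaluated directly in the situation
            return settings               # kept, and logged for the HCP
    return None                           # all suggestions rejected

# Usage with trivial stand-ins for the cloud AI and the wearer's feedback:
problem = Problem(ProblemKind.LISTENING, "speech is too soft")
print(resolve(problem,
              suggest=lambda p: {"gain_db": +3},
              apply=lambda s: None,
              wearer_accepts=lambda s: True))   # -> {'gain_db': 3}
```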
Improving hearing aid settings based on AI
The digital assistant draws on extensive experience from external market expertise, academic data, and decades of research and development (R&D) to propose solutions that deliver the greatest benefit. This includes literature on typical fine-tuning problems and solutions from hearing care professionals [1], [2], describing the most relevant issues and typical countermeasures. Proprietary solutions used with traditional fitting software, such as Basic Tuning and Fitting Assistant, have also been reviewed and considered. In addition, general knowledge was gathered from experienced hearing care professionals in customer service and R&D. Based on these sources, specific fine-tuning measures in terms of amplification, compression, noise reduction, and automatic directional microphones, as well as appropriate step sizes, were determined so that all solutions yield perceptible improvements within reasonable limits and the changes are in line with the hearing care professional’s intentions. End users can only change the current hearing aid settings within predefined limits, which prevents the gain from becoming too low or too high and ensures that the primary responsibility remains with the hearing care professional.
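As an illustration of such predefined limits, the following sketch clamps each requested change to bounds around the original fit. The band names, gains, and bound width are assumptions chosen for the example, not the product’s actual values:

```python
def clamp_update(current, delta, limits):
    """Apply a fine-tuning step without exceeding the predefined bounds."""
    updated = {}
    for band, gain in current.items():
        lo, hi = limits[band]                    # bounds fixed at fitting time
        updated[band] = min(hi, max(lo, gain + delta.get(band, 0.0)))
    return updated

# Example: a +8 dB request in the high band is clamped to +/-6 dB around the fit.
fit = {"low": 20.0, "mid": 25.0, "high": 30.0}   # fitted gains in dB
limits = {band: (g - 6.0, g + 6.0) for band, g in fit.items()}
print(clamp_update(fit, {"high": 8.0}, limits))  # high stops at 36.0 dB
```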
In an early stage of prototyping, it was found that four problem classifications were sufficient to address ~80% of the most important problems of hearing aid wearers without overwhelming the user. Structured differentiation strategies are subsequently used to trigger the most appropriate mitigation path. To understand the full context, the assistant not only considers the problem description (sound source and attribute) and the current settings, but also incorporates objective parameters of the current situation (e.g., the output of a situation classifier and level information, read out from the hearing aids) as well as relevant individual information about the wearer (Figure 3). This information is sent to a cloud-based AI system, where a deep neural network provides a fine-tuning suggestion tailored to the wearer and the acoustic context. As an example, consider the problem “sound in general is perceived as (too) sharp”. In this case, the assistant could suggest an increase in low-frequency gain or a decrease in high-frequency amplification, either overall or only for certain input levels. Each of these possible solutions is suitable for decreasing the perceived sharpness. However, some solutions are only applicable in certain acoustic environments; e.g., increasing or decreasing gain for medium or loud input levels makes no sense in quiet environments.
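The information bundle sent to the cloud could look roughly like the following sketch; the field names and values are assumptions for illustration, not the actual wire format:

```python
import json

# Everything the assistant bundles for the cloud AI, per the description above.
request = {
    "problem": {"source": "sound in general", "attribute": "too sharp"},
    "current_settings": {"gain_db": {"low": 20, "mid": 25, "high": 30}},
    "situation": {                        # objective data read from the aids
        "classifier_output": "Speech in Quiet",
        "level_db_spl": 55,
    },
    "wearer": {"experience_level": "new wearer"},   # individual information
}
print(json.dumps(request, indent=2))
# A deep neural network in the cloud maps such a request to a suggestion,
# e.g. reducing high-frequency gain, restricted to input levels that
# actually occur in the detected scene.
```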
How is the assistant used? Analysis of user interactions
Anonymous data collected from the digital assistant’s user interactions allow monitoring of hearing aid wearers’ problems and of how successful the assistant is in solving them. We have previously presented usage data indicating that the assistant is mostly used to fine-tune new hearing aids and allows hearing aid wearers to reach their individually preferred settings [3]. In addition, combining the problem statements with the feedback on the proposed solutions enables constant improvement of the solutions offered by the assistant. In the following, we discuss another important factor accessible via user data: whether hearing aid wearers prefer to use the digital assistant directly in the problematic situation or not.
The idea underlying the design of the digital assistant is to empower hearing aid wearers by giving them the means to improve their hearing experience directly within problematic situations. The initial design of the smartphone app follows this concept: the solutions provided by the assistant are based on information read out from the hearing aids which describes the acoustic environment, and the solutions are tuned to the specific acoustic conditions. For example, if the assistant is used in a loud environment, then it may suggest updates to the compression settings which only affect the sound processing in loud situations. Thus, if the assistant is used while acoustic conditions differ from the target situation, the solutions may not be effective in solving the problem.
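A level-dependent update of this kind can be pictured as in the sketch below: the suggested change touches only the gain applied to loud inputs, leaving soft and medium inputs unchanged. The level breakpoints and gain values are illustrative assumptions:

```python
def gain_for_level(input_db, gains):
    """Pick the gain for soft (<50 dB), medium (50-75 dB) or loud inputs."""
    if input_db < 50:
        return gains["soft"]
    if input_db < 75:
        return gains["medium"]
    return gains["loud"]

gains = {"soft": 25.0, "medium": 18.0, "loud": 8.0}  # compressive gains in dB
gains["loud"] -= 3.0   # suggestion made in a loud scene: only "loud" changes

print(gain_for_level(60, gains))   # medium-level input: unchanged, 18.0
print(gain_for_level(85, gains))   # loud input: reduced to 5.0
```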
Detecting and improving usage after the problematic situation
The acoustic scene classifier on the hearing aids provides insight into whether the stated problem matches the current acoustic context. The classifier categorizes the current situation into one of six categories, including “Quiet”, “Speech in Quiet”, and “Speech in Noise” (Figure 4). The usage data shows that for more than 40% of the interactions, the environment is classified as “Quiet”.
We can analyze the relationship between the problem and the environment by looking at the correlation of the detected acoustic class with the problematic sound sources that users can select (Figure 5). Again, most interactions take place in quiet environments, regardless of the specific problem. This is surprising, as some problems (e.g., “Loud Noises”) cannot realistically occur in quiet scenes.
We can estimate the fraction of interactions in which the assistant is not used directly in the problematic situation by defining pairings of problem statement and acoustic class which are in clear contradiction and thus unlikely to occur (Figure 5, left, hatched areas). For example, problem statements concerning other voices should occur when the acoustic scene classifier also detects speech activity. A conservative estimate of how often such a contradiction arises indicates that the assistant is used after the problematic situation in more than a quarter of the cases (Figure 5, right). In these cases, the proposed solutions would not help with the actual problem of the hearing aid wearer.
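The estimate amounts to counting interactions whose problem/scene pair falls into one of the contradictory cells, as in this sketch; the pairings and log format are assumptions for illustration:

```python
# Problem/scene pairs considered contradictory (cf. Figure 5, hatched areas).
CONTRADICTIONS = {
    ("Loud Noises", "Quiet"),          # loud-noise problems need a loud scene
    ("Other Voices", "Quiet"),         # voice problems need speech activity
}

interactions = [  # (selected sound source, acoustic class from the aids)
    ("Loud Noises", "Quiet"),
    ("Other Voices", "Speech in Noise"),
    ("Own Voice", "Quiet"),
    ("Loud Noises", "Noise"),
]

mismatched = sum(pair in CONTRADICTIONS for pair in interactions)
print(f"used after the situation in at least {mismatched/len(interactions):.0%}")
```

Because only unambiguous contradictions are counted, the resulting fraction is a lower bound on how often the assistant is used after the fact.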
This data shows that the digital assistant is not always used as intended, highlighting a difference between its design concept and the way users intuitively understand (or prefer to use) it. The reasons for these preferences remain to be explored but could include factors such as politeness (using a smartphone in some social situations, especially conversations, may not be acceptable) or lack of time while the problematic situation is ongoing. Using the digital assistant afterwards means hearing aid wearers have enough time to work with it without distraction and without giving the impression of being impolite. In any case, the data suggests that the assistant should be modified accordingly.
Aligning the assistant’s behavior with users’ expectations
To allow usage of the assistant both according to the original design and according to users’ expectations, the assistant’s dialog was amended and now includes a new question: are users experiencing the problem at the same time as they are using the assistant (Figure 6)? Depending on the answer, different solutions are proposed by the digital assistant (a minimal sketch of this branching follows the list):
- If the problematic situation is currently ongoing: the cloud-based machine learning backend will take the current acoustic environment into account and will suggest solutions which are specifically tuned to the current situation.
- Otherwise, the suggested solutions will be somewhat broader in scope and will affect all possible hearing situations. However, they cannot be tuned specifically to the problematic situation.
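The following sketch illustrates the branch introduced by the new question; the function and field names are placeholders, not the actual backend API:

```python
def propose_solutions(problem, ongoing, scene=None):
    """Branch on the new dialog question (Figure 6)."""
    if ongoing and scene is not None:
        # Problem is happening right now: tune specifically to the scene.
        return {"scope": scene, "change": f"scene-specific fix for {problem}"}
    # Used afterwards: a broader change affecting all hearing situations.
    return {"scope": "all situations", "change": f"general fix for {problem}"}

print(propose_solutions("speech too soft", ongoing=True, scene="Speech in Noise"))
print(propose_solutions("speech too soft", ongoing=False))
```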
This change aligns the operating principle of the digital assistant with its users’ expectations and allows them to perform settings updates in their individually preferred manner.
Discussion
We have presented a digital assistant [4] which allows hearing aid wearers to fine-tune their devices according to their individual preferences. While the design of the assistant originally supported only modifications directly in the problematic situation, usage data collected from the smartphone app indicated that in many cases users prefer to use the assistant afterwards. We have shown how the digital assistant has been updated to give users the option to choose between both variants. This highlights both the necessity of validating tools like the presented digital assistant in the field and the opportunities that arise from the collection of usage data. We expect that this update increases the digital assistant’s usefulness both for hearing aid wearers (by allowing them to fine-tune their devices whenever they wish) and for hearing care professionals (by further improving customer satisfaction).
Notes
Conference presentation
This contribution was presented at the 26th Annual Conference of the German Society of Audiology and published as an abstract [5].
Competing interests
The authors declare that they have no competing interests.
References
1. Jenstad LM, Van Tasell DJ, Ewert C. Hearing aid troubleshooting based on patients’ descriptions. J Am Acad Audiol. 2003 Sep;14(7):347-60.
2. Thielemans T, Pans D, Chenault M, Anteunis L. Hearing aid fine-tuning based on Dutch descriptions. Int J Audiol. 2017 Jul;56(7):507-15. DOI: 10.1080/14992027.2017.1288302
3. Wolf V, Mueller M. End-user controlled Fine-Tuning of Hearing Instruments – Opportunities and Challenges for an interactive Digital Assistant. In: DAGA 2023 – 49. Jahrestagung für Akustik; 2023 Mar 6-9; Hamburg, Germany. p. 170-3. Available from: https://pub.dega-akustik.de/DAGA_2023/data/articles/000182.pdf
4. Høydal EH, Fischer RL, Wolf V, Branda E, Aubreville M. Empowering the Wearer: AI-based Signia Assistant Allows Individualized Hearing Care. The Hearing Review. 2020 Jul 15. Available from: https://hearingreview.com/hearing-loss/patient-care/hearing-fittings/empowering-the-wearer-ai-based-signia-assistant-allows-individualized-hearing-care
5. Wolf V. Digitale Assistenten für Hörgeräteträger auf Basis von Cloud-gestützter KI. In: Deutsche Gesellschaft für Audiologie e.V., editor. 26. Jahrestagung der Deutschen Gesellschaft für Audiologie; 2024 Mar 6-8; Aalen, Germany. Düsseldorf: German Medical Science GMS Publishing House; 2024. Doc019. DOI: 10.3205/24dga01