gms | German Medical Science

GMS Zeitschrift für Audiologie — Audiological Acoustics

Deutsche Gesellschaft für Audiologie (DGA)

ISSN 2628-9083

Advantages of direct acoustic streaming during telephone calls of bimodal hearing instrument users

Research Article

  • corresponding author Melanie A. Zokoll - Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
  • Markus Meis - Cochlear Deutschland GmbH & Co. KG, Hannover, Germany
  • Kirsten C. Wagener - Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
  • Silke Grober - Universitätsklinik für Hals-Nasen-Ohren-Heilkunde am Evangelischen Krankenhaus, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Exzellenzcluster „Hearing4All“, Oldenburg, Germany
  • Andreas Radeloff - Universitätsklinik für Hals-Nasen-Ohren-Heilkunde am Evangelischen Krankenhaus, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Exzellenzcluster „Hearing4All“, Oldenburg, Germany; Forschungszentrum Neurosensorik, Oldenburg, Germany
  • Horst Hessel - Cochlear Deutschland GmbH & Co. KG, Hannover, Germany

GMS Z Audiol (Audiol Acoust) 2023;5:Doc03

doi: 10.3205/zaud000029, urn:nbn:de:0183-zaud0000298

This is the English version of the article.
The German version can be found at: http://www.egms.de/de/journals/zaud/2023-5/zaud000029.shtml

Published: January 31, 2023

© 2023 Zokoll et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.


Abstract

The present study investigated whether direct bilateral streaming to cochlear implant (CI) and hearing aid provides bimodal hearing instrument users with an advantage when making phone calls via smartphone compared with previously used methods.

In two laboratory tests, 22 experienced CI listeners (mean age 50.8±18.6 years), fitted bimodally with a Cochlear N7 processor and a GN ReSound hearing aid, were tested with their own previously used, mostly unilateral method of phone use (reference condition) and with MFi (Made for iPhone) technology (test condition). In the test condition, signals were streamed directly from a smartphone to both hearing instruments. Speech intelligibility and subjectively perceived listening effort, each measured in interfering noise, as well as subjective ratings (including sound quality and usability via the System Usability Scale, SUS) were obtained as key measures of benefit.

The results showed a trend towards improved speech intelligibility, as well as a significantly lower listening effort with bilateral streaming compared to conventional telephone use in both the measurement and the questionnaire data. In terms of usability, bilateral streaming also resulted in better ratings compared to the reference.

Keywords: cochlear implant, MFi, streaming, telephoning with CI


Introduction

Making telephone calls is a great challenge for people with severe hearing loss. On the one hand, there are no visual cues (lip movements, gestures, facial expressions) that can be obtained by observing the speaker, and on the other hand, the voices are often unfamiliar and thus more difficult to understand than familiar voices [12], [18]. In addition, the quality of the transmitted speech signals can be affected. Analogue telephones, for example, are limited in their bandwidth (300–3,400 Hz). In addition, especially with mobile telephony, it is not uncommon for background noise on the part of the caller or called party to interfere with the conversation. As a consequence, many people with increasing hearing loss do not use the telephone at all, or only use the telephone with the help of others. The restrictions on communication can considerably impair the quality of life and lead to social isolation, especially for seniors who live alone. Cochlear implantation in postlingually deafened adults is indicated “when cochlear implants (CI) are likely to provide better hearing and speech understanding than hearing aids” [10]. Sufficient restoration of the ability to communicate also includes the ability to make telephone calls [24]. According to Lenarz [24], the latter is only possible with a (monosyllabic) speech understanding of >50% at a signal level of 65 dB.

Studies have shown that, thanks to the technological development of the systems, CI users often regain open speech understanding after their implantation, including the use of the telephone [24], and also reach for the telephone receiver or smartphone more frequently again [2]. Clinkard et al. [8] found that telephone use among CI patients is increasing compared to previous studies. In a study by Sousa et al. [32], telephone speech understanding was associated with quality of life. Patients who reported successful telephone use showed higher average scores in the psychological, social and global domains of one of the quality-of-life questionnaires used (Nijmegen Cochlear Implant questionnaire, NCIQ-P). Granberg et al. [15] see the successful use of telephony as an essential prerequisite for the ability to participate according to the International Classification of Functioning, Disability and Health (ICF).

There are several established ways to make phone calls with CIs. One method that has been used for a long time is telephoning with a telecoil. Less technology-savvy CI users also use the telephone loudspeaker, which is then usually aligned with the microphone on one of the audio processors. A more modern option is to use a Bluetooth assistive device to make phone calls, where the audio signals are transmitted from the mobile phone to the assistive device using Bluetooth and then on to the signal processor using Bluetooth Low Energy (BLE) or other 2.4 GHz wireless technology (e.g., CochlearTM Wireless Phone Clip, MED-EL AudioLink).

Hearing aid users have comparable options for making phone calls. For example, Bluetooth assistive devices connect to a Bluetooth-enabled phone and transmit the signal to the hearing instruments via an induction loop/antenna worn around the neck of the hearing aid wearer (e.g., ComPilot from Phonak, uDirect 3 from Unitron or Hansaton). Telephone clips (e.g., GN ReSound Unite telephone clip, Oticon ConnectClip) enable wireless telephoning via Bluetooth transmission between a Bluetooth-enabled smartphone and the telephone clip, and via 2.4 GHz wireless technology between the telephone clip and the hearing instruments.

A new addition is Bluetooth transmission without a Bluetooth auxiliary device, where the audio signals can be transmitted via Bluetooth Classic (e.g., with Phonak, Advanced Bionics) or BLE directly into the signal processor. The latter works, for example, for Apple devices (Apple Bluetooth Low Energy, ABLE) such as the iPhone, or newer Android mobile phones.

Streaming the phone signal (directly) can improve phone use for CI users compared to telecoil or acoustic coupling configurations [25], [38]. This has also been shown for hearing aid users [30]. During streaming, the sounds in the hearing aid wearer's environment (picked up by the hearing aid microphones) are usually mixed with the streamed signal at a low ratio. Picou and Ricketts [30] even turned off the hearing aid microphones in the streaming condition to exclude ambient noise that could diminish the advantage of streaming (e.g., [39]). The default for CI users is a mixing ratio between streaming signal and acoustic input that allows CI users to continue to hear what is going on around them (e.g., a 2:1 ratio [36]).

Bimodal patients, who use a CI and a hearing aid contralaterally, face very similar, sometimes even greater, problems when using the telephone, as two different, possibly incompatible systems are used at the same time. In recent years, cooperation between CI and hearing aid manufacturers has been established, which has improved the possibilities for interaction between the two systems.

At the Hörzentrum Oldenburg, in cooperation with the University ENT Clinic Oldenburg, it was investigated to what extent bilateral streaming brings an improvement to bimodally fitted patients compared to standard methods of telephoning (reference) in terms of audiological benefit (speech intelligibility and listening effort), sound quality, and user-friendliness. Special attention was paid to the influence of the coupling and the (different) paths of telephone signal and ambient noise. For the streaming condition, two streaming-capable hearing instruments were used, the Nucleus® 7 sound processor from Cochlear, and the LiNX 3D from GN ReSound, which allow bimodal patients to stream bilaterally to the CI and hearing aid.


Material and methods

Participants

Participants were recruited at the University Department of Otorhinolaryngology Oldenburg. A total of 22 adults (mean age 50.8±18.6 years, 9 female, 13 male) participated in this study. All participants were bimodal, i.e., they were fitted with a cochlear implant on one side and a hearing aid on the opposite side. One person did not wear their hearing aid on a regular basis. Prior to entering the study, participants were predominantly using the Cochlear Nucleus® 6 sound processor (N=16), with some using the Nucleus® 5 (N=5) or Nucleus® 7 (N=1). In the study, all participants received the Cochlear Nucleus® 7 sound processor. Further inclusion criteria were a CI use duration of at least two years, German as native language, no cognitive impairment, and speech intelligibility in quiet of more than 50% for the Oldenburg Sentence Test at 65 dB SPL (OLSA, [34]). The latter criterion was chosen with regard to the focus on telephoning, based on Lenarz [24].

Procedure and setup

The test protocol can be seen in Figure 1 [Fig. 1]. The study was organized in two appointments (T1 and T2). On the first appointment (T1, reference), apart from a provided test smartphone (Apple iPhone 7), phone use was investigated only with the equipment that the participants normally use in everyday life. Although the hearing instruments can in principle also be used with Android smartphones, only one smartphone model was used here to better control the influence of the smartphone on the measurements. The participants filled out questionnaires about their hearing systems and the corresponding accessories (Q1) as well as about reference telephony (Q2, see also Figure 1 [Fig. 1]). Based on this survey of the participants' current telephony status, the reference was determined individually. On the second appointment (T2, streaming), the participants were tested with the study hearing systems, GN ReSound LiNX 3D LT962-DRW or LT988-DW and Cochlear N7 processor. For this purpose, the study devices were first fitted individually. The participants then paired their devices with the test smartphone, which was subsequently used to make phone calls.

In both appointments, the participants were called by the test leader (TL) and adjusted the sound level of the smartphone to create a pleasant impression of loudness. They then evaluated the telephone conversation with the TL under laboratory conditions with regard to acceptance (Q3; e.g., speech intelligibility, listening effort, sound quality and loudness).

At the second appointment, an initial familiarisation phase took place, which included a walk with telephone conversations outside the laboratory. The listening situations included a quiet situation in the garden of the Hörzentrum Oldenburg, a busy street intersection and a restaurant. The interlocutor was either the test leader, a staff member of the Hörzentrum Oldenburg or someone from the participant's environment. Subsequently, the participants answered questionnaires on usability (System Usability Scale, SUS [7]; Q5), also in comparison to the reference (Q6; SUS_comp). The System Usability Scale is a generic instrument for assessing usability: ten items are scored from 0 to 4 (half of the items are reverse-scored), and the sum of the item scores is multiplied by 2.5, resulting in a total score between 0 and 100 points (see the scoring sketch below). This summed value can then be compared with other studies.
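As an illustration of the scoring rule just described, the following minimal sketch computes a SUS total score from the ten raw responses, assuming the standard 1–5 response format described by Brooke [7]; the function name and the example responses are hypothetical and not data from this study.

```python
def sus_score(responses):
    """Compute a System Usability Scale total score (0-100).

    responses: ten item ratings on the 1-5 Likert scale used by Brooke [7].
    Odd-numbered items are scored as (rating - 1); even-numbered items are
    inverted and scored as (5 - rating). The sum of the ten 0-4 item scores
    is multiplied by 2.5, yielding the 0-100 range mentioned in the text.
    """
    assert len(responses) == 10
    item_scores = [(r - 1) if i % 2 == 0 else (5 - r)  # i=0 -> item 1 (odd)
                   for i, r in enumerate(responses)]
    return 2.5 * sum(item_scores)

# Hypothetical example: a mostly positive rating pattern
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))  # -> 87.5
```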

Then, in both sessions, the participants completed a telephone listening task in a quiet indoor room, in which a female voice read a text from "Nils Holgersson" (audio book condition). After the participants had the opportunity to adjust the volume of the test smartphone again, they listened to this monologue and evaluated the telephony with this female speaker (Q4, analogous to Q3). Then, with the same smartphone volume setting, audiological tests of speech intelligibility and listening effort followed in randomised order.

Telephone speech reception thresholds (SRTs) in noise were measured in a sound-attenuated room with the female version of the Oldenburg Sentence Test (OLSAf, [1], [35]) in open response format. The speech signal was presented via the test smartphone. For this purpose, the test smartphone was called by a telephone (Siemens OpenStage 15) connected to a PC running the Oldenburg Measurement Applications (Hörzentrum Oldenburg gGmbH, Oldenburg). The noise (Olnoisef, [35]) was presented via a loudspeaker (Mackie HR824) from 0° at a distance of 1.3 m. The noise was thus only present on the receiver side and not on the transmitter side. Possible effects of the transmitter-side transmission path on the signal and the signal-to-noise ratio were not investigated. The speech signal of the OLSAf had the same RMS level and the same speaker as the "Nils Holgersson" audio book, with which the level of the test smartphone had been set to an individually comfortable level (see above). The noise presented directly to the participant was calibrated to 65 dB SPL.

Since the absolute presentation level of the streamed speech signal was set individually once by each participant, and since this absolute presentation level via CI, and thus also the combined presentation level of the streamed speech signal from CI and hearing aid, is unknown, the term i-SNR (individual SNR) is used in the following for the signal-to-noise ratio of the individually set speech presentation level relative to the directly presented noise (expressed as a formula below). An i-SNR of 0 dB means that the speech signal, which was set as comfortably loud in a quiet environment, is reproduced in the direct noise (presentation level 65 dB SPL).

Before testing in noise, the initial training recommended for the OLSA (two test lists of 20 sentences each) was carried out; in addition, the speech reception threshold in quiet was determined (one test list, procedure A1 according to [6]) in order to estimate whether the speech levels chosen by the participants in the two appointments were equivalent. This was followed by the actual SRT measurement in noise with one test list and, additionally, an extended list of 30 sentences. With the longer test list, the speech intelligibility curve was determined in addition to the speech reception threshold (procedure A2 according to [6]).
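One way to state the i-SNR definition given above as a formula (a notational restatement only; the symbol names are ours, not taken from the original protocol):

$$\text{i-SNR} = L_{\text{speech}} - L_{\text{comfort}}\ \ [\mathrm{dB}],$$

where L_speech is the presentation level of the streamed speech during the adaptive measurement and L_comfort is the individually chosen, comfortably loud speech level in quiet; the noise is fixed at 65 dB SPL. Because L_comfort is unknown in absolute terms, the true acoustic SNR equals the i-SNR plus an unknown offset (L_comfort − 65 dB SPL), so i-SNR values are interpretable relative to each other but not absolutely.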

To obtain listening effort ratings for telephoning, the adaptive categorical listening effort scaling method (ACALES, [22]) was used in the same experimental set-up with speech and noise signals from the OLSAf. The ACALES method allows for measuring the mental 'listening load', i.e., the listening effort that a person must expend to understand speech in noise. It includes a short training phase followed by a test list at different signal-to-noise ratios (each sentence is presented three times, whereby the rating can already be given after the first presentation). The task of the participants is to rate each of the presented sentences in terms of listening effort using a 13-step categorical rating scale with seven labelled categories from "effortless" to "extremely effortful" (effort scale categorical units, ESCU, from 1–13), with an additional category of "noise only". The corresponding question was "How much effort does it require for you to follow the speaker?". The answer is given via a touch screen. The procedure is adaptive in the sense of an automatic individual calculation of the SNR variations to be tested, which are generated by changing the speech level. The result of the measurement is an individual listening effort function that covers the SNR range from "effortless" to "extremely effortful" (for more details on the procedure, see [22]).

Hearing aid and CI programming

In the first appointment, the hearing systems were used with the programmes and settings used by the patients in everyday life. The technical functionality of the systems was checked by the test leaders before the test was carried out. Standard fitting procedures were used for programming the two devices at the beginning of the second appointment. The GN ReSound LiNX 3D LT962-DRW and LT988-DW hearing aids were fitted with only one programme (P1), corresponding to a common everyday programme (All-Around). The fitting followed the suggestions given by the GN ReSound software (ReSound SmartFit) with the input parameters 'audiogram' and 'experienced hearing aid user (used to WDRC systems)'. The initial fitting was made and, if necessary, the overall gain was increased or decreased. No further fine tuning was done. Hearing aids were selected according to the hearing loss on the hearing aid-fitted ear of the participants. The coupling was also adjusted accordingly, following the recommendations of the fitting software. A clinical engineer transferred the MAP (personalised stimulation parameters) last used with the participant's own CI processor to the Nucleus® 7 test processor using the associated fitting software from Cochlear (Nucleus® Custom Sound Fitting Software). The default setting was used for streaming. In this setting, the streamed signals from the smartphone are presented together with the signals from the microphones on the speech processor in a mixing ratio of 2:1 (see the level calculation below).
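Expressed in level terms, and assuming the 2:1 mixing ratio refers to signal amplitude, the microphone path is attenuated relative to the streamed signal by

$$20 \cdot \log_{10}(2) \approx 6\ \mathrm{dB},$$

which is the 6 dB noise attenuation referred to again in the Discussion.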

Data analysis and statistics

The ACALES procedure automatically fits a function with two slopes to the individual listening effort ratings (ESCUs), without using the rating category "noise only": one slope describes the course between 1 and 7 ESCU and the second describes the course between 7 and 13 ESCU. The crossing point is smoothed between 5 and 9 ESCU (for details, see [22]). The underlying SNRs and associated ESCUs can be read out. The mean functions for listening effort were derived by fitting the same function with two slopes to all individually measured listening effort ratings for the two telephony conditions (a simplified sketch of this type of fit is shown below).
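The following sketch illustrates the type of two-slope fit described above in simplified form, i.e., as a piecewise-linear function without the smoothing of the crossing point; the exact parameterisation is given in Krueger et al. [22], and the data points in the example are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def effort_function(snr, snr_at_7, slope_low, slope_high):
    """Simplified two-slope listening effort function: ESCU as a function of SNR.

    snr_at_7   -- SNR (dB) at which the rating equals 7 ESCU (the break point)
    slope_low  -- slope (ESCU/dB) on the low-SNR side (ratings 7..13)
    slope_high -- slope (ESCU/dB) on the high-SNR side (ratings 1..7)
    """
    escu = np.where(snr < snr_at_7,
                    7.0 + slope_low * (snr - snr_at_7),
                    7.0 + slope_high * (snr - snr_at_7))
    return np.clip(escu, 1.0, 13.0)

# Hypothetical (SNR, ESCU) ratings of one participant
snrs = np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0])
ratings = np.array([12.0, 10.0, 7.0, 5.0, 3.0, 2.0])

params, _ = curve_fit(effort_function, snrs, ratings, p0=(0.0, -0.5, -0.3))
print(params)  # [SNR at 7 ESCU, slope below break, slope above break]
```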

A non-parametric statistical test, the Wilcoxon signed-rank test, was used to compare questionnaire responses for the reference and streaming conditions. After verification of normality (Shapiro-Wilk test, p>0.05), the audiological data were analysed for significant differences using parametric tests (paired t-test, one-sided, or two-factor repeated-measures (RM) ANOVA with Bonferroni correction for multiple comparisons); a code sketch of this workflow follows below. In the ACALES procedure, the seven named (i.e., labelled) assessment categories from ESCU 1 "effortless" to ESCU 13 "extremely effortful" were evaluated (ESCU 1, 3, 5, 7, 9, 11 and 13) and analysed for significant differences between conditions.
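In Python, the core of this analysis could look roughly as follows (a minimal sketch with scipy.stats; the paired arrays are hypothetical placeholders, and the RM-ANOVA, which would require an additional package such as statsmodels or pingouin, is omitted).

```python
import numpy as np
from scipy import stats

# Hypothetical paired values, one per participant and condition
reference = np.array([-1.0, 2.5, -3.0, 0.5, -2.0, 1.0])
streaming = np.array([-4.0, -1.5, -6.0, -2.0, -5.5, -0.5])

# Ordinal questionnaire ratings: Wilcoxon signed-rank test
w_stat, p_wilcoxon = stats.wilcoxon(reference, streaming)
print(f"Wilcoxon: W = {w_stat:.1f}, p = {p_wilcoxon:.4f}")

# Audiological data: check normality of the paired differences first
_, p_shapiro = stats.shapiro(reference - streaming)
if p_shapiro > 0.05:
    # one-sided paired t-test (lower SRTs expected with streaming)
    t_stat, p_t = stats.ttest_rel(reference, streaming, alternative='greater')
    print(f"paired t-test: t = {t_stat:.3f}, one-sided p = {p_t:.4f}")
```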


Results

One participant completed T1 (reference) but not T2 (streaming); another was identified as an outlier (SRT in the streaming condition more than 2 SD above the mean) and showed inconsistent data between T1 and T2 as well as between questionnaire scores and the OLSA and ACALES scores in T2. This indicated a faulty setting of the transducer within the measurement chain for the adaptive measurements. Both participants were excluded from further data analysis. The results shown below are based on data from 20 individuals. The mean (±SD) time between T1 and T2 was 43.6±40.9 days.

Questionnaire data

Reference telephony (Q2)

The Q2 questionnaire revealed that 75% of the participants use landline telephones in everyday life and only 25% use mobile phones. In terms of the ear or side used to make phone calls, participants reported using the hearing aid side in 65% of cases and the CI side in 35%. None of the participants regularly used Bluetooth connections and streaming or the telecoil to make phone calls, so these types of telephony did not occur in the reference. In most cases, the phone is placed on the hearing instrument's microphones or on the earpiece (13 participants). Others use the loudspeaker of the phone/smartphone (3 participants) or take off the hearing system (hearing aid) and make conventional phone calls with the receiver of the phone on the pinna (4 participants). On average, participants make one phone call per day (median between 1 and 2–3 calls per day).

Acceptance (Q3 and Q4)
Volume of the smartphone

The participants adjusted the level of the smartphone in the initial phase of each telephone conversation (with the test leader, TL, or the audio book speaker, AB) to a level that was comfortable for them individually. The resulting smartphone level after adjustment differed significantly between the reference and streaming conditions (Wilcoxon test, ZTL=–3.733, N=20; ZAB=–3.638, N=19; both p<0.001). The smartphone volume was significantly lower for direct streaming than for the reference condition in both situations (median 10.5 of 16 volume steps for the conversation with the TL and 12 of 16 for the audio book, compared with full scale in the reference; see also the Audiological data section).

Loudness

When asked about subjective loudness, the participants rated the loudness when talking on the phone with the test leader or listening to the audio book as sufficient in the median; this was to be expected, since they had initially set the volume individually to a comfortably loud level. However, two of the participants only achieved a low loudness (very quiet or too quiet) even when the smartphone was set to full scale. For one person, this was the case at both appointments; for the other, only after switching to the study devices. In the latter case, the study hearing aid had not been able to compensate for the hearing loss, and on the CI side the level could not be increased further due to the onset of facial nerve stimulation. Regarding their own voice during the telephone conversation with the test leader, participants perceived the loudness as lower when streaming directly (ZTL=–2.124, p<0.05, N=20). When listening to the audio book via telephone, the loudness of one's own voice was not queried.

Sound of voice

The sound of their own voice as well as the voices of the test leader and the audio book speaker were perceived as pleasant by the participants in both measurement conditions in the median. However, a direct comparison of the voice sound of the test leader and the audio book speaker on the basis of contrast pairs showed a significant difference in terms of clarity (contrast pair "clear – unclear") and proximity (contrast pair "distant – close") between the two conditions. According to the median, the voices during streaming were perceived as clearer and closer than in the reference (clarity: ZTL=–2.195, ZAB=–2.273; proximity: ZTL=–3.218, ZAB=–3.213; all p<0.05, N=19). For the audio book, the sound in the streaming condition also tended to be perceived as more pleasant (ZAB=–1.874, p=0.061, N=19) and more voluminous (ZAB=–1.941, p=0.052, N=19). The person who did not achieve adequate speech understanding after switching to the test devices could not make this assessment.

Satisfaction with the hearing systems

Participants showed greater overall satisfaction with streaming compared to the reference without streaming. The difference was one scale unit for both the telephone conversation and the audio book: in both situations, the reference condition was rated median=4 and the streaming condition median=5. This difference was significant in each case (ZTL=–2.380, N=20; ZAB=–2.073, N=19; both p<0.05, Figure 2A [Fig. 2]).

Speech intelligibility

Subjective speech intelligibility when talking on the phone in the relatively quiet laboratory situation was slightly better with direct streaming than in the reference, both when talking to the test leader and when listening to the audio book, and changed by one scale unit in the median (from 6, "very much", to 7, "all", for talking on the phone with the test leader, and from 5, "a lot", to 6, "very much", for the audio book). However, the difference was not significant. Two of the participants had no speech understanding when talking on the phone, although in one case this only occurred after switching to the study devices (see above).

Listening effort

Subjective listening effort in quiet was also assessed for the two telephone conditions. Median listening effort scores decreased by approximately 2.0 to 2.5 scale points towards less effort for direct streaming compared to the reference condition (Figure 2B [Fig. 2]). For the audio book, this difference was significant (ZAB=–2.549, p<0.05, N=19).

Audiological data
Speech intelligibility

Speech intelligibility was tested with the OLSAf [1], [35] by presenting its speech signal as a phone call via the smartphone. The volume of the smartphone had been set in advance to a comfortable level using the audio book and was, in the median, 16 bars (full scale) for the reference condition and 12 bars for the streaming condition. This difference was significant (see the section on smartphone volume). However, the speech reception thresholds in quiet were similar, with mean values of 52.3±5.6 dB for the reference and 51.0±2.7 dB for the streaming condition (N=18). The smartphone volume adjustment therefore seems to have resulted in comparable speech presentation levels for the two conditions.

Speech intelligibility in noise was also tested using the OLSAf. To obtain SRTs in noise, speech was presented via the smartphone and noise was presented via a loudspeaker from the front at a level of 65 dB SPL. Only measurements that converged meaningfully were used; SRTs for reference and streaming could thus be determined for 17 participants. For phoning in noise, the streaming condition led to significantly better speech intelligibility than the reference condition (paired t-test, t(16)=2.284, p<0.05, N=17; SRT measurements with simultaneous slope estimation). Figure 3A [Fig. 3] shows mean SRTs in noise for measurements with simultaneous estimation of the slope. These were –1.7±7.5 dB and –5.7±6.0 dB i-SNR for the reference and streaming conditions, respectively; the mean difference was 4.0±7.3 dB i-SNR. However, not all participants benefited from the streaming condition: in six of the 17 participants, either little change or a worsening of the SRT was observed in the streaming condition. In addition, the streaming condition was found to lead to significantly steeper slopes of the speech intelligibility curves (paired t-test, t(16)=–2.961, p<0.01, N=17). Figure 3B [Fig. 3] shows the mean speech intelligibility functions for the reference and streaming conditions obtained from the threshold measurements with simultaneous slope estimation by fitting a logistic function to the mean values for speech reception threshold (SRT) and slope at threshold (s50). This has the form

$$\mathrm{SI}(x) = \frac{1}{1 + e^{\,4\,s_{50}\,(\mathrm{SRT} - x)}} \qquad \text{(Equation 1)}$$

where SI(x) is the proportion of correctly repeated words at an i-SNR of x dB, SRT is the 50% point, and s50 is the slope at the SRT expressed as a fraction per dB (cf. [6]).

The mean slope was 7.5±3.2 %/dB for the reference and 11.4±5.5 %/dB for the streaming condition; the minima were 2.0 and 3.0 %/dB and the maxima 12.0 and 22.0 %/dB, respectively. Some of the obtained slopes were thus very flat, and the corresponding SRTs were of limited reliability. If participants with flat speech intelligibility functions (slope <5 %/dB) were excluded from the analysis, the mean SRTs in noise for the remaining twelve participants were –4.0±6.8 dB i-SNR and –7.8±4.4 dB i-SNR for the reference and streaming conditions, respectively. This difference just failed to reach significance (t(11)=1.633, p=0.066, N=12). Individually, there were proportionally more participants (5/12) in this subgroup whose SRT was unchanged or worsened in the streaming condition. However, the significant increase in the slope of the speech intelligibility functions for the streaming compared to the reference condition was also found for this subgroup (t(11)=–2.835, p<0.01, N=12) and averaged 3.6 %/dB.
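To make the reported group means concrete, the following sketch evaluates Equation 1 with the mean SRTs and slopes given above (a minimal illustration; the function form follows [6], and the printed values are model predictions, not measured data).

```python
import numpy as np

def intelligibility(isnr_db, srt_db, s50):
    """Logistic speech intelligibility function (Equation 1).

    srt_db: speech reception threshold (50% point) in dB i-SNR
    s50:    slope at the SRT as a fraction per dB (e.g., 0.075 for 7.5 %/dB)
    """
    return 1.0 / (1.0 + np.exp(4.0 * s50 * (srt_db - isnr_db)))

# Mean parameters reported above
reference = dict(srt_db=-1.7, s50=0.075)   # 7.5 %/dB
streaming = dict(srt_db=-5.7, s50=0.114)   # 11.4 %/dB

for isnr in (-10.0, -5.0, 0.0):
    si_ref = intelligibility(isnr, **reference)
    si_str = intelligibility(isnr, **streaming)
    print(f"i-SNR {isnr:+.0f} dB: reference {si_ref:.0%}, streaming {si_str:.0%}")
```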

Adaptive categorical listening effort scaling

In addition to the questionnaire items on listening effort, participants completed ACALES [22] to obtain listening effort ratings (ESCUs) as a function of SNR. As with the OLSAf in noise, speech was presented via the smartphone and noise was presented via a loudspeaker from the front at a level of 65 dB SPL. The speech level was also identical to that for the OLSAf (i.e., adjusted to a comfortable level when listening to the audio book, see above).

Full ACALES data were obtained for 10 out of 20 participants. ACALES was very demanding for most participants, especially in the reference condition, and was discontinued in some cases because the maximum i-SNR was reached while the rated effort was still too high (in such cases, ACALES refrains from further increasing the i-SNR). The i-SNRs for the different assessment categories (ESCUs) differed significantly (two-way RM-ANOVA, FESCU(6,54)=78.976, p<0.001). The resulting averaged ACALES data showed a significant decrease in listening effort with streaming compared to the reference (FT(1,54)=6.068, p<0.05, see Figure 4 [Fig. 4]). Figure 4 [Fig. 4] shows that especially in the positive SNR range relevant to everyday life, the streaming condition showed a large benefit, which averaged about 6 dB i-SNR for ESCU 3. That is, with streaming, the ambient noise can on average be 6 dB louder than without streaming while still leading to a listening effort of 'very little effort'.

However, ACALES showed strong individual differences. For some participants, there was no decrease in listening effort with streaming, but rather no effect or even the opposite effect. For ESCU 7 (moderate effort), 7/10 showed an improvement of ≥4 dB i-SNR, 1/10 showed no improvement and 2/10 showed a deterioration of up to 7 dB i-SNR. At ESCU 1 (effortless), the same 7/10 showed an improvement of ≥5 dB i-SNR, the person with no improvement at ESCU 7 showed a slight improvement of just under 1 dB i-SNR and the remaining 2/10 again showed a deterioration of up to 7 dB i-SNR.

Usability
System Usability Scale (SUS, Q5)

The System Usability Scale according to Brooke [7] was completed by the participants to get an impression of the usability of telephoning with streaming. Among other things, participants were asked whether they would use the new system frequently or whether they found it cumbersome. The individual scores were averaged to obtain an overall mean SUS score of 85.3 points, indicating that the usability of smartphone telephony with direct streaming can be classified as 'excellent' according to Bangor et al. [5].

Usability compared to the reference (SUS_comp, Q6)

To assess the usability of streaming telephony compared to the reference telephony, a questionnaire was used to compare the two in terms of the complexity of the system, the ease of use, the usability without professional help, the set of system features and the intuitiveness of use. These aspects of usability are similar to those covered in the previous questionnaire (Q5). The resulting ratings of 19 of the 20 participants are shown in Figure 5 [Fig. 5]; one person could not rate streaming with the study devices because sufficient speech understanding was not achieved after switching to them. Regarding all aspects, the participants preferred direct streaming to the reference in terms of usability.


Discussion

Reference condition

The questionnaire on reference telephony in the present study revealed that 75% of the participants mainly use landline telephones and only 25% use mobile phones. In general, CI users seem to find telephoning with a mobile phone more challenging than telephoning with a landline phone. In a questionnaire study by Anderson et al. [2], 71% of CI users felt able to use landline phones to some extent and only 54% used mobile phones (multiple responses possible). According to Anderson et al. [2], talking about familiar topics with family members was the easiest condition, and recognising voices was also easier over landlines. Although they made few or no calls on mobile phones in their daily lives, most of the participants in the present study were well able to hold a conversation on a mobile phone, at least in a quiet situation (median score of at least 5, "a lot", on the rating scale from "nothing" (1) to "everything" (7) regarding speech understanding).

The reference condition consisted of very heterogeneous, individual solutions, so the difference values between the reference and streaming conditions scattered more than would be expected with a laboratory-defined reference condition. The reference condition was deliberately chosen to be diverse, as this allowed the aspect of ecological validity to be considered. Anecdotally, it is reported time and again that persons with profound hearing loss bordering on deafness sometimes use very imaginative workarounds for telephony in everyday life, because easy-to-use telephone systems are often not available to them. Overall, the data show that the MFi technology used in this test led to a significant improvement, so it is to be expected that this or a comparable technology will make such troublesome workarounds unnecessary.

Sound quality

Sound quality was perceived to be similar in both conditions. However, using contrast pairs, it was found that the voices of the interlocutors were perceived as clearer and closer with direct streaming than in the reference. This may be because the reference telephony was predominantly conducted through the microphone of only one of the hearing systems, whereas streaming telephony always reaches both hearing systems. The use of both ears may shift the sound of voices to a more central and thus closer perception. Balfour and Hawkins [4] investigated the hearing of monaural and binaural hearing aid users with symmetrical mild or moderate hearing loss using sound quality dimensions (brightness, clarity, volume, loudness, proximity, overall impression, softness, and spaciousness). Their results showed a clear binaural preference for all eight sound quality dimensions regardless of listening environment. Binaural preferences were strongest for overall impression, volume, and spaciousness. In a study with bimodal participants [9], the presented bimodal listening situation was also perceived as more voluminous, less tinny, and less unpleasant than CI alone. Differences in the evaluation between the reference and streaming conditions for some of the contrast pairs in the present study may thus well be due to a different perception caused by the signal presentation, which was mostly unilateral in the reference and bilateral in the streaming condition.

Usability

The usability of the streaming condition was rated better than the reference and classified as 'excellent'. This is remarkable, as most participants had never used streaming or Bluetooth in the context of telephony before. Only the 'ease of use without professional help' was rated slightly worse, i.e., not all participants would trust themselves to use the system without help. A possible reason for this is the insecurity of some participants in dealing with smartphones, e.g., in view of the fact that the smartphone and the hearing systems must first be paired. In general, this result gives hope that streaming while talking on the phone can achieve good acceptance among bimodal patients, given appropriate instruction by clinical audiologists.

Speech intelligibility

A well-known difficulty in studies with bimodal fittings is that the presentation level on the CI side cannot be measured acoustically; in the present study, the speech presentation level at or in the ear was therefore unknown for both the reference and the streaming condition. Level data for the speech signal are therefore not absolute (see the definition of i-SNR under Procedure and setup) and must be interpreted with caution. However, by adjusting the speech presentation level for both conditions, the signals were set to be equally loud (at a comfortable level) at least in relation to each other, so that SNR values also became interpretable relative to each other. In general, the loudness category 'pleasant' seems to be quite robust and shows only a low intraindividual dispersion in studies [16].

Streaming the speech signals into both hearing systems only slightly increased subjective speech intelligibility in the quiet laboratory situation, as the volume of the smartphone had been adjusted (turned down) in advance by the participants. Possible effects due to better coupling during streaming were thus compensated for. If this adjustment had not been made, the difference in subjective speech intelligibility would probably have been more pronounced. However, the speech intelligibility tests showed that streaming did improve speech intelligibility for a telephone call in a noisy situation. The speech reception threshold in quiet is significantly influenced by the absolute loudness of the speech signals, while thresholds in noise are influenced by the signal-to-noise ratio and are relatively independent of the absolute loudness over a wide range, as long as the signals can be heard well [33]. The signal-to-noise ratio can be altered by changing the level of either the speech signal or the noise. Although the volume of the smartphone was chosen differently, the preceding adaptive speech reception thresholds in quiet for the two conditions (i.e., reference and streaming) indicate similar, comfortable speech presentation levels in both conditions. Therefore, despite the individual adjustment of the speech presentation level by the participants based on the audio book presentation, it can be assumed that the better speech reception thresholds in noise did not result from a significantly higher speech presentation level. Rather, this shows the influence of streaming on the noise: external background noise can be better attenuated and/or faded out during telephone calls with streamed speech signals. The N6 processor, which most of the participants wore at the reference appointment, has similar signal pre-processing to the N7 processor, which is primarily intended to improve speech intelligibility in noise [36]. Warren et al. [36] showed that standard speech intelligibility in quiet and noise (i.e., when speech and, if applicable, noise are presented over loudspeakers) was not significantly different between the two signal processors, while direct streaming of the signals provided an improvement there as well compared to acoustic telephony.

The results of the present study are consistent with findings of Marcrum et al. [25], in whose work streaming significantly improved sentence recognition and reduced hearing difficulties compared to telecoil or acoustic coupling configurations. The results of a study by Wolfe et al. [38] also indicate that the use of wireless assistive listening technology improves speech intelligibility (for words, in this case) via mobile phone in quiet and in noise compared to performance with an acoustic mobile phone in a group of adult CI users. They attributed the improvement in speech intelligibility in noise to, among other things, an improved signal-to-noise ratio due to attenuation of the signal transmitted acoustically through the sound processor microphones and affected by the noise, as well as to the presumably more robust signal during streaming compared to acoustic transmission. This also applies to the present study, in which the default setting for streaming was used, i.e., the streamed signals from the smartphone were presented together with the signals from the microphones on the speech processor in a mixing ratio of 2:1. Given the adjusted speech presentation level, this corresponds to a noise level attenuated by only 6 dB in the streaming condition. The sound of the streamed audio book voice was rated as clearer in the median, indicating a reduction of distortion (by bypassing the smartphone loudspeaker).

The bilateral presentation of the signals in the bimodal streaming condition may also have contributed. In the reference condition of the present study, phone calls were predominantly made with one ear (the smartphone was held to one of the two ears or speech processors). Monaural versus binaural SRTs in noise can differ by about 2 dB SNR in favour of the binaural situation in sentence tests such as the OLSA in normal-hearing participants (unpublished data). When comparing unilateral versus bilateral CI users, Laback et al. [23] found a small but non-significant improvement of about 0.5 dB with the OLSA (S0N0 condition). Other studies also indicate that speech understanding in noise improves with bilateral presentation or fitting compared to unilateral presentation [3], [13], [14], [26]. In the case of a bimodal fitting, hearing is provided by two different modalities that deliver complementary information to a certain extent, for example in the frequency range. This is consistent with the finding that the sound of the audio book voice in this study tended to be perceived as more pleasant and voluminous in the streaming condition than in the reference condition. Hoppe et al. [19] found an improvement in SRTs of between 0.8 and 1.8 dB for patients in the bimodal condition relative to the better fitted ear, depending on the underlying hearing loss on the hearing aid side.

Nevertheless, there also seem to be opposite effects (i.e., the monaural result is better than the binaural one), the so-called binaural interference [21]. Based on the mixing ratio of the environment (noise) and the streamed speech signal, an improvement of the SRT by at least 6 dB would theoretically be possible, but in the present study it was only about 4 dB on average. The fact that the improvement in the signal-to-noise ratio with a bimodal fitting is not necessarily reflected linearly in the average speech intelligibility could be partly attributed to effects that interfere with the binaural integration of the information from both modalities, especially since in the present study the two hearing systems were not further fine-tuned (to each other) during the fitting. In addition to the differences in the frequency range, differences in the processing latencies between the two sides (up to 9 ms [39]), for example, cannot be ruled out. The participants also had little time to acclimatise to the new systems. In about 40% of the cases, this may explain why especially participants with steep speech intelligibility functions and relatively good SRTs in the reference condition did not benefit from the binaural situation but, on the contrary, deteriorated.

As already emphasised by Dietz et al. [11], there is a tendency to underestimate the hearing benefit based on the change in the SRT alone if no statements can be made about the slope of the speech intelligibility curve. Therefore, to interpret the participants' improvement, it is helpful to additionally consider the slope of the speech intelligibility function. Various studies with CI users observed a tendency towards flatter speech intelligibility functions in patients with higher SRTs [11], [17], [28]. According to Dietz et al. [11], this should be taken into account when considering the clinical improvement of patients' speech understanding. The present SRT measurements with simultaneous estimation of the slope showed that the streaming condition also led to significantly steeper slopes of the speech intelligibility curves. In this adaptive procedure, convergence is performed in parallel, once to 20% and once to 80% speech intelligibility [6]. In the streaming condition, the threshold for 80% intelligibility is reached at a lower SNR than in the reference condition, while the 20% threshold improves slightly less; the function thus "tilts" and becomes steeper (a worked example follows below). In addition, this gives an indication that the spread in speech intelligibility between different participants in the range of good speech understanding becomes smaller due to the possibility of streaming to both ears. Thus, participants become more similar in their ability to communicate via mobile phone thanks to streaming technology.
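This "tilting" can be illustrated with Equation 1. Solving it for the i-SNR at which a given intelligibility p is reached gives (our rearrangement of Equation 1):

$$x_p = \mathrm{SRT} - \frac{\ln(1/p - 1)}{4\,s_{50}}$$

With the mean values reported above (reference: SRT = –1.7 dB, s50 = 0.075/dB; streaming: SRT = –5.7 dB, s50 = 0.114/dB), the 80% point improves from about +2.9 to –2.7 dB i-SNR (5.6 dB), whereas the 20% point improves only from about –6.3 to –8.7 dB i-SNR (2.4 dB): the streaming function is shifted more at the top than at the bottom and is therefore steeper.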

Contrary to the conclusion of Hey and colleagues [17], we did not observe that low SRTs systematically cause adaptive measurements not to converge well, at least not with the procedures used here. However, SRTs obtained in the range of a shallow slope have a lower test-retest reliability than SRTs obtained in the range of a steep slope [17] and must therefore be viewed critically. In our study, after excluding participants with flat speech intelligibility functions, we still saw an average improvement of about 4 dB in speech intelligibility with streaming, but this difference was no longer significant.

Listening effort

This study showed statistically demonstrable improvements with streaming both for the subjective ratings via the questionnaire and for the adaptive ACALES procedure. These effects were somewhat weaker for the questionnaire, presumably because this procedure is less sensitive: ratings have to be averaged over longer periods of time (retrospective or cumulative bias) and the conditions could be varied less. The adaptive ACALES procedure accordingly showed greater differences. These are mainly found in the range of positive i-SNRs. As the i-SNR decreases, the difference between reference and streaming decreases, which can also be attributed to reduced audibility of the (telephone) speech signal.

During development of the ACALES method [22], users were asked what exactly they understood by listening effort. In contrast to physiological and cognitive processes, whose underlying mechanisms are more implicit, users described the evaluation of different SNR conditions as being about pushing away or blocking out irrelevant information; it is thus not so much a percept as a later conscious, explicit evaluation of mental effort. If such auditorily mediated efforts occur more frequently, the result is fatigue (cf. McGarrigle et al. [27]). The results shown here indicate that mental effort could be significantly reduced by streaming technology, especially in the everyday-relevant range of positive SNRs. These differences were not merely statistically significant: at 0 dB i-SNR, listening effort was reduced by an average of three scale points (of thirteen) from 'clearly effortful' in the reference condition to 'little/moderately effortful' in the streaming condition, so that clinical relevance can also be assumed on the user side. Following McGarrigle et al. [27], it can be assumed that acute, strenuous telephone situations can cumulatively manifest themselves as a fatigue reaction over the course of (professional) everyday life, and that the MFi technology used here may lead to reduced fatigue. However, the scientific evidence in this regard is still unsatisfactory [31] and requires further research efforts.

A study by Winneke et al. [37] showed that the results of a statically performed ACALES procedure, which revealed reduced listening effort for one of the investigated microphone directionalities of hearing aids, were accompanied by reduced alpha-band activity (9–12 Hz) in EEG measurements for the same microphone directionality. In the present study, too, it can be assumed that the subjectively recorded listening effort is associated with physiological correlates and that the frequent use of streaming technology may reduce after-effects such as fatigue resulting from the physiological stress and adaptation of the auditory system during stimulus processing. Fatigue-related impairments can affect, for example, cognitive processing abilities (e.g., attention, processing speed, memory) [20]. In future studies, physiological measurements should also be carried out in parallel, e.g., in people who use the telephone intensively in their daily work, to corroborate the evidence of mental stress and its after-effects, in the sense of counteracting it by mobilising cognitive resources.

The aspect of usability is particularly important from a health policy perspective. According to the ICF taxonomy and the considerations of Granberg et al. [15], establishing functional ability with regard to participation, as in conversations with one or more persons, is essential in the provision of hearing systems and their accessories, as realised here with MFi technology. In the ICF taxonomy, participation, in this case conversation with one person, is an essential outcome variable and is moderated by so-called context factors. One contextual factor is the provision of hearing systems and accessories: if these technologies are easy to use, i.e., the barriers are low, the willingness to use them increases, which may make participation possible or more likely. For the occupational context (the mean age of this sample was about 50 years), not only is successful participation essential, but also the avoidance of mental hazards, which are now receiving more regulatory attention in Germany ("risk assessment of mental stress").


Conclusions

Direct bilateral streaming can be a relevant help for bimodal listeners when using a smartphone. This is particularly evident with regard to speech intelligibility in noise and listening effort, while sound quality does not deteriorate compared to previously used alternatives in the study population. The usability of telephoning with direct (bilateral) streaming is rated high and seems to be better than most alternatives used by bimodal listeners so far.

Streaming helps bimodal listeners to make phone calls with their smartphones and has the potential to increase their ability to participate in their private and professional lives. Given the reduction in mental stress and its after-effects together with improved communication ability, it can be assumed that the technology tested here is also relevant in health-economic terms for payers and companies, particularly in communication-intensive professions.


Notes

Conflicts of interest

The authors declare that they have no conflicts of interest related to this article.

The authors Hessel H and Meis M are employees of Cochlear Deutschland GmbH & Co. KG (Meis M since 08/2022).

Funding

This work was supported by funding from Cochlear Deutschland GmbH & Co. KG.

Acknowledgements

Many thanks to the patients and the team of the University Department of Otorhinolaryngology at the Evangelisches Krankenhaus Oldenburg, Carl von Ossietzky University Oldenburg!


References

1.
Ahrlich M. Optimierung und Evaluation des Oldenburger Satztests mit weiblicher Sprecherin und Untersuchung des Effekts des Sprechers auf die Sprachverständlichkeit [Bachelor's thesis]. Oldenburg: Carl von Ossietzky Universität Oldenburg; 2013.
2.
Anderson I, Baumgartner WD, Böheim K, Nahler A, Arnoldner C, D'Haese P. Telephone use: what benefit do cochlear implant users receive? Int J Audiol. 2006 Aug;45(8):446-53. DOI: 10.1080/14992020600690969
3.
Asp F, Mäki-Torkko E, Karltorp E, Harder H, Hergils L, Eskilsson G, Stenfelt S. Bilateral versus unilateral cochlear implants in children: speech recognition, sound localization, and parental reports. Int J Audiol. 2012 Nov;51(11):817-32. DOI: 10.3109/14992027.2012.705898
4.
Balfour PB, Hawkins DB. A comparison of sound quality judgments for monaural and binaural hearing aid processed stimuli. Ear Hear. 1992 Oct;13(5):331-9. DOI: 10.1097/00003446-199210000-00010
5.
Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an adjective rating scale. J Usability Stud. 2009;4:114-23.
6.
Brand T, Kollmeier B. Efficient adaptive procedures for threshold and concurrent slope estimates for psychophysics and speech intelligibility tests. J Acoust Soc Am. 2002 Jun;111(6):2801-10. DOI: 10.1121/1.1479152
7.
Brooke J. SUS – a quick and dirty usability scale. In: Jordan PW, Thomas B, Weerdmeester BA, McClelland AL, editors. Usability Evaluation in Industry. London: Taylor and Francis; 1996.
8.
Clinkard D, Shipp D, Friesen LM, Stewart S, Ostroff J, Chen JM, Nedzelski JM, Lin VY. Telephone use and the factors influencing it among cochlear implant patients. Cochlear Implants Int. 2011 Aug;12(3):140-6. DOI: 10.1179/146701011X12998393351321
9.
Devocht EMJ, Janssen AML, Chalupper J, Stokroos RJ, George ELJ. The benefits of bimodal aiding on extended dimensions of speech perception: intelligibility, listening effort, and sound quality. Trends Hear. 2017 Jan-Dec;21. DOI: 10.1177/2331216517727900
10.
DGHNO. Weißbuch – Cochlea-Implantat (CI)-Versorgung Empfehlungen. 2018.
11.
Dietz A, Buschermöhle M, Sivonen V, Willberg T, Aarnisalo AA, Lenarz T, Kollmeier B. Characteristics and international comparability of the Finnish matrix sentence test in cochlear implant recipients. Int J Audiol. 2015;54 Suppl 2:80-7. DOI: 10.3109/14992027.2015.1070309
12.
Domingo Y, Holmes E, Johnsrude IS. The benefit to speech intelligibility of hearing a familiar voice. J Exp Psychol Appl. 2020 Jun;26(2):236-47. DOI: 10.1037/xap0000247
13.
Feuerstein JF. Monaural versus binaural hearing: ease of listening, word recognition, and attentional effort. Ear Hear. 1992 Apr;13(2):80-6.
14.
Freyaldenhoven MC, Plyler PN, Thelin JW, Burchfield SB. Acceptance of noise with monaural and binaural amplification. J Am Acad Audiol. 2006 Oct;17(9):659-66. DOI: 10.3766/jaaa.17.9.5
15.
Granberg S, Möller K, Skagerstrand A, Möller C, Danermark B. The ICF Core Sets for hearing loss: researcher perspective, Part II: Linking outcome measures to the International Classification of Functioning, Disability and Health (ICF). Int J Audiol. 2014 Feb;53(2):77-87. DOI: 10.3109/14992027.2013.858279
16.
Hawley ML, Sherlock LP, Formby C. Intra- and intersubject variability in audiometric measures and loudness judgments in older listeners with normal hearing. Semin Hear. 2017 Feb;38(1):3-25. DOI: 10.1055/s-0037-1598063
17.
Hey M, Hocke T, Hedderich J, Müller-Deile J. Investigation of a matrix sentence test in noise: reproducibility and discrimination function in cochlear implant patients. Int J Audiol. 2014;53(12):895-902. DOI: 10.3109/14992027.2014.938368
18.
Holmes E, Domingo Y, Johnsrude IS. Familiar voices are more intelligible, even if they are not recognized as familiar. Psychol Sci. 2018 Oct;29(10):1575-83. DOI: 10.1177/0956797618779083
19.
Hoppe U, Hocke T, Digeser F. Bimodal benefit for cochlear implant listeners with different grades of hearing loss in the opposite ear. Acta Otolaryngol. 2018 Aug;138(8):713-21. DOI: 10.1080/00016489.2018.1444281
20.
Hornsby BW, Naylor G, Bess FH. A taxonomy of fatigue concepts and their relation to hearing loss. Ear Hear. 2016 Jul-Aug;37 Suppl 1:136S-44S. DOI: 10.1097/AUD.0000000000000289
21.
Jerger J, Silman S, Silverman C, Emmer M. Binaural interference: Quo vadis? J Am Acad Audiol. 2017 Apr;28(4):266-70. DOI: 10.3766/jaaa.28.4.1
22.
Krueger M, Schulte M, Brand T, Holube I. Development of an adaptive scaling method for subjective listening effort. J Acoust Soc Am. 2017 Jun;141(6):4680. DOI: 10.1121/1.4986938
23.
Laback B, Pok SM, Schmid K, Deutsch WA, Baumgartner WD. Efficiency of binaural cues in a bilateral cochlear implant listener. 2002. Available from: http://www.sea-acustica.es/index.php?id=301
24.
Lenarz T. Cochlear implant - state of the art. GMS Curr Top Otorhinolaryngol Head Neck Surg. 2018 Feb 19;16:Doc04. DOI: 10.3205/cto000143
25.
Marcrum SC, Picou EM, Steffens T. Avoiding disconnection: an evaluation of telephone options for cochlear implant users. Int J Audiol. 2017 Mar;56(3):186-93. DOI: 10.1080/14992027.2016.1247502
26.
McArdle RA, Killion M, Mennite MA, Chisolm TH. Are two ears not better than one? J Am Acad Audiol. 2012 Mar;23(3):171-81. DOI: 10.3766/jaaa.23.3.4
27.
McGarrigle R, Munro KJ, Dawes P, Stewart AJ, Moore DR, Barry JG, Amitay S. Listening effort and fatigue: what exactly are we measuring? A British Society of Audiology Cognition in Hearing Special Interest Group 'white paper'. Int J Audiol. 2014 Jul;53(7):433-40. DOI: 10.3109/14992027.2014.890296
28.
Müller-Deile J. Sprachverständlichkeitsuntersuchungen bei Kochleaimplantatpatienten [Speech intelligibility tests in cochlear implant patients]. HNO. 2009 Jun;57(6):580-92. DOI: 10.1007/s00106-009-1930-3
29.
Picou EM, Ricketts TA. Comparison of wireless and acoustic hearing aid-based telephone listening strategies. Ear Hear. 2011 Mar-Apr;32(2):209-20. DOI: 10.1097/AUD.0b013e3181f53737
30.
Picou EM, Ricketts TA. Efficacy of hearing-aid based telephone strategies for listeners with moderate-to-severe hearing loss. J Am Acad Audiol. 2013 Jan;24(1):59-70. DOI: 10.3766/jaaa.24.1.7
31.
Schulte M, Heeren J, Mirkovic B, Meis M, Latzel M. Helfen Hörgeräte die Hörermüdung zu reduzieren? [Do hearing aids help to reduce listening fatigue?]. In: Deutsche Gesellschaft für Audiologie e.V., editors. 23. Jahrestagung der Deutschen Gesellschaft für Audiologie; 2020 Sep 3-4; Köln. Düsseldorf: German Medical Science GMS Publishing House; 2020. Doc147. DOI: 10.3205/20dga147
32.
Sousa AF, Couto MIV, Martinho-Carvalho AC. Quality of life and cochlear implant: results in adults with postlingual hearing loss. Braz J Otorhinolaryngol. 2018 Jul-Aug;84(4):494-9. DOI: 10.1016/j.bjorl.2017.06.005
33.
Wagener KC. Factors influencing sentence intelligibility in noise [Dissertation]. Oldenburg: Carl von Ossietzky Universität Oldenburg, bis-Verlag; 2004.
34.
Wagener KC, Brand T, Kollmeier B. Entwicklung und Evaluation eines Satztests für die deutsche Sprache III: Evaluation des Oldenburger Satztests [Development and evaluation of a sentence test for the German language III: evaluation of the Oldenburg sentence test]. Z Audiol. 1999;38(3):86-95.
35.
Wagener KC, Hochmuth S, Ahrlich M, Zokoll MA, Kollmeier K. Der weibliche Oldenburger Satztest [The female Oldenburg sentence test]. In: Deutsche Gesellschaft für Audiologie e.V., editors. Abstracts der 17. Jahrestagung der Deutschen Gesellschaft für Audiologie [CD-ROM]; 2014. ISBN: 978-3-939296-06-5.
36.
Warren CD, Nel E, Boyd PJ. Controlled comparative clinical trial of hearing benefit outcomes for users of the Cochlear™ Nucleus® 7 Sound Processor with mobile connectivity. Cochlear Implants Int. 2019 May;20(3):116-26. DOI: 10.1080/14670100.2019.1572984
37.
Winneke AH, Schulte M, Vormann M, Latzel M. Effect of directional microphone technology in hearing aids on neural correlates of listening and memory effort: an electroencephalographic study. Trends Hear. 2020 Jan-Dec;24. DOI: 10.1177/2331216520948410
38.
Wolfe J, Morais Duke M, Schafer E, Cire G, Menapace C, O'Neill L. Evaluation of a wireless audio streaming accessory to improve mobile telephone performance of cochlear implant users. Int J Audiol. 2016;55(2):75-82. DOI: 10.3109/14992027.2015.1095359
39.
Zirn S, Angermeier J, Arndt S, Aschendorff A, Wesarg T. Reducing the device delay mismatch can improve sound localization in bimodal cochlear implant/hearing-aid users. Trends Hear. 2019 Jan-Dec;23. DOI: 10.1177/2331216519843876