gms | German Medical Science

24. Jahrestagung der Deutschen Gesellschaft für Audiologie

Deutsche Gesellschaft für Audiologie e. V.

14.09. - 17.09.2022, Erfurt

Audio-visual speech processing task to assess speech understanding on a cortical level

Meeting Abstract

  • presenting/speaker András Bálint - Inselspital, University of Bern, Bern, CH
  • Wilhelm Wimmer - Inselspital, University of Bern, Bern, CH
  • Marco Caversaccio - Inselspital, University of Bern, Bern, CH
  • Stefan Weder - Inselspital University Hospital Bern, Bern, CH

Deutsche Gesellschaft für Audiologie e. V. 24. Jahrestagung der Deutschen Gesellschaft für Audiologie. Erfurt, 14.-17.09.2022. Düsseldorf: German Medical Science GMS Publishing House; 2022. Doc096

doi: 10.3205/22dga096, urn:nbn:de:0183-22dga0969

Published: September 12, 2022

© 2022 Bálint et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.


Text

Summary: We demonstrate the feasibility of an audio-visual speech processing task for objectively evaluating speech understanding using functional neuroimaging in normal-hearing subjects.

Introduction: Speech comprehension is determined by the highly specific organization of the auditory cortex. Through stimulus-evoked cortical activations measured by functional near-infrared spectroscopy (fNIRS), we can study the underlying auditory networks. Measuring such activations requires a well-defined testing protocol. Our aim is to set up an audio-visual task that enables us to measure functional brain activity related to speech understanding in temporal and occipital cortical regions using fNIRS. The task should be

1. deducible from clinically established tests,
2. capable of inducing maximal cortical activation,
3. broad enough to cover all relevant cortical areas, and
4. time-efficient and reproducible.

In addition, the task should be suitable for both normal-hearing individuals and those with cochlear implants (CI).

Methods: We recruited 10 normal-hearing participants to evaluate our protocol. The protocol consists of a resting state (5 minutes) and two stimulation periods (2 × 12 minutes). During the stimulation periods, we present 13-second video recordings of the Oldenburg Sentence Test (OLSA) [1]. The stimuli are presented in four modalities: speech alone, speech in noise, visual alone (i.e., lip reading), or audio-visual. Each stimulation type is repeated 10 times in a counterbalanced block design. At random time points, interactive questions are asked about the content. After the measurement, a 3D scan is performed to digitize the anatomical locations covered.
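The counterbalanced block design above can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' actual stimulus-presentation code: the constraint shown here (each block of four trials contains every modality exactly once, in shuffled order) is one common way to counterbalance and is an assumption, as the abstract does not specify the exact scheme.

```python
import random

# Hypothetical sketch of the counterbalanced block design described above.
# Condition names and the block-wise counterbalancing scheme are assumptions.
CONDITIONS = ["speech_alone", "speech_in_noise", "visual_alone", "audio_visual"]
REPETITIONS = 10       # each stimulation type is repeated 10 times
TRIAL_DURATION_S = 13  # 13-second OLSA video recordings

def make_schedule(seed=0):
    """Build a trial order of REPETITIONS blocks, where each block
    presents every condition exactly once in a freshly shuffled order."""
    rng = random.Random(seed)  # fixed seed -> reproducible schedule
    schedule = []
    for _ in range(REPETITIONS):
        block = CONDITIONS[:]
        rng.shuffle(block)
        schedule.extend(block)
    return schedule

schedule = make_schedule()
# 4 conditions x 10 repetitions = 40 trials, balanced across the run
assert len(schedule) == 40
assert all(schedule.count(c) == REPETITIONS for c in CONDITIONS)
```

In practice, the 40 trials would be split across the two 12-minute stimulation periods, and one might add further constraints (e.g., no immediate condition repeats across block boundaries), which this sketch does not enforce.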

Results: Our proposed protocol was successfully tested in 10 normal-hearing subjects. As test material, we presented OLSA sentences, which are widely used clinically. During the stimulation periods, we were able to measure activation patterns temporally and occipitally. After visual stimulation, we observed an increase in oxygenated haemoglobin (HbO) concentration in the visual cortex, but no increase in the auditory cortex. Conversely, after auditory stimulation we measured activation temporally, with only baseline activity occipitally. Following the audio-visual condition, cortical activation was observable in both regions. Overall, the auditory responses are less prominent than the visual responses, owing to the underlying anatomy and the observed response variability [2]. The optode positions were selected to enable measurements in CI users as well.

Discussion: The feasibility of an OLSA-based audio-visual speech processing task has been demonstrated by measuring functional brain activity in normal-hearing subjects. The next steps involve consolidating the responses in additional normal-hearing subjects, which will serve as baseline responses in a group-level comparison against cochlear-implanted patients.


References

1. Llorach G, Kirschner F, Grimm G, Zokoll MA, Wagener KC, Hohmann V. Development and evaluation of video recordings for the OLSA matrix sentence test. Int J Audiol. 2022 Apr;61(4):311-321. DOI: 10.1080/14992027.2021.1930205
2. Wiggins IM, Anderson CA, Kitterick PT, Hartley DE. Speech-evoked activation in adult temporal cortex measured using functional near-infrared spectroscopy (fNIRS): Are the measurements reliable? Hear Res. 2016 Sep;339:142-54. DOI: 10.1016/j.heares.2016.07.007