25th Annual Meeting of the Deutsche Gesellschaft für Audiologie

Deutsche Gesellschaft für Audiologie e. V.

01.03.–03.03.2023, Köln

Distributed interactive low-delay virtual reality for music and research applications

Meeting Abstract

  • Giso Grimm (presenting author) - Universität Oldenburg, Oldenburg, DE
  • Angelika Kothe - Universität Oldenburg, Oldenburg, DE
  • Volker Hohmann - Universität Oldenburg, Oldenburg, DE

Deutsche Gesellschaft für Audiologie e. V. 25th Annual Meeting of the Deutsche Gesellschaft für Audiologie. Köln, 01.–03.03.2023. Düsseldorf: German Medical Science GMS Publishing House; 2023. Doc015

doi: 10.3205/23dga015, urn:nbn:de:0183-23dga0156

Published: March 1, 2023

© 2023 Grimm et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. For license information, see http://creativecommons.org/licenses/by/4.0/.


Text

In this talk, we present a system that enables distributed virtual acoustics for interactive communication. Communication situations with alternating roles require very low sound transmission delay, which is particularly critical for musical interaction, where the upper limit of tolerable delay lies between 30 and 50 ms, depending on the genre. Depending on the network connection, our system achieves latencies between 20 ms (local network) and 100 ms (intercontinental connection), with typical values of 30–40 ms. This is far below the delay of typical video-conferencing tools (100–500 ms) and is sufficient for seamless speech communication and, on local or continental connections, also for music applications. The system’s virtual acoustics engine is based on a physical sound propagation model [1]. Binaural signals are generated with a parametric HRTF model [2] whose filter parameters were optimized for human perception using data-driven techniques. Head tracking can be used both locally, for binaural rendering, and at the remote acoustic models, for head-orientation-dependent directionality of remote interlocutors. For behavioral analysis, further sensor data, such as EOG or EEG, can be sent to a central data logging system in addition to the head motion data. Alongside a technical description of the system, we present example data from a distributed measurement of head motion behavior in speech communication.
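To make the latency figures above concrete, the following minimal Python sketch sums the main components of a one-way mouth-to-ear delay: sound card capture and playback buffers, a network jitter buffer, and the network transport itself. All buffer sizes and network delays below are illustrative assumptions, not measured values from the system described here.

# Hypothetical one-way latency budget for a distributed low-delay
# audio link. All component values are assumptions chosen for
# plausibility, not measurements of the system in this abstract.

FS = 48000    # sample rate in Hz
BLOCK = 128   # audio block size in samples (assumed low-latency setting)

def buffering_delay_ms(blocks: int) -> float:
    """Delay in milliseconds contributed by `blocks` audio blocks."""
    return 1000.0 * blocks * BLOCK / FS

def one_way_delay_ms(network_ms: float,
                     capture_blocks: int = 2,
                     jitter_blocks: int = 2,
                     playback_blocks: int = 2) -> float:
    """Total one-way delay: audio buffering plus network transport."""
    return buffering_delay_ms(capture_blocks + jitter_blocks
                              + playback_blocks) + network_ms

if __name__ == "__main__":
    # Assumed one-way network delays for three connection types:
    for label, net_ms in [("local network", 1.0),
                          ("continental", 15.0),
                          ("intercontinental", 80.0)]:
        print(f"{label:>16}: {one_way_delay_ms(net_ms):5.1f} ms")

With these assumptions, the budget comes to about 17 ms on a local network, 31 ms on a continental link, and 96 ms on an intercontinental connection, consistent with the 20–100 ms range stated above.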


References

1.
Grimm G, Luberadzka J, Hohmann V. A toolbox for rendering virtual acoustic environments in the context of audiology. Acta Acustica united with Acustica. 2019;105(3):566–78. DOI: 10.3813/AAA.919337
2.
Schwark F, Schädler MR, Grimm G. Data-driven optimization of parametric filters for simulating head-related transfer functions in real-time rendering systems. EUROREGIO BNAM2022. 2022:1–10.