Article
Distributed interactive low-delay virtual reality for music and research applications
Published: March 1, 2023
Text
In this talk we present a system that enables distributed virtual acoustics for interactive communication. Communication situations with alternating roles require very low delay in sound transmission; this is particularly critical for musical communication, where the upper limit of tolerable delay is between 30 and 50 ms, depending on the genre. Our system achieves latencies between 20 ms (local network) and 100 ms (intercontinental connection), depending on the network connection, with typical values of 30–40 ms. This is far below the delay of typical video-conferencing tools (100–500 ms) and is sufficient for seamless speech communication and, over local or continental connections, also for music applications. The system’s virtual acoustics engine is based on a physical sound propagation model [1]. Binaural signals are generated via a parametric HRTF model [2]; data-driven techniques were used to optimize the HRTF parameters for human perception. Head tracking can be used both locally for binaural rendering and at the remote acoustic models for head-orientation-dependent directionality of remote interlocutors. For behavioral analysis, other sensor data, such as EOG or EEG, can be sent to a central data logging system in addition to head motion data. Alongside a technical description of the system, we show example data from a distributed measurement of head motion behavior in speech communication.
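The delay budgets quoted above can be summarized in a small sketch. The helper below is purely illustrative (not part of the described system): it checks a measured one-way latency against the genre-dependent tolerance window of roughly 30–50 ms for musical interaction; the function name and threshold parameter are assumptions for this example.

```python
# Illustrative sketch of the latency budgets cited in the abstract:
# musical interaction tolerates roughly 30-50 ms one-way delay
# (genre dependent); the system reports 20-100 ms depending on the
# network, with typical values of 30-40 ms.

def music_interaction_feasible(latency_ms, tolerable_ms=50):
    """Return True if a one-way latency (in ms) stays within the
    assumed genre-dependent upper limit for musical interaction."""
    return latency_ms <= tolerable_ms

# Typical system latency over a continental connection: within budget.
print(music_interaction_feasible(35))    # True

# Intercontinental worst case (100 ms): exceeds the musical budget,
# though still usable for speech communication.
print(music_interaction_feasible(100))   # False
```

For a strict genre (e.g. a 30 ms limit), the same check can be run with `tolerable_ms=30`, in which case only local-network latencies around 20 ms would pass.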
References
1. Grimm G, Luberadzka J, Hohmann V. A toolbox for rendering virtual acoustic environments in the context of audiology. Acta Acustica United with Acustica. 2019;105(3):566–78. DOI: 10.3813/AAA.919337
2. Schwark F, Schädler MR, Grimm G. Data-driven optimization of parametric filters for simulating head-related transfer functions in real-time rendering systems. EUROREGIO BNAM2022. 2022:1–10.