gms | German Medical Science

Artificial Vision 2024

The International Symposium on Visual Prosthetics

05. - 06.12.2024, Aachen, Germany

Eye movements support memory-guided search with peripheral scotoma simulation in virtual reality

Meeting Abstract


  • Nico Marek - Department of Psychology
  • S. Pollmann - Department of Psychology; Center for Brain and Behavioral Sciences, Otto-von-Guericke Universität Magdeburg, Magdeburg, Germany

Artificial Vision 2024. Aachen, 05.-06.12.2024. Düsseldorf: German Medical Science GMS Publishing House; 2025. Doc24artvis50

doi: 10.3205/24artvis50, urn:nbn:de:0183-24artvis502

Published: May 9, 2025

© 2025 Marek et al.
This article is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.



Text

Objective: Retinal implants can affect visual search by either supporting eye movements (implants with photosensors in the eye, e.g., Alpha-IMS) or preventing their use for search (implants driven by an external camera, e.g., Argus II). Eye movements are known to support memory for faces and scenes. Here, we ask whether they also support memory-guided visual search. Using a virtual reality variant of the contextual cueing paradigm, we asked whether repeated presentation of a visual configuration yields a search time advantage over search in a newly generated configuration (Chun & Jiang, 1998). Specifically, we investigated whether this contextual cueing benefit is observed when vision is restricted to a central field of view (analogous to RI-vision) that can be shifted either by eye or head movements, or by head movements alone. We expected the ability to use eye movements to be essential for memory guidance by contextual cueing.
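The repeated-versus-novel display logic of the contextual cueing paradigm described above can be sketched as follows. This is an illustrative sketch only, not the authors' implementation; all names and parameters (grid size, set sizes, block structure) are assumptions.

```python
import random


def make_configuration(n_distractors=11, grid=8, rng=random):
    """Generate one search display: a target position plus distractor positions,
    sampled without overlap from a hypothetical grid of candidate locations."""
    cells = [(x, y) for x in range(grid) for y in range(grid)]
    positions = rng.sample(cells, n_distractors + 1)
    return {"target": positions[0], "distractors": positions[1:]}


def build_trial_list(n_repeated=12, n_novel_per_block=12, n_blocks=2, seed=0):
    """Repeated displays recur identically in every block (allowing incidental
    learning of their layout); novel displays are generated fresh each block."""
    rng = random.Random(seed)
    repeated = [make_configuration(rng=rng) for _ in range(n_repeated)]
    trials = []
    for _ in range(n_blocks):
        block = [("repeated", cfg) for cfg in repeated]
        block += [("novel", make_configuration(rng=rng))
                  for _ in range(n_novel_per_block)]
        rng.shuffle(block)  # interleave repeated and novel trials
        trials.extend(block)
    return trials
```

Contextual cueing is then measured as the reaction time advantage for "repeated" over "novel" trials emerging across blocks.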

Materials and Methods: Forty-two participants were randomly assigned to either the “static” or the “gaze-contingent” condition. In the gaze-contingent condition, the peripheral scotoma overlay was centered on fixation and continuously updated by a binocular eye tracker built into the VR goggles; in the static condition, it remained fixed to the display center, so that only head (or body) movements enabled participants to search the display.

Results: Repeated-measures ANOVAs on mean reaction times were performed for both groups. While the “static” group showed no significant contextual cueing effect, the “gaze-contingent” group did, with a search time advantage for repeated displays.

Discussion: Contextual cueing depended on the ability to make eye movements while searching the environment. This implies that the inability to use eye movements for search, as imposed by RI-systems using an external camera, prevents the use of incidentally learned spatial configurations for memory-guided search.