gms | German Medical Science

Artificial Vision — The 2nd Bonn Dialogue. The International Symposium on Visual Prosthesis

Retina Implant Foundation

19.09.2009, Bonn

Why isn't prosthetic vision like biological vision? Results from simulation studies

Meeting Abstract


Artificial Vision – The 2nd Bonn Dialogue. The International Symposium on Visual Prosthesis. Bonn, 19.09.2009. Düsseldorf: German Medical Science GMS Publishing House; 2009. Doc09ri07

DOI: 10.3205/09ri07, URN: urn:nbn:de:0183-09ri074

Published: November 30, 2009

© 2009 Dagnelie.
This is an Open Access article distributed under the terms of the Creative Commons License (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.de). It may be reproduced, distributed, and made publicly available, provided that the author and source are credited.



Text

Background: As several groups have embarked on patient testing with epiretinal and subretinal implants over the last few years, the results have been at the same time encouraging and disappointing. Clearly the patients are “seeing,” yet the vision they describe and utilize does not match the simple simulations we envisaged a few years ago. Micro-anatomical studies have taught us about the fundamental changes in signal processing in the degenerating retina, and by adjusting our simulations accordingly we can begin to understand the patients’ problems: what prosthesis wearers may see, and how much they can learn to understand through practice.

Objective: To adapt the prosthetic vision simulation environment, specifying the appearance of individual phosphenes and their temporal and spatial interactions, so they model the percepts reported by, and data collected from, prosthesis wearers; and to test this phosphene-based vision in sighted volunteers.

Methods: We have created a modeling environment that allows us to build in: 1. configuration information regarding different implants, which provide the spatial and temporal electric field properties at the tissue interface; 2. spontaneous “noise” backgrounds corresponding to the “light shows” described by many RP patients; 3. eye movement compensation so the “phosphenes” can be stabilized on the subject’s retina; and 4. patient data from intraoperative experiments and early implant wearers, in particular threshold and dynamic range data. In this environment our subjects perform a series of recognition, localization, discrimination, and eye-hand coordination tasks.
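The kind of phosphene rendering described above can be illustrated with a minimal sketch: an input image is sampled at a coarse electrode grid, a detection threshold and a small number of brightness levels stand in for the measured threshold and dynamic-range data, additive noise stands in for the spontaneous “light show” background, and each surviving sample is drawn as a Gaussian blob. All parameter names and values here are illustrative assumptions, not the authors’ actual model or code.

```python
import numpy as np

def simulate_phosphenes(image, grid=(6, 10), threshold=0.1, levels=4,
                        noise_std=0.05, sigma_px=6, rng=None):
    """Render a grayscale image (2-D array, values in [0, 1]) as a coarse
    grid of Gaussian phosphene blobs.

    Hypothetical parameters: `threshold` mimics a perceptual threshold,
    `levels` a limited dynamic range, and `noise_std` spontaneous
    background activity (the "light shows" reported by RP patients).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    out = np.zeros((h, w))
    # electrode (phosphene) centers on a regular grid
    ys = np.linspace(0, h, grid[0], endpoint=False) + h / grid[0] / 2
    xs = np.linspace(0, w, grid[1], endpoint=False) + w / grid[1] / 2
    yy, xx = np.mgrid[0:h, 0:w]
    for cy in ys:
        for cx in xs:
            # sample local image intensity at the electrode location
            val = image[int(cy), int(cx)]
            # add spontaneous background "noise" activity
            val += rng.normal(0.0, noise_std)
            if val < threshold:  # below perceptual threshold: no phosphene
                continue
            # quantize to the limited dynamic range
            val = np.round(np.clip(val, 0, 1) * (levels - 1)) / (levels - 1)
            # draw the phosphene as a Gaussian blob
            out += val * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                                / (2 * sigma_px ** 2))
    return np.clip(out, 0, 1)
```

In a full simulator this rendering would be driven by a gaze tracker so the phosphene grid stays stabilized on the subject’s retina; the sketch omits that step and renders a single stabilized frame.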

Results: Over the past 3 years we have concentrated on visually guided task performance: mobility in a virtual building, a checkers game, and a maze tracing experiment. In each of these experiments subjects were trained to understand the impoverished reality of their “phosphene world” and to perform tasks never shown to them in unfiltered, free-viewing conditions. All subjects tested have been able to learn the tasks, and have substantially improved their performance, often through many hours of practice. Examples of tasks and performance will be shown.

Conclusion: While simulations can tell us much about the ability of the visual system to adapt to extremely adverse visual conditions, they can only provide a meaningful contribution to prosthetic vision rehabilitation if the image transformations presented to the subjects match the reality experienced by the prosthesis wearer. To this end, basic research on the transformed, degenerated retina and the prosthesis recipients’ experience need to be fully integrated.

This lecture is available as video recording (Attachment 1 [Attach. 1]).