gms | German Medical Science

Artificial Vision 2017

The International Symposium on Visual Prosthetics

01.12. - 02.12.2017, Aachen

Visual prostheses with higher functionality by simplifying the visual input

Meeting Abstract

  • Gislin Dagnelie - Johns Hopkins University, Baltimore, USA
  • M. Barry - Johns Hopkins University, Baltimore, USA
  • A. Caspi - Second Sight Medical Products, Sylmar, USA
  • A. Roy - Second Sight Medical Products, Sylmar, USA
  • P. Gibson - Minnesota Health Solutions & Advanced Medical Electronics Corp, Minneapolis, USA
  • K. Kramer - Minnesota Health Solutions & Advanced Medical Electronics Corp, Minneapolis, USA

Artificial Vision 2017. Aachen, 01.-02.12.2017. Düsseldorf: German Medical Science GMS Publishing House; 2017. Doc17artvis36

doi: 10.3205/17artvis36, urn:nbn:de:0183-17artvis367

Published: November 30, 2017

© 2017 Dagnelie et al.
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License. See license information at http://creativecommons.org/licenses/by/4.0/.



Background: Retinal prostheses have not lived up to our highest hopes, or even more modest aspirations, primarily due to the profound reorganization of the degenerated retina and our inability to selectively stimulate inner retinal neurons, individually or in small clusters. Barring a major breakthrough towards more selective stimulation, our best hope for improved functionality is through simplification and selective processing of the imagery presented to the prosthesis wearer.

Methods: We followed 3 lines of investigation to pre-process and simplify the input image, using: 1) a thermal imager that allows the user to select objects and persons on the basis of their temperature contrast relative to their surroundings; 2) a stereo camera pair and depth filtering to eliminate all information coming from distances outside the range of interest; 3) an interactive recognition system to locate and highlight objects of interest. Here we report on two of these methods: results collected with prototype thermal imaging and depth filtering systems in Argus II users.
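The depth-filtering approach (method 2) can be sketched as a simple per-pixel mask: given a depth map from the stereo pair, pixels outside the range of interest are blanked before the image reaches the implant. This is an illustrative sketch only; the abstract does not describe the actual processing pipeline, and the function, parameter names, and example values below are hypothetical.

```python
def depth_filter(image, depth_map, center, half_width):
    """Keep pixels whose depth lies within [center - half_width,
    center + half_width] metres; blank everything else.
    `image` and `depth_map` are matching 2-D lists (row-major)."""
    lo, hi = center - half_width, center + half_width
    return [
        [pix if lo <= d <= hi else 0
         for pix, d in zip(img_row, depth_row)]
        for img_row, depth_row in zip(image, depth_map)
    ]

# Hypothetical example: a 2x3 intensity image and matching depth map (metres).
image = [[200, 180, 90], [150, 60, 210]]
depth = [[0.4, 1.6, 0.5], [2.8, 0.45, 1.5]]

# Centering the filter at 0.45 m with a +/-0.15 m window, roughly matching
# the close-range tasks (objects at 0.3-0.7 m), suppresses distant clutter:
filtered = depth_filter(image, depth, center=0.45, half_width=0.15)
print(filtered)  # -> [[200, 0, 90], [0, 60, 0]]
```

In the discrimination tests, moving the subject's slider would correspond to varying `center` while the window width stays fixed, so objects "light up" as they enter the range of interest.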

Results: Thermal imaging: Thermal images were presented in laboratory and real-life situations. In the lab, subjects were asked to indicate which 3 out of 6 chairs around a conference table were occupied; subjects performed 30–50% better with thermal than with visible light imaging, and responded in half the time. In object localization and counting tasks, the position(s) and number of cups of hot water had to be reported; subjects performed near chance with visible light and near 100% correct with thermal imaging. In real-world tests, subjects were asked to locate people seated in a lobby and to identify the timing and direction of people walking by. Both seated and walking people were detected more reliably and quickly by one subject, but not by another. Distance filtering: Subjects performed detection and discrimination tests at far (1 or 2 chairs at 1–3 m) and close (canister, cup, or bag of candy at 0.3–0.7 m) ranges; in either case they performed 3 tasks: presence (Y/N, 2-AFC, 10 trials), counting and lateral position (Left/Right/Both, 3-AFC, 9 trials), and depth discrimination (R closer/farther than L, 2-AFC, 6 trials, with depth separation varied to estimate thresholds), with conditions presented in random order. The presence and position tests were performed with and without distance filtering; during the discrimination tests the subject used a slider to adjust the center distance of the filter, allowing comparison of the objects. Four Argus II users were 100% correct on the far presence test with distance filtering; without filtering, three subjects performed at chance, while the 4th scored 80% (n.s.) but took 49 s (vs. 17 s with filtering). In the near presence test subjects were 90–100% correct with, vs. at chance without, filtering.
One subject was able to perform the far position test both with and without filtering, while another was at chance in both conditions; for the near position test all subjects were above chance (67–89%) with, and at or near chance (22–56%) without filtering. For the depth discrimination at 1.50 m, the minimum depth difference successfully detected was 10–20 cm, while at 40 cm a depth difference of 2.5 cm was reliably detected.

Conclusions: Argus II users can benefit from thermal imagery for both social interactions and temperature-based object localization, as well as from depth-filtered imagery by detecting the presence and position of objects within a range of interest and by estimating relative distances of objects to within 10%. It is likely that thermal imaging performance will improve as subjects become more familiar with its representation of the world. A production version of a combined thermal/depth sensitive system is under development.