A discussion of the RGC classification toolbox and preliminary data
Published: November 30, 2017
Objective: To improve visual coding strategies for the next generation of the Tübingen subretinal implant, retinal ganglion cell (RGC) coding diversity is quantified with a visual characterization toolbox that builds on previous work by Baden et al. [2016, Nature].
Methods: Visual stimuli drawn from the Euler Lab’s stimulus set (‘chirp’, ‘moving bars’, ‘spatiotemporal noise’, and ‘on-off flash’) are presented via a LightCrafter patterned-light projector to evoke RGC spike trains. RGC responses are recorded with a 60-channel planar microelectrode array (MEA), and the stored raw data are processed with commercial spike-sorting software and custom cell-validation methods.
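The toolbox consumes spike times and stimulus trigger times (see Results). A minimal sketch of how such inputs can be aligned into per-trial responses and a peri-stimulus time histogram is given below; the function names, the trial window, and the bin width are illustrative assumptions, not the toolbox's actual interface.

```python
import numpy as np

def align_spikes_to_triggers(spike_times, trigger_times, window=2.0):
    """Group spike times (s) into per-trial rasters relative to each
    stimulus trigger; `window` is an assumed trial duration in seconds."""
    trials = []
    for t0 in trigger_times:
        in_trial = spike_times[(spike_times >= t0) & (spike_times < t0 + window)]
        trials.append(in_trial - t0)  # spike times relative to trial onset
    return trials

def psth(trials, window=2.0, bin_width=0.05):
    """Average spike count per bin across trials (a peri-stimulus time
    histogram), in spikes per second."""
    edges = np.arange(0.0, window + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for trial in trials:
        counts += np.histogram(trial, bins=edges)[0]
    return counts / (len(trials) * bin_width)
```

Such binned PSTHs (one per stimulus) are a typical feature representation fed into the clustering step described in the Results.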
Results: A large database of RGC responses to the visual stimuli is sorted into cell types by adapting the analysis of Baden et al. to our dataset. Sorting uses principal component analysis (PCA) for feature extraction and fits a Mixture of Gaussians model to these features to identify clusters. The toolbox takes trigger times, spike times, and response annotations as input and outputs the RGC type(s) that best correspond to each data sample. These classifications are compared with each other and with published results to identify types that are easily and reliably identifiable. Finally, the toolbox calculates the probability of assigning each sampled RGC to each of the specific RGC types, clustered according to their functional diversity.
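The PCA-plus-Mixture-of-Gaussians pipeline above can be sketched with standard scikit-learn components. This is a hedged illustration only: the feature matrix here is random stand-in data, and the numbers of principal components and mixture components are assumptions, not values from our analysis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Hypothetical feature matrix: one row per RGC, columns e.g. binned
# responses to the stimulus set (random data stands in for recordings).
rng = np.random.default_rng(0)
responses = rng.normal(size=(200, 100))

# Feature extraction: project each cell's response onto its leading
# principal components (10 components is an assumed choice).
features = PCA(n_components=10).fit_transform(responses)

# Cluster the features with a Mixture of Gaussians; n_components is an
# assumed number of functional RGC types, not a value from this work.
gmm = GaussianMixture(n_components=5, random_state=0).fit(features)

# Soft assignment: probability of each sampled RGC belonging to each
# type, analogous to the probabilities the toolbox reports.
type_probs = gmm.predict_proba(features)  # shape (n_cells, n_types)
```

`predict_proba` returns, per cell, a probability distribution over cluster identities, which corresponds to the per-type assignment probabilities described above.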
Discussion: This toolbox is an enabling method for identifying RGC type-specific activation through the presentation of specially designed electrical stimulation patterns.
Acknowledgments: Tistou und Charlotte Kerstan Stiftung and BMBF (FKZ: 031 A 308)