Date of Award

12-2013

Document Type

Thesis

Degree Name

Master of Science (MS)

Department

Computer Science

First Advisor

Dr. Mahadevan Subramaniam

Second Advisor

Dr. Parvathi Chundi

Third Advisor

Dr. Eyal Margalit

Abstract

Evolutionarily, human beings have come to rely on vision more than any other sense, and with the prevalence of visual stimuli and the necessity of computers and visual media in everyday activities, losing it is especially problematic. Therefore, the development of an accurate and fast retinal prosthesis to restore the lost portions of the visual field for those with specific types of vision loss is vital, but current methodologies are extremely limited in scope. All current models use a spatio-temporal (ST) filter, which uses a difference of Gaussians (DoG) to mimic the inner layers of the retina and a noisy leaky integrate-and-fire (NLIF) unit to simulate the retinal ganglion cells. None of these models, however, shows how these filters are mapped to one another, and therefore how they simulate the interaction of cells within the retina.
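
As a rough illustration of the ST filter stage described above, the Python sketch below applies a difference-of-Gaussians spatial filter to an image frame and drives a noisy leaky integrate-and-fire unit with part of the result. The function names, kernel widths, time constants, gain, and noise level are illustrative assumptions, not the hardware design evaluated in the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(frame, sigma_center=1.0, sigma_surround=3.0):
    """Difference of Gaussians: a narrow 'center' minus a wide 'surround',
    mimicking the center-surround receptive fields of the inner retina."""
    center = gaussian_filter(frame, sigma_center)
    surround = gaussian_filter(frame, sigma_surround)
    return center - surround

def nlif_spikes(drive, steps=100, dt=1e-3, tau=0.02,
                threshold=1.0, noise_std=0.05, rng=None):
    """Noisy leaky integrate-and-fire unit for one ganglion cell:
    integrates its input drive with a leak term and Gaussian noise,
    emitting a spike (and resetting) whenever the membrane potential
    crosses the threshold."""
    rng = rng or np.random.default_rng()
    v, spikes = 0.0, []
    for t in range(steps):
        v += (-v + drive) * dt / tau + noise_std * np.sqrt(dt) * rng.standard_normal()
        if v >= threshold:
            spikes.append(t * dt)
            v = 0.0
    return spikes

# Example: filter a random 64x64 frame and drive one NLIF unit with the
# mean response of a small patch of the filtered image (arbitrary gain).
frame = np.random.rand(64, 64)
filtered = dog_filter(frame)
print(nlif_spikes(filtered[30:34, 30:34].mean() * 50.0))
```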

The mapping is key to having a fast and efficient filtering method, one that allows for higher-resolution images with significantly less hardware and, therefore, lower power requirements. The focus of this thesis was streamlining this process: the first major portion was applying a pipelining system to the 3D-ADoG, which showed significant improvement over the design by Eckmiller. The major contribution was the mapping process: three mapping schemes were tried, and a significant difference was found between them (a sketch of such a mapping follows below). While none of the models met the timing requirements, the speedup ratios between the methods were significant.
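
The three mapping schemes themselves are not spelled out in this abstract. Purely as a sketch of what mapping ST filter outputs onto NLIF units can look like, the hypothetical function below tiles a filtered frame into patches with a tunable overlap, so that each unit receives one patch as its drive; the patch size, stride, and pooling operation are assumptions for illustration, not the schemes compared in the thesis.

```python
import numpy as np

def map_patches(filtered, patch=8, overlap=4):
    """Tile a filtered frame into (possibly overlapping) patches, one drive
    value per NLIF unit. overlap=0 gives a disjoint one-to-one tiling; larger
    overlaps let neighbouring units share pixels, at the cost of more work
    (and more units) per frame."""
    stride = patch - overlap
    h, w = filtered.shape
    drives = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            drives.append(filtered[y:y + patch, x:x + patch].mean())
    return np.array(drives)

# On a 64x64 frame with 8x8 patches, no overlap yields 64 units,
# while a 4-pixel overlap yields 225 units driven from the same frame.
frame = np.random.rand(64, 64)
print(map_patches(frame, overlap=0).size, map_patches(frame, overlap=4).size)
```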

Despite these speedups and potential power savings, none of the other papers makes specific mention of using any mapping scheme, or of how such schemes improve both the speed and the quality of the output images; the closest they come is a very vague reference to the amount of overlap as a tunable feature. Nevertheless, mapping is a key feature in developing the next-generation prosthesis, and the manner in which the output of the ST filter bank is mapped appears to have a significant effect on the speed, quality, and efficiency of the system as a whole.

Comments

A Thesis Presented to the Department of Computer Science and the Faculty of the Graduate College, University of Nebraska, in Partial Fulfillment of the Requirements for the Degree Master of Science, University of Nebraska at Omaha. Copyright 2013 Jonathan Gesell.