Sensory Systems/Computer Models/Simulations of Retinal Function

The following section describes a realistic simulation of the activity of retinal ganglion cells, using the computer language Python and the image- and video-processing package OpenCV. After a summary of the main features of the retina that are important for the simulation, the installation of the required software packages is described. The rest of this section introduces the simulation and the corresponding parameters.

Fig. 1: Example of the retina simulation on the famous Lena image

Physiology of the Retina

When light reaches the back of the eye, it enters the cellular layers of the retina. The cells of the retina that detect and respond to light, the photoreceptors, are located at the very back of the retina. There are two types of photoreceptors: rods and cones. Rods allow us to see in dim light but do not allow the perception of color. Cones, on the other hand, allow us to perceive color under normal lighting conditions. In most of the retina, rods outnumber cones. The area of the retina that provides the highest acuity vision lies at the center of our gaze. When light hits a photoreceptor, it interacts with a molecule called a photopigment, which begins the chain reaction that propagates the visual signal. The signal is transmitted to the bipolar cells, which connect the photoreceptors to the ganglion cells. The bipolar cells pass the signal to the ganglion cells, whose axons leave the eye in a large bundle at an area called the optic disc; after leaving the retina, this bundle is called the optic nerve. The optic nerve carries the visual information towards the brain to be processed. There are two other types of cells in the retina that need to be mentioned: horizontal and amacrine cells. Horizontal cells receive input from multiple photoreceptor cells. Amacrine cells receive signals from bipolar cells and are responsible for the regulation and integration of activity in bipolar and ganglion cells [1].

 
Conduction time in the parvocellular and magnocellular layers.

The retina cells form two cell layers: the Outer Plexiform Layer (OPL) and the Inner Plexiform Layer (IPL). Each layer is modeled with specific filters. At the IPL level, which constitutes the retina output, different channels of information can be identified. We focus here on the two best known: the Parvocellular channel (Parvo), dedicated to detail extraction, and the Magnocellular channel (Magno), dedicated to motion information extraction. In the human retina, the Parvocellular channel is most present at the fovea level (central vision) and the Magnocellular channel is most important outside of the fovea (peripheral vision), because of the relative variations in the distribution of the specialized cells [2]. Interestingly, the Parvocellular signals arrive later than the Magnocellular signals, as shown in the image on the left [3].

To understand the functions performed by Magno and Parvo cells, scientists lesioned the pathways that carry this information to the brain and measured the resulting changes in the animal's performance. When cells in the Parvocellular layers of a monkey's lateral geniculate nucleus are destroyed, performance deteriorates on a variety of tasks, such as color discrimination and pattern detection [4]. The most informative result is that when neurons in the Magnocellular layers are destroyed, the animal is less sensitive to rapidly flickering, low spatial frequency targets. This loss of sensitivity shows that the Magnocellular pathway carries the information that supports tasks requiring high temporal frequency information [3]. The neurons in the Magnocellular pathway provide the best information about the high temporal and low spatial frequency components of the image. Performance on motion tasks, and on other tasks that require this information, is better when the Magnocellular pathway signal is available. The signals are, however, not absolutely necessary to perform the task: performance deficits on motion tasks can be compensated for simply by increasing the stimulus contrast; that is, one can compensate for the loss of information in the Magnocellular pathway by improving the quality of the information in the Parvocellular pathway. Hence, the Magnocellular pathway contains information that is particularly useful for visual tasks such as motion perception. The Parvo and Magno pathways are also relevant to neurological diseases such as Alzheimer's and Parkinson's disease; a more informative summary can be found in [5].

The simulations below have been done using the OpenCV library in Python. The outputs of the famous Lena image from the Magno- and Parvocellular layers are shown in Fig. 1. We can see that the output from the Parvo cells contains color and pattern information, while that from the Magno cells contains contours with low spatial frequency. Describing the physiology of the retina with a mathematical model is important for the development of retinal implants and for vision research; some interesting articles on retinal implants can be found in [6] and at webvision.
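As a quick illustration of how such outputs can be produced, the following is a minimal sketch that runs OpenCV's bioinspired retina model on a single image. The image path "lena.png" is a placeholder, and the factory function is named createRetina in OpenCV 3.1 but Retina_create in later versions.

import cv2

# Load a test image; "lena.png" is a placeholder path.
frame = cv2.imread("lena.png")

# Create the retina model for the frame size; newer OpenCV versions use
# cv2.bioinspired.Retina_create instead of createRetina.
retina = cv2.bioinspired.createRetina((frame.shape[1], frame.shape[0]))
retina.clearBuffers()

# The model is temporal, so feed the same frame several times to let the
# spatio-temporal filters settle before reading the outputs.
for _ in range(20):
    retina.run(frame)

parvo = retina.getParvo()   # detail and color information (Parvocellular)
magno = retina.getMagno()   # transient/motion information (Magnocellular)

cv2.imshow("Parvo output", parvo)
cv2.imshow("Magno output", magno)
cv2.waitKey(0)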

The following sections describe the package, its parameter settings, and demonstrations in detail.

Installation Guide and Source Code

 
Horse riding
 
Horse riding parvo

This package is currently hosted on GitHub, where the source code is available. The package supports both Python 2.x and 3.x.

 
Horse riding magno

For installing and running this package successfully, we strongly recommend Anaconda. Anaconda mitigates many problems in updating and installing third-party packages.

Requirements

  • OpenCV3: for image and video processing. There are several ways of installing OpenCV. You can follow the online documentation on OpenCV's website and build OpenCV from source (in this case, make sure you turn on the Python option and build the extra modules). Anaconda users can install the package directly from a binary:
    conda install -c menpo opencv3=3.1.0
    
    You can type the above command in a Linux terminal or Windows console; Anaconda will install the package automatically.
  • PyQtGraph: for designing and managing the GUI. We recommend installing this package using Anaconda's build:
    conda install pyqtgraph
    
    You might not want to install this package directly from PyPI, since there are a few tricky points you need to take care of.
  • Other required packages, such as numpy, can be installed from PyPI and will be checked during the package installation. A quick sanity check for these requirements is shown after this list.
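The following snippet is a minimal sanity check (not part of the package) that the requirements are importable from your Python environment; the bioinspired module is only present if OpenCV was built with the extra (contrib) modules.

import cv2
import numpy
import pyqtgraph

# Report the installed versions and whether the retina model is available.
print("OpenCV:", cv2.__version__)
print("NumPy:", numpy.__version__)
print("pyqtgraph:", pyqtgraph.__version__)
print("bioinspired module available:", hasattr(cv2, "bioinspired"))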

Installation

From PyPI (Recommended)

If you've installed the above-mentioned packages, you can grab the latest stable version from PyPI via:

pip install simretina

From GitHub

Make sure Git is available on your machine. For Windows users, please make sure the Git executable is on your path.

After you have installed OpenCV3 and pyqtgraph, you can install the retina simulation package by:

pip install git+git://github.com/duguyue100/retina-simulation.git \
-r https://raw.githubusercontent.com/duguyue100/retina-simulation/master/requirements.txt

pip grabs the bleeding-edge version of the package and installs it automatically.

Start Retina Viewer

Retina Viewer is the central component of the entire package. It allows you to experiment with the retina model under different parameter settings. Running the viewer also allows you to validate your installation.

Assuming you've installed the package successfully following the above instructions, you can start a terminal/console and type:

retina_viewer.py

The above is not a file name but a command. Once the package is installed, the command can be found by your terminal/console.

For Windows users, the system will either start the viewer right away or ask which program should open the file; in the latter case, find and choose python.exe from your Anaconda installation. If it's not responding, you may want to open a new console and type the above command again.

Note that FFMPEG will be downloaded automatically the first time the viewer runs, if it is not detected by the package.

Software GUI Explained

 
Retina viewer GUI

retina_viewer.py (as shown on the left) is currently divided into five panels. At the top there are three displays, showing, from left to right, the original image/video, the Parvocellular pathway output, and the Magnocellular pathway output. At the bottom left are the controls for running the viewer in different modes. At the bottom middle are the parameters that define the Parvocellular pathway at the Inner Plexiform Layer (IPL) and the Outer Plexiform Layer (OPL). At the bottom right are the parameters that define the Magnocellular pathway at the IPL.

You can change the parameter settings during the simulation. However, since the retina model also has a temporal dimension, a short adaptation period is required after a change of parameters. The details of the parameter settings can be found in the next section.

From the bottom left panel, you first need to select the Operation Mode. The viewer provides five modes (a rough mapping of these modes onto OpenCV data sources is sketched after the list):

  • Image: select an image from the Builtin Examples. The viewer has two image examples, Lena and Dog. The Lena image is the standard test image, and the Dog image is taken from the open image recognition dataset Caltech-256.
  • Image (External): supply an external image; popular image formats are supported. You can click the button Open Image/Video to select one.
  • Video: select one of the two video examples from the Builtin Examples, Horse Riding and Taichi. These examples are taken from the UCF-101 Action Recognition Dataset.
  • Video (External): this mode requires an external video from your system. You can click the button Open Image/Video to select one.
  • Webcam: if you have a webcam on your machine, the viewer will collect the video stream from the webcam and process it.
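The following sketch illustrates roughly how these operation modes map onto OpenCV data sources. It is illustrative only, not the viewer's actual implementation, and the file paths are placeholders.

import cv2

def open_source(mode, path=None):
    """Return a function that yields frames for the chosen operation mode."""
    if mode in ("image", "image_external"):
        frame = cv2.imread(path)          # a single still image
        return lambda: frame
    if mode in ("video", "video_external"):
        cap = cv2.VideoCapture(path)      # frames from a video file
    elif mode == "webcam":
        cap = cv2.VideoCapture(0)         # frames from the default webcam
    else:
        raise ValueError("unknown mode: %s" % mode)
    return lambda: cap.read()[1]

# Example: grab one frame in webcam mode.
next_frame = open_source("webcam")
frame = next_frame()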

Parameters and Settings

Brief Mathematical Description of the Retina Model

 
Computational retina model

In this section, we briefly review the retina model presented in [7]; a conceptual illustration is shown on the right. First, the illumination of the input frame is normalised by the photoreceptors; it is then processed by the Outer Plexiform Layer (OPL) and the Inner Plexiform Layer (IPL). The output of the IPL goes into two channels: the Parvo channel, which extracts details, and the Magno channel, which performs motion analysis.

Illumination Variation Normalisation Using Photoreceptors

The following equation adjusts the input luminance $A(p)$ into the adapted luminance $C(p)$ in the range $[0, V_{max}]$, where $V_{max}$ represents the maximum allowed pixel value in the image (in this case 255 for 8-bit images; this value can differ if the image uses a different coding scheme). It reflects the fact that photoreceptors adjust their sensitivity with respect to the luminance of their neighbourhood:

$$C(p) = \frac{A(p)}{A(p) + A_0(p)}\, V_{max}$$

where $V_0$ is a static compression parameter and $A_0(p)$ is a compression parameter that is linearly linked, through $V_0$, to the local luminance $L(p)$ of the neighbourhood of the photoreceptor. $L(p)$ is computed by applying a spatial low-pass filter to the image, implemented by the horizontal cells network.

This model enhances contrast visibility in dark areas while maintaining it in bright areas.
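A minimal sketch of this local adaptation law is given below. It assumes the linear link $A_0(p) = V_0 L(p) + V_{max}(1 - V_0)$ between the compression parameter and the local luminance, and uses a Gaussian blur as a stand-in for the horizontal-cell spatial low-pass filter; the actual model uses its own recursive spatio-temporal filters.

import cv2
import numpy as np

def local_adaptation(A, V0=0.75, V_max=255.0):
    """Luminance compression as described above (sketch, not the package code)."""
    A = A.astype(np.float64)
    # Local luminance L(p): a Gaussian blur stands in for the
    # horizontal-cell network's spatial low-pass filter.
    L = cv2.GaussianBlur(A, (0, 0), 7)
    # Assumed linear link between the compression parameter and L(p).
    A0 = V0 * L + V_max * (1.0 - V0)
    return V_max * A / (A + A0 + 1e-6)

gray = cv2.imread("lena.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
adapted = local_adaptation(gray).astype(np.uint8)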

Outer Plexiform Layer

A model of the OPL describes the effect of horizontal cells on the signal originating in the photoreceptors. The OPL can be modelled with a nonseparable spatio-temporal filter $F_{OPL}(f_s, f_t)$, where $f_s$ is the spatial frequency and $f_t$ is the temporal frequency. This filter is characterised by:

$$F_{OPL}(f_s, f_t) = F_{ph}(f_s, f_t)\,\bigl(1 - F_h(f_s, f_t)\bigr)$$

where

$$F_{ph}(f_s, f_t) = \frac{1}{1 + \beta_{ph} + 2\alpha_{ph}\bigl(1 - \cos(2\pi f_s)\bigr) + j\,2\pi\tau_{ph} f_t}$$

$$F_{h}(f_s, f_t) = \frac{1}{1 + \beta_{h} + 2\alpha_{h}\bigl(1 - \cos(2\pi f_s)\bigr) + j\,2\pi\tau_{h} f_t}$$

The above two equations can be viewed as two low-pass spatio-temporal filters that model the photoreceptor network ($F_{ph}$) and the horizontal cells network ($F_h$). The output of the network $F_h$ contains only the very low spatial frequencies of the image and is used as the local luminance $L(p)$. $\beta_{ph}$ is the gain of the filter $F_{ph}$ and $\beta_h$ is the gain of the filter $F_h$. $\tau_{ph}$ and $\tau_h$ are temporal constants allowing the temporal noise to be minimised. $\alpha_{ph}$ and $\alpha_h$ are spatial filtering constants, where $\alpha_{ph}$ sets the high cut frequency and $\alpha_h$ sets the low cut frequency.

The difference between the photoreceptor network output and the horizontal cells network output can be represented by two operators, BipON and BipOFF, which respectively give the positive and negative parts of that difference. This models the action of the bipolar cells, which divide the OPL output into two channels, ON and OFF. The OPL filter removes spatio-temporal noise and enhances contours.
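A spatial-only sketch of this OPL stage is shown below: a lightly blurred image stands in for the photoreceptor network output, a strongly blurred one for the horizontal cells output, and the bipolar ON/OFF channels are the positive and negative parts of their difference. The Gaussian blurs and sigma values are illustrative assumptions; the model itself uses recursive spatio-temporal filters.

import cv2
import numpy as np

gray = cv2.imread("lena.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)  # placeholder path

# Photoreceptor network: cuts only the highest spatial frequencies.
photoreceptors = cv2.GaussianBlur(gray, (0, 0), 0.53)
# Horizontal cells network: keeps only the very low spatial frequencies.
horizontal = cv2.GaussianBlur(gray, (0, 0), 7.0)

# OPL output: contour-enhanced difference between the two networks.
opl = photoreceptors - horizontal
bip_on = np.maximum(opl, 0.0)     # BipON: positive part of the difference
bip_off = np.maximum(-opl, 0.0)   # BipOFF: negative part of the difference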

Inner Plexiform Layer and Parvo Channel

The ganglion cells (midget cells) of the Parvo channel receive the contour information coming from the BipON and BipOFF outputs of the OPL.

Here we can apply the same local adaptation law to the BipON and BipOFF outputs as we did for the photoreceptors, thereby further enhancing the contour information. These adapted outputs are finally combined and sent out as the Parvocellular pathway output.

Inner Plexiform Layer and Magno Channel

On the Magnocellular channel of the IPL, amacrine cells act as high-pass temporal filters:

$$\mathrm{Am}(p, t) = \frac{\tau_A}{\tau_A + \Delta t}\,\bigl(\mathrm{Am}(p, t - \Delta t) + \mathrm{Bip}(p, t) - \mathrm{Bip}(p, t - \Delta t)\bigr)$$

where $\Delta t$ is the discrete time step and $\tau_A$ is the time constant of the filter (2 time steps in the default configuration). This filter enhances areas where changes occur in space and time.

The amacrine cells are connected to the bipolar cells (BipON and BipOFF) and to the "parasol" ganglion cells. As on the Parvo channel, the ganglion cells perform local contrast compression, but they also act as a spatial low-pass filter. The result is a high-pass temporal filtering of the contour information, which is smoothed and enhanced (by the low-pass filter and the local contrast compression). As a consequence, only low spatial frequency moving contours are extracted and enhanced (especially contours perpendicular to the motion direction).
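The sketch below illustrates this Magno-channel processing: a first-order temporal high-pass filter in the discrete form given above (with the time constant expressed in frames), followed by a spatial low-pass smoothing that stands in for the parasol ganglion cell stage. It is an illustrative re-implementation under these assumptions, not the package's code.

import cv2
import numpy as np

class AmacrineHighPass:
    """First-order temporal high-pass filter over a sequence of frames."""
    def __init__(self, tau_a=2.0):
        self.a = tau_a / (tau_a + 1.0)   # coefficient for a unit time step
        self.prev_in = None
        self.prev_out = None

    def step(self, bip):
        bip = bip.astype(np.float64)
        if self.prev_in is None:         # first frame: no change detected yet
            self.prev_in = bip
            self.prev_out = np.zeros_like(bip)
        out = self.a * (self.prev_out + bip - self.prev_in)
        self.prev_in, self.prev_out = bip, out
        return out

hp = AmacrineHighPass(tau_a=2.0)
# Feed successive bipolar-cell frames; a random frame stands in here, and the
# Gaussian blur plays the role of the parasol cells' spatial low-pass filter.
magno = cv2.GaussianBlur(hp.step(np.random.rand(64, 64)), (0, 0), 3.0)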

Summary of Parameters and Settings

The following parameter settings are mainly taken from the original retina model implemented in OpenCV. Note that the parameters here are quite different from the settings used in [7]; however, fine-tuned parameter settings can produce better visualisations than the ones reported in the paper.

Parameters for Parvocellular Pathway at IPL and OPL

Each parameter is listed below with its default setting, its bio-"realistic" setting, and a description; a sketch of how to apply these settings through OpenCV's retina interface follows the list.

  • Color Mode: default Color, bio-realistic Color. Specifies whether frames are processed as color images or gray-level images.
  • Normalize Output: default Yes, bio-realistic Yes. If yes, the Parvo output is rescaled to the range 0 to 255.
  • Photoreceptors Local Adaption Sensitivity ($V_0$): default 0.75, bio-realistic 0.89. The photoreceptor sensitivity ranges from 0 to 1; the higher the value, the stronger the logarithmic compression effect.
  • Photoreceptors Temporal Constant ($\tau_{ph}$): default 0.9, bio-realistic 0.9. Used to cut high temporal frequencies (noise, fast motion, etc.); the unit is frames. Logically it should be an integer, but float settings also work.
  • Photoreceptors Spatial Constant ($\alpha_{ph}$): default 0.53, bio-realistic 0.53. Used to cut high spatial frequencies (noise, thick contours, etc.); the unit is pixels. Logically it should be an integer, but float settings also work, as the default value of 0.53 shows.
  • Horizontal Cells Gain: default 0.01, bio-realistic 0.3. Gain of the horizontal cells network; if 0, the mean value of the output is zero, and if the value is near 1, the luminance is not filtered.
  • Horizontal Cells Temporal Constant ($\tau_h$): default 0.5, bio-realistic 0.5. Used to cut low temporal frequencies (local luminance variations); the unit is frames. Float settings also work.
  • Horizontal Cells Spatial Constant ($\alpha_h$): default 7, bio-realistic 7. Used to cut low spatial frequencies (local luminance); the unit is pixels. The value is also used for local contrast computation at the ganglion cell level.
  • Ganglion Cells Sensitivity: default 0.75, bio-realistic 0.89. The compression strength of the ganglion cells' local adaption output; the range 0.6 to 1 gives the best results. A higher value increases sensitivity to low values, and the output saturates faster.
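As an example, the bio-"realistic" values above could be applied through OpenCV's retina interface roughly as follows. The arguments of setupOPLandIPLParvoChannel are passed positionally in the order documented by OpenCV, the 512x512 input size is a placeholder, and the factory function name varies between OpenCV versions.

import cv2

# Create a retina for a 512x512 input (placeholder size) and apply the
# bio-"realistic" Parvo settings listed above.
retina = cv2.bioinspired.createRetina((512, 512))
retina.setupOPLandIPLParvoChannel(True,   # color mode
                                  True,   # normalise output
                                  0.89,   # photoreceptors local adaption sensitivity
                                  0.9,    # photoreceptors temporal constant
                                  0.53,   # photoreceptors spatial constant
                                  0.3,    # horizontal cells gain
                                  0.5,    # horizontal cells temporal constant
                                  7.0,    # horizontal cells spatial constant
                                  0.89)   # ganglion cells sensitivity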

Parameters for Magnocellular Pathway at IPL

As above, each parameter is listed with its default setting, its bio-"realistic" setting, and a description; a sketch of how to apply these settings follows the list.

  • Normalize Output: default Yes, bio-realistic Yes. If yes, the Magno output is rescaled to the range 0 to 255 (recommended); you probably won't see anything if you choose No.
  • Low Pass Filter Gain for Local Contrast Adaption at IPL Level (parasolCells_beta): default 0, bio-realistic 0. Used for the ganglion cells' local adaption; typically 0.
  • Low Pass Filter Time Constant for Local Contrast Adaption at IPL Level (parasolCells_tau): default 0, bio-realistic 0. Used for the ganglion cells' local adaption; the unit is frames, and the typical value is 0 (immediate response).
  • Low Pass Filter Spatial Constant for Local Contrast Adaption at IPL Level (parasolCells_k): default 7, bio-realistic 7. Used for the ganglion cells' local adaption; the unit is pixels.
  • Amacrine Cells Temporal Cut Frequency ($\tau_A$): default 2, bio-realistic 2. The time constant of the first-order high-pass filter of the Magnocellular pathway; the unit is frames.
  • V0 Compression Parameter: default 0.95, bio-realistic 0.95. The compression strength of the ganglion cells' local adaption; the range 0.6 to 1 gives the best results. A higher value increases sensitivity to low values, and the output saturates faster.
  • Temporal Constant for Local Adapt Integration: default 0, bio-realistic 0. The temporal constant of the low-pass filter involved in the local computation of the local "motion mean" used for the local adaption computation.
  • Spatial Constant for Local Adapt Integration: default 7, bio-realistic 7. The spatial constant of the low-pass filter involved in the local computation of the local "motion mean" used for the local adaption computation.
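Similarly, the Magno settings above could be applied through setupIPLMagnoChannel; arguments are passed positionally in OpenCV's documented order, and the 512x512 input size is again a placeholder.

import cv2

# Apply the Magno-channel settings listed above (placeholder 512x512 input).
retina = cv2.bioinspired.createRetina((512, 512))
retina.setupIPLMagnoChannel(True,   # normalise output
                            0.0,    # parasolCells_beta
                            0.0,    # parasolCells_tau
                            7.0,    # parasolCells_k
                            2.0,    # amacrine cells temporal cut frequency
                            0.95,   # V0 compression parameter
                            0.0,    # local adapt integration temporal constant
                            7.0)    # local adapt integration spatial constant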

How the Viewer is Written

The central simulation component is the Retina Simulation Viewer. The viewer is written in a straightforward way: we wrote a static GUI interface and force the entire window to update whenever there is a change in parameters or frames.

The GUI is written entirely with pyqtgraph and pyqt. If you are familiar with Qt's framework, you can design the GUI you want, export it to an XML description file, and then use pyqt to convert that description into a Python class. Here, we hand-coded the entire GUI; therefore, if you look at the code, you will find that a large portion of it configures graphics modules.

The most important part of the code is the update function in the script. The update function hooks into the GUI window and forces it to update whenever necessary. At each step, the update function checks whether the configuration has changed by comparing it against the config dictionary from the previous time step, and then decides whether to reinitialise the data source and the retina model. The update function always produces a frame that fits the current configuration; this frame is then processed by the given retina model and displayed.
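A minimal sketch of this pattern is shown below, assuming a webcam as the data source: a QTimer fires periodically, the update callback grabs a frame, initialises the retina on the first frame, runs the model, and pushes the Parvo output into a pyqtgraph ImageItem. This is illustrative only, not the viewer's actual code.

import cv2
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore

app = pg.mkQApp()
win = pg.GraphicsLayoutWidget(title="Retina update-loop sketch")
view = win.addViewBox()
img_item = pg.ImageItem()
view.addItem(img_item)
win.show()

cap = cv2.VideoCapture(0)      # webcam as the assumed data source
retina = None

def update():
    global retina
    ok, frame = cap.read()
    if not ok:
        return
    if retina is None:         # initialise the model for the frame size
        retina = cv2.bioinspired.createRetina((frame.shape[1], frame.shape[0]))
    retina.run(frame)
    parvo = retina.getParvo()
    # pyqtgraph expects column-major image data, hence the axis swap.
    img_item.setImage(cv2.cvtColor(parvo, cv2.COLOR_BGR2RGB).swapaxes(0, 1))

timer = QtCore.QTimer()
timer.timeout.connect(update)
timer.start(40)                # about 25 updates per second
app.exec_()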

References

  1. Kolb, Helga. "How the retina works." American Scientist, 91(1): 28-35, 2003.
  2. Dacey, D.M. "Higher order processing in the visual system." In: Ciba Foundation Symposium, vol. 184, Wiley, Chichester, 1994, pp. 12-34.
  3. Wandell, Brian A. Foundations of Vision. Sinauer Associates, 1995.
  4. Pasternak, Tatiana, and William H. Merigan. "Motion perception following lesions of the superior temporal sulcus in the monkey." Cerebral Cortex, 4(3): 247-259, 1994.
  5. Yoonessi, Ali, and Ahmad Yoonessi. "Functional assessment of magno, parvo and konio-cellular pathways; current state and future clinical applications." Journal of Ophthalmic & Vision Research, 6(2): 119-126, 2011.
  6. Dagnelie, G. "Retinal implants: emergence of a multidisciplinary field." Current Opinion in Neurology, 25(1): 67-75, 2012.
  7. Benoit, A., Caplier, A., Durette, B., and Herault, J. "Using Human Visual System modeling for bio-inspired low level image processing." Computer Vision and Image Understanding, 114(7): 758-773, July 2010. ISSN 1077-3142. http://dx.doi.org/10.1016/j.cviu.2010.01.011