
Recurrent Networks


In a recurrent network, the weight matrix for each layer l contains input weights from all other neurons in the network, not just from the neurons in the previous layer. These feedback paths add complexity, which can be an advantage (the network can retain information about earlier inputs) as well as a disadvantage (training becomes harder, and the network dynamics can oscillate or become unstable).
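As a rough illustration, here is a minimal sketch of such a state update, with hypothetical names and sizes not taken from the text: because the weight matrix spans all neurons, it is N x N rather than (previous layer) x (this layer), and every neuron's net input is a weighted sum over the previous activations of every neuron.

```python
import numpy as np

# Minimal sketch (hypothetical names and sizes) of a fully connected
# recurrent update: every neuron's input weights span ALL neurons, so the
# weight matrix W is N x N rather than (previous layer) x (this layer).
N = 5
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(N, N))   # includes the feedback connections
b = np.zeros(N)                          # bias vector
state = np.zeros(N)                      # neuron activations at time t

for x in rng.normal(size=(10, N)):       # a sequence of external inputs
    # each neuron sees the previous activations of every neuron in the net
    state = np.tanh(W @ state + x + b)
```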

Simple Recurrent Networks


Recurrent networks, in contrast to feed-forward networks, have feedback elements that enable signals from one layer to be fed back to a previous layer. A basic recurrent network is shown in figure 6. A simple recurrent network has three layers: an input, a hidden, and an output layer. A set of additional context units is added to the input layer; these units receive input from the hidden layer neurons. The feedback paths from the hidden layer to the context units have a fixed weight of unity.
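A minimal sketch of a single step, assuming hypothetical layer sizes: the context units are appended to the ordinary inputs, and the unity-weight feedback amounts to copying the previous hidden activations into the context units unchanged.

```python
import numpy as np

# Sketch of one simple-recurrent-network step (sizes are hypothetical).
# The context units sit alongside the ordinary inputs, so the hidden layer
# sees an augmented input vector [x, context]; the feedback that fills the
# context units is a plain copy, i.e. a fixed weight of 1.
n_in, n_hidden, n_out = 3, 4, 2
rng = np.random.default_rng(1)
W_hid = rng.normal(scale=0.1, size=(n_hidden, n_in + n_hidden))  # trained
W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))            # trained

context = np.zeros(n_hidden)                 # context units, initially zero
for x in rng.normal(size=(5, n_in)):         # a short input sequence
    augmented = np.concatenate([x, context]) # input layer + context units
    hidden = np.tanh(W_hid @ augmented)
    output = W_out @ hidden
    context = hidden.copy()                  # unity-weight feedback path
```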

A fully recurrent network is one where every neuron receives input from all other neurons in the system. Such networks cannot easily be arranged into layers. A small subset of neurons receives external input, and another small subset produces system output. A recurrent network is known as a symmetric network if the weight between every pair of neurons is the same in both directions:

$w_{ij} = w_{ji} \quad \forall \, i, j$
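In code, the symmetry condition says the full weight matrix equals its own transpose; a small check with illustrative names:

```python
import numpy as np

# The symmetry condition w_ij == w_ji means W equals its transpose.
def is_symmetric(W, tol=1e-8):
    return np.allclose(W, W.T, atol=tol)

W = np.array([[ 0.0, 0.5, -0.2],
              [ 0.5, 0.0,  0.7],
              [-0.2, 0.7,  0.0]])
print(is_symmetric(W))  # True: this network would be symmetric
```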

Elman Network


An Elman network is a special case of a simple recurrent network (SRN) with four layers: an input layer, a hidden layer, a context layer, and an output layer. The context layer feeds the hidden layer at iteration N with a value computed from the output of the hidden layer at iteration N-1, providing a short-term memory effect. Elman networks are used, for example, to predict series of values.
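A minimal, untrained sketch of this structure follows; the layer sizes and the sine series are assumptions for illustration, not taken from the source. At each iteration the hidden layer sees the current input together with the context, and the context is then overwritten with the new hidden activations for the next iteration.

```python
import numpy as np

# Illustrative, untrained Elman network; layer sizes and the input series
# are assumptions for this sketch.
n_in, n_hidden, n_out = 1, 8, 1
rng = np.random.default_rng(2)
W_in  = rng.normal(scale=0.3, size=(n_hidden, n_in))      # trainable (red)
W_ctx = rng.normal(scale=0.3, size=(n_hidden, n_hidden))  # trainable (red)
W_out = rng.normal(scale=0.3, size=(n_out, n_hidden))     # trainable (red)

series = np.sin(np.linspace(0, 4 * np.pi, 50))  # a series to step through
context = np.zeros(n_hidden)                    # context layer at iteration 0
predictions = []
for value in series:
    # the hidden layer at iteration N sees the input plus the context,
    # which holds the hidden output from iteration N-1
    hidden = np.tanh(W_in @ np.array([value]) + W_ctx @ context)
    predictions.append((W_out @ hidden)[0])     # one-step-ahead output
    context = hidden                        # copied back with weight 1 (blue)
```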


 
Figure: Elman network with 2 neurons in each layer. The weights of the blue connections are fixed at 1; the weights of the red connections can be trained.