Learning Paradigms
There are three different learning paradigms that can be used to train a neural network. Supervised and unsupervised learning are the most common, with hybrid approaches between the two becoming increasingly common as well.
Supervised Learning
Supervised learning is a technique where the input and expected output of the system are provided, and the ANN is used to model the relationship between the two. Given an input set x and a corresponding output set y, an optimal rule f is to be determined such that:

y = f(x) + e

Here, e is an approximation error that needs to be minimized. The input values are provided to the network, which produces an output. This output is compared to the desired output, and the resulting error signal is used to update the network weight vectors. Supervised learning is useful when we want the network to reproduce the characteristics of a certain relationship.
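The following Python fragment is a minimal sketch of this procedure, assuming NumPy; the toy data, linear network, and learning rate are illustrative choices, not part of the text. The network output is compared to the desired output, and the error signal drives the weight update.

import numpy as np

rng = np.random.default_rng(0)

# Toy supervised data: inputs x and desired outputs y (here y depends linearly on x).
x = rng.normal(size=(100, 2))
y = x @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=100)

w = np.zeros(2)        # network weight vector
learning_rate = 0.05   # assumed step size

for epoch in range(200):
    y_hat = x @ w                     # network output for the current weights
    e = y_hat - y                     # error signal: output minus desired output
    grad = 2 * x.T @ e / len(x)       # gradient of the mean squared error
    w -= learning_rate * grad         # update the weight vector to reduce the error

print("learned weights:", w)          # approaches [2, -1] as the approximation error shrinks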
Unsupervised Learning
In unsupervised learning, the network is provided with data and a cost function that is a function of the system input and output. The ANN is trained to minimize the cost function by finding a suitable input-output relationship.
Given an input set x and a cost function g(x, y) of the input and output sets, the goal is to minimize the cost function through a proper selection of f (the relationship between x and y). At each training iteration, the trainer provides the input to the network, and the network produces a result. This result is put into the cost function, and the total cost is used to update the weights. Weights are continually updated until the system output produces a minimal cost. Unsupervised learning is useful in situations where a cost function is known, but a data set that minimizes that cost function over a particular input space is not known.
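As a short sketch of this idea, the following Python fragment assumes NumPy and takes a squared reconstruction (autoencoder-style) error as the cost function g(x, y); the data, network shape, and learning rate are illustrative assumptions. No desired outputs are supplied; the weights are updated purely from the cost of each input-output pair.

import numpy as np

rng = np.random.default_rng(1)

# Unlabeled inputs scattered around a single direction in the plane.
x = rng.normal(size=(200, 1)) @ np.array([[2.0, 1.0]]) + 0.1 * rng.normal(size=(200, 2))

w = rng.normal(scale=0.1, size=(2, 1))   # weights of a linear network with a 1-D bottleneck
learning_rate = 0.01

for epoch in range(500):
    y = x @ w @ w.T                       # network output f(x)
    r = y - x                             # residual entering the cost g(x, y) = ||y - x||^2
    cost = np.mean(np.sum(r**2, axis=1))  # total cost over the input set
    # Gradient of the mean cost with respect to w (chain rule through y = x w w^T).
    grad = 2 * (x.T @ r @ w + r.T @ x @ w) / len(x)
    w -= learning_rate * grad             # update weights until the cost is minimal

print("final cost:", float(cost))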