The traditional neural network is a set of connected processing elements. Each of these processing elements (or neurons) performs a mathematical function. These neurons are then grouped together into layers.

A complete network is made up of two or more of these layers.

To operate, preprocessed data is fed to the neurons in the first layer - the input layer. Each neuron's output then becomes input to one or more neurons in the next layer via synapses (or connections).

And this is the important part: each synapse applies a weight to the data as it passes between processing elements. Although the paths of the synapses are generally predefined, the weights on each synapse can be dynamically adjusted until the desired output is generated for a given input - this adjustment process is training.

A neural net is an approximation of the way in which biological neurons work. Each neuron has weighted inputs. If an input is on (firing), its weight is summed with the weights of all the other firing inputs. If this sum exceeds a threshold, the neuron itself will fire (its output will be on).
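As a minimal sketch of this summing-and-threshold rule (the weights and threshold below are arbitrary illustrative values, not taken from any particular net):

    import numpy as np

    def neuron_fires(inputs, weights, threshold):
        """Return 1 (fire) if the weighted sum of firing inputs exceeds the threshold."""
        # inputs holds 0 (off) or 1 (firing) for each incoming connection,
        # so only the firing inputs contribute their weights to the sum.
        return 1 if np.dot(inputs, weights) > threshold else 0

    # Two of the three inputs are firing; their summed weights (0.8) exceed 0.6.
    print(neuron_fires(np.array([1, 1, 0]), np.array([0.5, 0.3, 0.9]), 0.6))  # -> 1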

The original perceptron nets consisted of a single layer of perceptron units. The inputs to the net went to each of the units, and the outputs from the units formed the output of the net. There was no inter-communication between the units.

It can be demonstrated that if a perceptron net can learn a function at all, it is guaranteed to learn it, regardless of the initial state of the net prior to training (the perceptron convergence theorem).

Training is performed by presenting a sample input to the net and obtaining an output. An error is calculated from how the output differs from the desired output for that sample. The weights of the net are adjusted to reduce this error. This is done iteratively until the error is sufficiently low (it will reach zero in single layer perceptron nets, provided the function is learnable). Supplying the desired output for each sample - typically the job of a human observer - is what makes this supervised learning, and it ensures that the net is taught correctly.
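A sketch of this loop for a single perceptron unit, using the classic perceptron learning rule (the learning rate, epoch limit, and the AND task are illustrative choices):

    import numpy as np

    def train_perceptron(samples, targets, lr=0.1, epochs=100):
        """Adjust the weights iteratively to reduce the error on each sample."""
        w = np.zeros(samples.shape[1])  # the initial state does not affect learnability
        b = 0.0                         # bias term: an adjustable threshold
        for _ in range(epochs):
            total_error = 0
            for x, target in zip(samples, targets):
                output = 1 if np.dot(w, x) + b > 0 else 0
                error = target - output    # how the output relates to the desired output
                w += lr * error * x        # adjust the weights to reduce this error
                b += lr * error
                total_error += abs(error)
            if total_error == 0:           # zero error is reachable when the
                break                      # function is linearly separable
        return w, b

    # Logical AND is linearly separable, so the loop converges to zero error.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])
    w, b = train_perceptron(X, y)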

A perceptron net will only be able to learn a function if it is linearly separable. This means that we can draw a straight line (or, in higher dimensions, a hyperplane) through the input space that separates it into the two output classes. AND is linearly separable, for example; XOR, as we shall see, is not.

When it was realised that the single layer net was incapable of solving the XOR problem, faith in the ability of the nets was lost. Experiments showed that if a second row of units was added and the inputs were randomly connected to this pre-processing layer, the net would sometimes learn functions that were not linearly separable.

This showed that the addition of a second layer allowed more complex functions to be learnt but the method of training these nets (multi-layer perceptrons) took a lot longer to emerge.

The output layer of a multilayer perceptron is trained as in the single layer perceptron: the weights are adjusted according to the error in the result. The earlier layers are then adjusted relative to the error in the units that they feed their outputs into, multiplied by the weight of the connecting link.
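A compact sketch of this two-stage update, using sigmoid units trained on XOR (the layer sizes, learning rate, and iteration count are illustrative choices; the hidden-layer rule below matches the description above, with each hidden unit's adjustment taken from the error of the unit it feeds, multiplied by the connecting weight):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # XOR is not linearly separable, so a hidden layer is required.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
    lr = 0.5

    for _ in range(20000):
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Output layer: adjusted directly from the error in the result.
        out_delta = (y - output) * output * (1 - output)

        # Earlier layer: error of the fed units, multiplied by the link weights.
        hid_delta = (out_delta @ W2.T) * hidden * (1 - hidden)

        W2 += lr * hidden.T @ out_delta
        b2 += lr * out_delta.sum(axis=0)
        W1 += lr * X.T @ hid_delta
        b1 += lr * hid_delta.sum(axis=0)

As the next paragraph notes, a run like this can still settle into a local minimum; rerunning with different initial weights can give a different result.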

Multi-layer perceptron nets overcome the linear separability problem, but there is no guarantee that the net will learn a given function. This is due to local minima in the training process: the net can reach a point where any change to the weights increases the error, and so it stops training. Changing the initial weights of the net prior to training can produce a different result once training is complete.

Another style of neural net is the Hopfield net. Hopfield nets can be used as content addressable memory (CAM): if you give a damaged or incomplete piece of data to the net, it can retrieve the clean prototype that the data conforms to. The nets are based on the theory of the properties of magnetic materials (spin systems), which makes them easy to study analytically. The net has an energy level at each point in time. Plotting this energy against the state of the net gives the energy surface, and stable states correspond to local minima in the energy surface.

The output of each unit in a Hopfield net is connected to the input of every other unit (but not to itself). The input to the net is the state (on or off) of each unit. Neurons are selected at random to update their state according to the weighted sum of their inputs, and this continues until no neuron changes its state anymore. This stable state represents the output.
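A minimal sketch of these dynamics, using the common +1/-1 coding for on/off (the random update order and the tie-breaking at zero are illustrative conventions):

    import numpy as np

    def hopfield_settle(W, state, rng):
        """Randomly update units until no unit changes its state."""
        state = state.copy()
        while True:
            changed = False
            for i in rng.permutation(len(state)):       # random update order
                new = 1 if np.dot(W[i], state) >= 0 else -1
                if new != state[i]:
                    state[i] = new
                    changed = True
            if not changed:                             # stable state: the output
                return state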

Training a Hopfield net involves setting the weights so that the stable states represent only those patterns which you wish to store, and no others.
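One standard way to set those stable states is the Hebbian outer-product rule, sketched below (this reuses hopfield_settle from the previous sketch to recover a stored pattern from a damaged copy):

    import numpy as np

    def hopfield_store(patterns):
        """Build weights whose stable states include the given +1/-1 patterns."""
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)     # Hebbian outer-product rule
        np.fill_diagonal(W, 0)      # no unit connects to itself
        return W / len(patterns)

    proto = np.array([1, -1, 1, -1, 1, -1])
    W = hopfield_store(proto[None, :])
    noisy = proto.copy()
    noisy[0] = -noisy[0]            # damage one unit of the pattern
    restored = hopfield_settle(W, noisy, np.random.default_rng(0))  # == proto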

Hopfield nets can also be used to solve optimisation problems, such as the Travelling Salesman Problem. Each stable state corresponds to a possible solution, and the lower a stable state lies on the energy surface, the better the solution it represents.

A feedforward neural network is a directed acyclic graph, a network with two or more layers of nodes in which the signals travel unidirectionally, always from a layer to the next highest layer. If the network is fully connected, each node in a given layer has a weight connecting it to every node in the next layer. It is unusual and much more complicated to train a network if a node can be connected to a node that is not in the immediately succeeding layer.

Fully connected feedforward neural networks are useful for pattern classification and are often trained using the error backpropagation algorithm.

Introduction to Neural Networks

Neural networks are systems loosely modeled on the human brain. They are an attempt to simulate within specialized hardware or, more commonly, sophisticated software, the multiple layers and interactions of simple processing elements called neurons. Each neuron is linked to certain of its neighbours with varying coefficients of connectivity that represent the strengths of these connections. Learning is accomplished by adjusting these strengths to cause the overall network to output appropriate results.

Each neuron in a network either fires or does not, depending on whether the sum of the inputs into it is greater than zero. This sum is worked out by multiplying the output (either 0 or 1) of each neuron leading into it by the weighting assigned to that path. Determining these weightings is a major part of designing neural networks.
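In code the rule is a one-liner (the inputs and weights here are placeholders):

    def fires(inputs, weights):
        """Fire (output 1) when the weighted sum of incoming outputs exceeds zero."""
        return 1 if sum(o * w for o, w in zip(inputs, weights)) > 0 else 0

    print(fires([1, 0, 1], [0.4, 0.9, -0.2]))  # 0.4 - 0.2 = 0.2 > 0, so it fires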


Designing a Neural Network

Designing a neural network consists of:

  • Arranging neurons in various layers.
  • Deciding the type of connections among neurons for different layers, as well as among the neurons within a layer.
  • Deciding the way a neuron receives input and produces output.
  • Determining the strengths of the connections within the network by allowing the network to learn appropriate connection weights from a training data set.

Layers

Biologically, neural networks are constructed in a three-dimensional way from small components. These neurons seem capable of nearly unrestricted interconnection, something which is not true of any man-made network. Artificial neural networks are simple clusterings of very primitive artificial neurons. This clustering occurs by creating layers, which are then connected to one another. How these layers connect may also vary. In essence, all artificial neural networks have a similar topological structure: some of the neurons interface with the real world to receive their inputs, other neurons provide the real world with the network's outputs, and all the rest are hidden from view but are nevertheless an integral part of the network's function.

The input layer consists of neurons that receive input from the external environment. The output layer consists of neurons that communicate the output of the system to the user or external environment. There are usually a number of hidden layers between these two layers with each additional layer adding complexity.

When the input layer receives an input, its neurons produce output, which becomes the input to the next layer of the system. The process continues until a certain condition is satisfied or until the output layer is reached and its neurons fire their output to the external environment.
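A sketch of this layer-to-layer propagation with simple threshold units (the weights and the 3-2-1 layout are invented for illustration):

    import numpy as np

    def forward(layers, x):
        """Propagate an input through successive layers to the output layer."""
        for W in layers:                    # one weight matrix per layer
            x = np.where(W @ x > 0, 1, 0)   # each layer's output feeds the next
        return x

    # Hypothetical net: 3 input neurons, 2 hidden neurons, 1 output neuron.
    layers = [np.array([[0.2, -0.5, 0.7],
                        [0.6,  0.1, -0.3]]),
              np.array([[0.9, -0.4]])]
    print(forward(layers, np.array([1, 0, 1])))  # -> [1]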

Trial and error is unfortunately often the best method for determining the number of hidden neurons the network needs to perform its function well. If you increase the number of hidden neurons too much you will get overfitting, that is, the net will have problems generalizing: the training data set is memorized, making the network useless on new data sets.


Communication and types of connections

Neurons are connected via a network of paths carrying the output of one neuron as input to another neuron. These paths are normally unidirectional, although it is perfectly possible to have a two-way connection between two neurons. A neuron receives input from many neurons but produces a single output, which is communicated to other neurons.

Each neuron in a layer may communicate with the others, or it might have no intra-layer connections at all. The neurons of each layer are always connected to the neurons of at least one other layer.

There are two types of connection between two neurons: excitatory and inhibitory. In an excitatory connection, the output of one neuron increases the activity or action potential of the neuron to which it is connected. When the connection between two neurons is inhibitory, the output of the sending neuron reduces the action potential of the receiving neuron. The first corresponds to a positive weight between the neurons, the second to a negative one.


Inter-layer connections

In the most simple neural networks, neurons in each layer communicate only with those in other layers. There are six major types of inter-layer connection, each useful in particular circumstances.

  • Fully connected - Each neuron on the first layer is connected to every neuron on the second layer.

  • Partially connected - A neuron of the first layer does not have to be connected to all neurons on the second layer.

  • Feed forward - The neurons on the first layer send their output to the neurons on the second layer, but they do not receive any input back from the neurons on the second layer.

  • Bi-directional - There is another set of connections carrying the output of the neurons of the second layer into the neurons of the first layer. Feed forward and bi-directional connections can be fully or partially connected.

  • Hierarchical - If a neural network has a hierarchical structure, the neurons on each layer may only communicate with neurons on the next layer down.

  • Resonance - The layers have bi-directional connections, and they can continue sending messages across the connections a number of times until a certain condition is achieved.

Intra-layer connections

In more complex structures the neurons communicate among themselves within a layer. There are two types of intra-layer connections.

  • Recurrent - The neurons within a layer are fully or partially connected to one another. After these neurons receive input from another layer, they communicate their outputs with one another a number of times before they are allowed to send their outputs to another layer. Generally some condition among the neurons of the layer must be met before they communicate their outputs to another layer.

  • On-center/off-surround - A neuron within a layer has excitatory connections to itself and its immediate neighbours, and inhibitory connections to the other neurons. One can imagine this type of connection as a competitive group of neurons: each group excites itself and its members and inhibits all members of other groups. After a few rounds of signal interchange, the neurons with an active output value win and are allowed to update their group's weightings (a crude sketch of this competition follows below).
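A crude caricature of that competition, simplified to pure self-excitation plus uniform inhibition of all other units (the rates, round count, and starting activities are arbitrary):

    import numpy as np

    def compete(activity, rounds=25, excite=0.1, inhibit=0.2):
        """Each unit excites itself and inhibits the rest; repeated rounds
        of signal interchange let the most active unit win."""
        a = activity.astype(float)
        for _ in range(rounds):
            a = a + excite * a - inhibit * (a.sum() - a)
            a = np.clip(a, 0.0, 1.0)    # keep activities in a sane range
        return a

    print(compete(np.array([0.50, 0.55, 0.40])))  # -> [0. 1. 0.]: one unit wins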
