In neural computing, we approximate biological neurons with much simpler artificial neurons. These artificial neurons mimic the essential functionality of their biological counterparts and, in turn, form the basis of any neural network model.

Neural computing is said to have started with the work of McCulloch & Pitts (published in 1943), who set out to explain nervous systems in logical terms. They reduced the electro-chemical complexity of the biological neuron to the following form (biological equivalent in parentheses):

Each processing element, PE (cell body), sums the binary inputs (0/1 or ±1) coming through its input channels (dendrites). If this summation exceeds a threshold (the activation energy), the PE responds by firing a binary value through its output channels (axon, terminal fibres); otherwise, the PE remains inactive. Each output channel leads to another input channel via a weighted connection (synapse), which modifies the binary signal passing to the next neuron.

The M&P model describes a typical artificial neuron. Note that the M&P neuron works with binary (or bipolar) values for inputs and outputs, demonstrating the ‘all-or-nothing’ firing mechanism of biological neurons.
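As a rough illustration (not part of the original description), an M&P-style neuron can be sketched in a few lines of Python: binary inputs are summed and the neuron fires only if the sum reaches its threshold. The function name and example values below are assumptions made for the sketch.

```python
# A minimal sketch of an M&P-style neuron: binary inputs are summed and
# compared against a fixed threshold ("all-or-nothing" firing).
# The function name and the example values are illustrative assumptions.

def mp_neuron(inputs, threshold):
    """Return 1 if the sum of the binary inputs reaches the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# Example: a 2-input neuron with threshold 2 behaves like a logical AND.
print(mp_neuron([1, 1], threshold=2))  # prints 1 (fires)
print(mp_neuron([1, 0], threshold=2))  # prints 0 (inactive)
```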

Since the time of M&P neurons, refinements have been made to this model. In order to define precisely our mathematical model of a neuron, let us first state the basic elements that must be incorporated:

1. A set of connecting links (the biological synapses) that connect and transfer the outputs of neurons to the inputs of other neurons. Each link has a corresponding synaptic weight, which alters the value (or strength) of the signal passing through it. For instance, the output signal from neuron xj connected to the input of neuron xi passes through a link with a corresponding weight wij (note that the first subscript names the neuron the link feeds into, and the second names the neuron the link comes from). The value passing from xj is multiplied by wij before being used as an input to xi (this modified value is hence referred to as a weighted input). Weights may take positive or negative values, and links between neurons are usually identified by their weights (for instance, the link from x1 to x2 would be w21).

2. Each neuron contains a linear combiner, which adds together all the weighted input values fed into it. This sum is commonly known as the induced local field, u, of the neuron.

3. The induced local field is passed through the neuron's activation function, f(x), which may be any mathematical function. It serves to limit the amplitude of the neuron's output (usually to the range [0,1] or [-1,1], and sometimes to a simple binary/bipolar value); a few common choices are sketched below. The final value, f(u), constitutes the neuron's output.
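To make point 3 concrete, here is a small sketch (illustrative only; the function names are assumptions, not taken from the text) of three activation functions commonly used to bound a neuron's output amplitude:

```python
import math

def step(u):
    """Binary threshold: output in {0, 1}."""
    return 1 if u >= 0 else 0

def sigmoid(u):
    """Logistic sigmoid: output in the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-u))

def bipolar_tanh(u):
    """Hyperbolic tangent: output in the range (-1, 1)."""
    return math.tanh(u)

print(step(0.3), sigmoid(0.3), bipolar_tanh(0.3))
```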

This particular neuron, xk, has n inputs (x1 ... xn), which are modified by the respective n weights (wk1 ... wkn). These weighted values are then summed in the linear combiner. The resulting value, the induced local field, uk, is then passed to the activation function, f(x), before being fed out as the output of the neuron, yk.

The function of each neuron can therefore be expressed as:

uk = SUM(wkj xj), summing over j = 1 to n

yk = f(uk)

uk is the induced local field of the neuron
wkj is the weight of the connection from input xj to the neuron
n is the total number of inputs.
f(x) is the activation function of the neuron.
yk is the output from the neuron
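The two equations above can be sketched directly in code. The following is an illustrative example only (the function name, weights and input values are assumptions, not taken from the text):

```python
import math

def neuron_output(inputs, weights, f):
    """Compute yk = f(uk), where uk = sum of wkj * xj over all inputs j."""
    u_k = sum(w * x for w, x in zip(weights, inputs))
    return f(u_k)

# Example: a neuron with three inputs and a sigmoid activation function.
sigmoid = lambda u: 1.0 / (1.0 + math.exp(-u))
y_k = neuron_output(inputs=[0.5, -1.0, 0.25],
                    weights=[0.8, 0.1, -0.4],
                    f=sigmoid)
print(y_k)
```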

In some neuronal models, a bias, bk, is added to the combined weighted input of the neuron (i.e. it is added at the summing stage). Using a bias is therefore equivalent to applying an affine transformation to the output of the linear combiner, which becomes the induced local field, uk, of the neuron.

The model for a neuron with bias is:

uk = SUM(wkj xj) + bk, summing over j = 1 to n

yk = f(uk)

Using bias allows the neuron to increase or decrease the net input for its activation function, depending on whether bk > 0 or bk < 0 respectively.
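As a final illustrative sketch (again, the function name and values are assumptions), the bias simply shifts the net input before the activation function is applied:

```python
import math

def neuron_output_with_bias(inputs, weights, bias, f):
    """Compute yk = f(uk), where uk = (sum of wkj * xj) + bk."""
    u_k = sum(w * x for w, x in zip(weights, inputs)) + bias
    return f(u_k)

sigmoid = lambda u: 1.0 / (1.0 + math.exp(-u))
# A positive bias increases the net input, pushing the sigmoid output up;
# a negative bias decreases it.
print(neuron_output_with_bias([0.5, -1.0], [0.8, 0.1], bias=0.5, f=sigmoid))
print(neuron_output_with_bias([0.5, -1.0], [0.8, 0.1], bias=-0.5, f=sigmoid))
```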
