Learning algorithm for neural networks, named in honour of the neuropsychologist Donald Hebb:
    When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

- Hebb, The Organization of Behavior

What this means is that the weight of the synapse between two neurons increases if the neurons are activated simultaneously, and decreases if they are activated asynchronously.

H E B B I A N   L E A R N I N G

a principle of synaptic modulation.

In 1949, in his book 'The Organization of Behavior', Donald Hebb articulated a principle of synaptic modulation that would kickstart research into neural nets. His postulate specified how the strength of a connection between two neurons should be altered according to how the two are firing at the time. Hebb's original principle was essentially this: if one neuron is stimulating some other neuron, and at the same time that receiving neuron is also firing, then the strength of the connection between the two is increased (and vice versa: if one is firing and the other isn't, the connection strength is decreased).

When looking at Hebb's principle from the point of view of artificial neurons and artificial neural networks, we can describe it as a method of determining how to alter the weights between model neurons. The weight between two neurons increases if the two neurons activate simultaneously; it is reduced if they activate separately. Nodes which tend to be either both positive or both negative at the same time will develop strong positive weights, while those which tend to be opposite will develop strong negative weights. To put it more simply: neurons that fire together, wire together. This original principle is perhaps the simplest form of weight selection there is. Today the term 'Hebbian learning' generally refers to some form of mathematical abstraction of the original principle proposed by Hebb. Essentially, in Hebbian learning the weights between the learning nodes are adjusted so that each weight better represents the relationship between these nodes.
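In its simplest mathematical form (one common abstraction of the principle; the notation here is a conventional choice rather than anything taken from Hebb or from the sources below), the change in the weight w of a connection between a node with activation x and a node with activation y is

    Δw = η · x · y

where η is a small positive learning rate. When x and y have the same sign the weight grows, and when they have opposite signs it shrinks, which is exactly the 'fire together, wire together' behaviour described above.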

Hebbian learning is fairly simple; it can be easily coded into a computer program and used to update the weights for some network, as the sketch below illustrates. Its simplicity means there are only a few applications for plain old Hebbian learning; however, many more complicated learning methods can be considered somewhat Hebbian in nature.
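As a rough illustration of how little code this takes (a minimal sketch only; the function name, the learning rate and the example numbers are made up for this node rather than taken from any source), a plain Hebbian update in Python might look something like this:

    # Plain Hebbian learning: grow each weight in proportion to the product
    # of its input activation and the node's output activation.
    def hebbian_update(weights, inputs, output, eta=0.1):
        return [w + eta * x * output for w, x in zip(weights, inputs)]

    # Example: two inputs feeding one node that happens to fire (output = 1).
    weights = [0.0, 0.0]
    for _ in range(5):
        weights = hebbian_update(weights, inputs=[1, 0], output=1)
    print(weights)  # the weight from the active input has grown; the other stays at zero

Nothing here depends on what the 'right' answer is; the weights simply track whichever inputs happen to be active when the node fires, which is both the charm and the limitation of the plain rule.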

A good example of Hebbian learning in a biological system is one based around a Pavlovian-style experiment; it focuses on associative learning. In the example we consider a fluffy little bunny rabbit responding to certain stimuli: the unconditioned stimulus of a puff of air elicits the unconditioned response of an eye blink, and in addition the conditioned stimulus of an aural tone will eventually also generate a blink as a response. To begin with, if subjected to a puff of air towards its eye the rabbit will quite naturally blink; at this point it won't blink when it hears the tone, however, as it doesn't yet care about that. But if we pair the tone up with the air puff several times, i.e. activate them both at the same time, the animal becomes conditioned to treat the sound as something else worth blinking at. The neuron for the tone and the neuron for blinking have both been firing at the same time, strengthening the connection between the two; from this point on the animal will blink whenever it hears the tone, even if there is no puff of air to match. The neurons have 'learnt' an association between the two events (sound and blink).
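A toy simulation of this conditioning (again just a sketch with made-up numbers, not a serious model of the rabbit's nervous system) can be built from the same kind of update: a 'tone' input and an 'air puff' input both feed a single 'blink' node, and the tone's weight is strengthened whenever the tone and the blink occur together:

    # The air puff starts with a strong innate weight; the tone starts with none.
    w_puff, w_tone = 1.0, 0.0
    eta = 0.2

    def blink(puff, tone):
        return 1 if (w_puff * puff + w_tone * tone) > 0.5 else 0

    # Pair the tone with the air puff several times. Whenever the tone input
    # and the blink output are active together, the tone weight grows
    # (the innate puff weight is left fixed for simplicity).
    for _ in range(10):
        out = blink(puff=1, tone=1)
        w_tone += eta * 1 * out

    print(blink(puff=0, tone=1))  # prints 1: the tone alone now triggers a blink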

Now let us consider Hebbian learning in a more artificial system (e.g. one composed of McCulloch-Pitts neurons). If the input to one particular MP neuron were repeatedly and persistently 1, with the output from that same neuron consistently 1 as well, the weight on that synapse would be strengthened. Such learning is useful, as these kinds of rules can 'teach' a neuron how to produce a desired output from given inputs. As cells' activity becomes more and more correlated, the synapses connecting them become stronger. These changes in synapses would appear to be the basis of memory; we can conceive of a neural network as a collection of cells that together represent the storage of a memory, by serving as a path of least resistance when a distinct experience recurs. [Cohen]
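To put some made-up numbers on this: with a learning rate of 0.1 and both input and output sitting at 1, each presentation adds 0.1 × 1 × 1 = 0.1 to the weight, so ten such presentations strengthen the synapse by a full 1.0; persistent correlated activity steadily reinforces the connection.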

Hebbian learning is similar in biological and artificial systems, which is not surprising, as the artificial version is a model based upon observations of the natural one. As such, we can see synaptic plasticity in both; that is, the capacity for learning and emergent behaviour. Hebb's key idea was that stimuli and responses are causally related, and that causal interactions between pre- and post-synaptic cells should enhance the connections between them. [Cohen] We can see this in both biological systems and artificial systems of our own design. However, the complexity and subtlety of the weight alteration is far more apparent in natural systems, as ours are more often than not simple models.

NOTE: The Hebbian learning rule works well as long as all the input patterns are orthogonal or uncorrelated. The requirement of orthogonality places serious limitations on the Hebbian learning rule. A more powerful learning rule is the delta rule, which utilizes the discrepancy between the desired and actual output of each output unit to change the weights feeding into it.
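For comparison, a sketch of the delta rule in the same style as the snippets above (illustrative only; names and numbers are made up): the update is driven by the error between the desired and the actual output rather than by the output alone:

    # Delta rule: adjust each weight in proportion to the error
    # (desired minus actual output) times the corresponding input.
    def delta_update(weights, inputs, desired, actual, eta=0.1):
        error = desired - actual
        return [w + eta * error * x for w, x in zip(weights, inputs)]

    # If the unit should have fired but didn't (desired=1, actual=0),
    # the weights on the active inputs are nudged upwards.
    print(delta_update([0.0, 0.0], inputs=[1, 0], desired=1, actual=0))

Once the output matches the desired value the error is zero and the weights stop changing, which is part of why the delta rule copes better with correlated, non-orthogonal input patterns than the plain Hebbian rule does.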

Bibliography

Cohen, Netta. (2003). Artificial Neural Networks: The Rise & Fall of the Perceptron. Lecture Slides.
Retrieved 16/03/2003 from: http://www.comp.leeds.ac.uk/ar23/syllabus/topic2/ar23_lecture4.ppt

Knott, Alistair & McCallum, Simon. (2002). The Real Brain and How (we think) it Works.
Retrieved 16/03/2003 from: www.cs.otago.ac.nz/coursework/cosc343/Lectures/PDF/Lec6.pdf


version 0.1 / 17032003
  • created node from piece of university work
  • need to check a few facts
