Adaptive Resonance Theory (ART) is a theory of neural network representation developed by Stephen Grossberg and Gail Carpenter of Boston University between 1976 and 1986. Simpler neural networks had been developed in the thirty years before, but none of them had any inherent way to combine the plasticity and stability that learning requires: none could remain plastic enough to learn from significant stimuli while still filtering out irrelevant events, and none could avoid washing away previously learned knowledge when new knowledge was learned (the stability-plasticity dilemma). ART was developed as a response to these problems: a mathematically simple representation which still allows for both stability and plasticity.

The basic model resembles the illustrations below, all copied from a 1986 paper by Grossberg. The attentional subsystem is where the system's long-term memory (LTM) and short-term memory (STM) are held, and the orienting subsystem serves to stabilize resonance in the STM and learning in the LTM. Because of their interaction, the system as a whole can be called self-organizing or unsupervised, meaning that it will learn to categorize different inputs by itself rather than needing explicit training for each input -- this is the aforementioned plasticity in action.

Interestingly, and no doubt because this model is grounded in biological models, its combination of bottom-up and top-down processing looks remarkably similar to what some areas of the brain do. Even its fields and subsystems could roughly correspond to the layers of the cerebral cortex (see my writeup there), were one looking out for the relationship.



An overall illustration of the system:

+-------------------------------------------+---------------+
|           Attentional Subsystem           |   Orienting   |
|                                           |   Subsystem   |
|      + +----+  + +-------------------+    |               |
|   +---#|gain|---#|        F2         |#-----------+       |
|   |    +----+    +-------------------+    |  reset|       |
|   |                    |       #          |       |       |
|   |       +------------+      +|Y         |       |       |
|   |       |            |       |          |       |       |
|   |      -|           +|X      |          |       |       |
|   |       #            #       |          |       |       |
|   |  + +----+  + +-------------------+    |   -  +-+      |
|   +---#|gain|---#|        F1         |----------#|A|      |
|   |    +----+    +-------------------+    |      +-+      |
|   |                        #              |       #       |
|   |                       +|              |      +|       |
|   |                        |              |       |       |
|   +------------------------+----------------------+       |
+----------------------------|--------------+---------------+
                             |                               
                           Input                             

Legend:

  • # indicates an arrow leading into the location it contacts, through which a signal passes. A + near it indicates an excitatory connection, whereas a - indicates inhibition.
  • F1 and F2 are fields of neurons, with as many layers as the implementation calls for. They serve as the short-term memory by holding the pattern currently being evaluated.
  • X and Y are the high-bandwidth connections between the two fields, which function (by holding patterns of weights) as the system's long-term memory.
  • gain increases the strength of the signal to the point that it can excite the entire network; inhibition on the gain blocks its effect. Activation of the gain is nonspecific -- any pattern coming through Input will do.
  • A and reset form a special signal line which tells F2 to drop the pattern currently under consideration, and in fact to inhibit that pattern from being triggered again.
  • Input is, of course, the incoming signal, often first normalized by a preprocessing field called F0. (All of these pieces are sketched as code just below.)
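
As a minimal sketch, assuming the binary (ART 1) variant of the model, the legend's moving parts map onto a few variables. All of the names here are mine, not Grossberg's:

    # The legend's moving parts as plain data (ART 1 style, binary patterns).
    input_pattern     = [1, 0, 0, 1, 1, 0, 1, 1]   # Input, after F0 normalization
    f1_activity       = list(input_pattern)        # F1: STM holding the current pattern
    bottom_up_weights = [[1.0] * 8]                # X: adaptive LTM weights, F1 -> F2
    top_down_weights  = [[1] * 8]                  # Y: adaptive LTM weights, F2 -> F1
    vigilance         = 0.75                       # A's matching criterion, 0 < rho <= 1
    inhibited         = set()                      # F2 categories knocked out by reset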



Looking at a few steps of the model's execution will illustrate how this all works together to let the system function and learn.

      + +----+  + +-------------------+                   
   +---#|gain|---#|     11011001      |                   
   |    +----+    +-------------------+                   
   |                            #                         
   |                           +|                         
   |                            |                         
   |                            |                         
   |                            |                         
   |  + +----+  + +-------------------+       -  +-+      
   +---#|gain|---#|     10011011      |---------#|A|      
   |    +----+    +-------------------+          +-+      
   |                        #                     #       
   |                       +|                    +|       
   |                        |                     |       
   +------------------------+---------------------+       
                            |                             
                          Input                           

In the first stage, an input (in this case an arbitrary binary code, 10011011) comes through the input line, generating activity in F1. It also activates the uninhibited gain control. This is important, because F1 operates by the 2/3 rule, which states that for a node to have any output, two of its three excitatory inputs must be active. Notably, this keeps the system from "hallucinating" when only feedback from F2 is present. If all of that didn't make much sense, keep reading, and it will in context.
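
The 2/3 rule is simple to state in code. A minimal sketch for a single F1 node, assuming binary signals (the function name is mine):

    # Sketch of the 2/3 rule for one F1 node (ART 1 style, binary signals).
    def f1_active(input_bit, top_down_bit, gain):
        """The node fires only if at least 2 of its 3 excitatory inputs are on."""
        return int(input_bit + top_down_bit + gain >= 2)

    assert f1_active(0, 1, 0) == 0  # feedback alone: no output, no hallucination
    assert f1_active(1, 0, 1) == 1  # input plus gain: the pattern registers
    assert f1_active(1, 1, 0) == 1  # input matching feedback (gain inhibited): fires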

The pattern's activity on F1 inhibits A's reset response and generates a signal from F1 to F2. Because the connection is a neural net -- and one responsible for long-term memory, as you might recall -- that signal is multiplied by the connection weights, in a form of bottom-up processing. The multiplication results in a new pattern of activity at F2, represented by 11011001 above.
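
In code, this bottom-up step is a weighted sum per F2 node followed by a winner-take-all choice. A sketch with invented weights (real ART 1 uses a particular normalized choice function, elided here):

    # Bottom-up pass: multiply the F1 pattern through the bottom-up LTM weights,
    # then let F2 pick the most strongly activated category. Weights are invented.
    f1_pattern = [1, 0, 0, 1, 1, 0, 1, 1]          # the input 10011011

    bottom_up_weights = [                           # one weight vector per F2 node
        [0.9, 0.1, 0.1, 0.8, 0.9, 0.1, 0.1, 0.9],  # category 0
        [0.2, 0.9, 0.1, 0.2, 0.1, 0.9, 0.8, 0.1],  # category 1
    ]

    activations = [sum(w * x for w, x in zip(weights, f1_pattern))
                   for weights in bottom_up_weights]
    winner = max(range(len(activations)), key=lambda j: activations[j])
    print(winner)   # 0 -- roughly [3.6, 1.4], so category 0 matches the input best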

      + +----+  + +-------------------+                   
   +---#|gain|---#|     11011001      |                   
   |    +----+    +-------------------+                   
   |                    |       #                         
   |       +------------+      +|                         
   |       |            |       |                         
   |      -|           +|       |                         
   |       #            #       |                         
   |  + +----+    +-------------------+       -  +-+      
   +---#|gain|    |     10011011      |---------#|A|      
   |    +----+    +-------------------+          +-+      
   |                        #                     #       
   |                       +|                    +|       
   |                        |                     |       
   +------------------------+---------------------+       
                            |                             
                          Input                           

On the next step F2 also has its 2/3 rule satisfied, so it puts out a signal representing top-down processing. If the top-down pattern doesn't match the input, a new pattern is generated in F1 which keeps only the features the input and the known pattern have in common. Notably, this means that activity leaving F1 will drop, leading to less inhibition on A. Signal from the F2 output also inhibits the F1 gain, so that field has less of a chance of satisfying its 2/3 rule.
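
For binary patterns, this mismatch step reduces to a bitwise AND: F1 keeps only the features the input and the top-down template share. A sketch, with the template invented for illustration:

    # Top-down match: F1 settles on the intersection of the input and the
    # template read out of the top-down LTM. The template here is made up.
    input_pattern     = [1, 0, 0, 1, 1, 0, 1, 1]   # 10011011
    top_down_template = [1, 1, 0, 1, 1, 0, 0, 1]   # hypothetical learned template

    matched = [i & t for i, t in zip(input_pattern, top_down_template)]
    print("".join(map(str, matched)))              # 10011001 -- one feature lost

    # Fewer active F1 nodes means less inhibition reaching A,
    # which is what can trip the reset in the next step.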

At this point, the ball is in A's court. A decides whether or not to send the reset signal to F2 based on the balance between the excitation it receives from the input and the inhibition it receives from F1. A has a vigilance criterion which must be met by the signal strength from F1 -- if the criterion is not met (i.e. the signal is too weak, meaning the match was poor) the reset signal is sent; otherwise the activation resonates between F1 and F2, and the input is considered to have been accurately categorized. For the sake of illustration, we'll say that the criterion is not met, so the reset line is triggered.
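
In the usual ART 1 formulation, this decision compares the fraction of the input that survives the match against a vigilance parameter rho between 0 and 1. A sketch (the patterns and rho are invented):

    # Vigilance test at A: compare the F1 activity surviving the match
    # against the raw input activity. rho is the vigilance parameter.
    def reset_needed(matched, input_pattern, rho):
        """True if the match is too poor, so F2 must be reset."""
        return sum(matched) / sum(input_pattern) < rho

    input_pattern = [1, 0, 0, 1, 1, 0, 1, 1]
    matched       = [1, 0, 0, 1, 1, 0, 0, 1]       # one feature lost in the match

    print(reset_needed(matched, input_pattern, rho=0.9))   # True: 4/5 = 0.8 < 0.9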

With the reset line triggered, the previous F2 pattern is inhibited, keeping it from being triggered again until the system is back in a stable state. Without inhibition, the weights would cause the same pattern to be triggered over and over, putting the system into an infinite loop.
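
One simple way to realize this inhibition in code is to drop the reset category out of the winner-take-all competition for the remainder of the trial. A sketch:

    # Reset: knock the losing category out of the running for this input,
    # so the winner-take-all choice cannot pick it again this trial.
    def choose(activations, inhibited):
        """Winner-take-all over categories not yet reset; None if none remain."""
        candidates = [j for j in range(len(activations)) if j not in inhibited]
        return max(candidates, key=lambda j: activations[j]) if candidates else None

    activations = [3.6, 1.4, 0.9]                  # hypothetical F2 activations
    inhibited = set()

    winner = choose(activations, inhibited)        # category 0 wins at first
    inhibited.add(winner)                          # reset fires: inhibit it
    print(choose(activations, inhibited))          # 1 -- the search moves on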

      + +----+  + +-------------------+                   
   +---#|gain|---#|     10001011      |                   
   |    +----+    +-------------------+                   
   |                            #                         
   |                           +|                         
   |                            |                         
   |                            |                         
   |                            |                         
   |  + +----+  + +-------------------+       -  +-+      
   +---#|gain|---#|     10011011      |---------#|A|      
   |    +----+    +-------------------+          +-+      
   |                        #                     #       
   |                       +|                    +|       
   |                        |                     |       
   +------------------------+---------------------+       
                            |                             
                          Input                           

After the reset occurs, a new pattern is searched for and found at F2, and a new cycle begins. If A's vigilance criterion is never met by any of the patterns generated, then a new category must be established in the bottom-up and top-down connections. As those are the system's LTM, the new pattern will be committed to them, and will thus be identifiable in future inputs. This part of the system isn't represented in the illustration, but can be thought of as a line measuring the inhibition of F2. When that field is completely inhibited, i.e. there are no more possible patterns to match, the line goes active, signalling the system to commit the pattern to empty nodes on the two main connections.
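
Tying the steps together, a whole search-and-learn cycle in the ART 1 style comes out to surprisingly little code. This is a sketch under fast-learning assumptions (a matched template shrinks to its intersection with the input, and a brand-new category simply copies the input), not Carpenter and Grossberg's exact equations:

    # A compact ART 1-style search-and-learn cycle (fast learning).
    # The choice and update rules are simplified for illustration.
    def art1_present(input_pattern, templates, rho=0.75):
        """Present one binary input; return the category index it settles into."""
        inhibited = set()
        while True:
            candidates = [j for j in range(len(templates)) if j not in inhibited]
            if not candidates:
                # F2 fully inhibited: commit the input as a brand-new category.
                templates.append(list(input_pattern))
                return len(templates) - 1
            # Bottom-up choice: prefer the template sharing the most features.
            j = max(candidates, key=lambda k: sum(
                t & i for t, i in zip(templates[k], input_pattern)))
            # Top-down match and vigilance test.
            matched = [t & i for t, i in zip(templates[j], input_pattern)]
            if sum(matched) / sum(input_pattern) >= rho:
                templates[j] = matched             # resonance: learn the refinement
                return j
            inhibited.add(j)                       # reset: search the next category

    templates = []
    print(art1_present([1, 0, 0, 1, 1, 0, 1, 1], templates))  # 0: new category
    print(art1_present([1, 0, 0, 1, 1, 0, 0, 1], templates))  # 0: matched and refined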
