In Fluid Concepts and Creative Analogies, Douglas Hofstadter describes a very interesting finding from the Copycat program, which simulated how human intelligence perceives analogies in a restricted domain of alphabetic strings: the program was able to come up with surprisingly deep analogies. For instance, given the analogy ABC:XYZ, the string AABC had several possible counterparts. XXYZ was the answer most frequently picked by people given this analogy, but a better answer, XYZZ, was picked by some of the participants, exploiting a relationship between A as the first letter of the alphabet and Z as the last. The program does not find XYZZ as frequently as it finds XXYZ, but when it does find it, it recognizes it as the better answer.
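The contrast between the two answers can be shown in toy form. This is only an illustration of the two readings, not a sketch of the program's actual architecture, and the function names are mine:

```python
# Toy sketch of the two readings of ABC:XYZ :: AABC:?
# (purely illustrative; nothing here resembles the real program)

def shallow(target):
    # AABC doubled its first letter, so double XYZ's first letter too.
    return target[0] + target

def deep(target):
    # A is the alphabet's first letter and Z its last: under that mirror,
    # "double the first letter" maps to "double the last letter".
    return target + target[-1]

print(shallow("XYZ"))   # XXYZ
print(deep("XYZ"))      # XYZZ
```

The shallow reading treats the strings as interchangeable lists of positions; the deep reading notices the roles the strings play in the alphabet as a whole.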

The question that remains is: can the computer only come up with insights that are hidden by the prejudice of the operator, so that the connections it makes merely mirror what we as programmers already knew, or can the computer actually think?

Several pieces of research seem to hint that any "insight" a computer has had (until now, at least) is a function of its input. Certain analogy programs from the mid-1980s to the mid-1990s (SME and ACME), which were supposedly gifted at finding deep relationships, are clear examples of this phenomenon. One celebrated result, mapping the flow of liquid from a full container into a less full container onto the flow of heat from a hotter object to a colder one, shows this "intuition" mirroring its input. The input was essentially a graph with labeled vertices and edges; the program matched vertices and edges that had the same label, then worked out how the remaining points of the two graphs were related. In the example above, the edges labeled "flow" on the two graphs were matched, as were the edges labeled "more," corresponding to volume and heat, respectively. The claim that the program "understood" the relationship was clearly ridiculous, and in fact counterproductive: the program was an application of a very important idea in pattern recognition, but one only peripherally useful in AI research.
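The label-matching just described can be reduced to a few lines. The sketch below is a minimal reconstruction of the idea, not SME's or ACME's actual algorithm, and the domain encodings are assumptions chosen to mirror the water/heat example:

```python
# Toy illustration: each domain is a set of labeled relations, and the
# "analogy" falls out of matching identical labels.

def match_by_label(source, target):
    """Pair relations that share a label and record which entities
    thereby correspond."""
    correspondences = {}
    for label, s_args in source:
        for t_label, t_args in target:
            if label == t_label:
                for s_ent, t_ent in zip(s_args, t_args):
                    correspondences[s_ent] = t_ent
    return correspondences

# Water domain: water flows from the fuller beaker into the vial.
water = [
    ("flow", ("beaker", "vial", "water")),
    ("more", ("beaker", "vial")),          # more volume
]

# Heat domain: heat flows from the hot coffee into the ice cube.
heat = [
    ("flow", ("coffee", "ice_cube", "heat")),
    ("more", ("coffee", "ice_cube")),      # more heat
]

mapping = match_by_label(water, heat)
print(mapping)
# {'beaker': 'coffee', 'vial': 'ice_cube', 'water': 'heat'}
```

The "insight" that the beaker corresponds to the coffee and the water to the heat is dictated entirely by the labels the programmer chose when encoding the input.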

More interesting is AM, a program intended to do "mathematical research" given only the concept of a set and ideas such as recursion and uniqueness. It managed to "discover" addition, multiplication, and exponentiation, and even prime numbers and several more recent ideas in number theory. This was, of course, with frequent weeding out of unproductive ideas; more importantly, once it reached the point where the concept of a set no longer implied the kinds of operations that could be performed, it ran out of ideas. It seems that enough of the program's parameters were influenced, unsurprisingly, by the way modern mathematicians think about sets and number theory that it was really only following the path laid out for it. The program could do no more than that, because its programmers had no new methods to introduce to mathematics; their program was simply a confirmation that the methods of mainstream mathematics are a natural development of the particular interests the program was given.
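The sense in which the starting concepts lay out the path can be made concrete. The following toy sketch is nothing like the program's actual heuristics; it only shows how, once "repeat an operation" is on the table, addition, multiplication, exponentiation, and primes follow almost mechanically from the successor function:

```python
# Toy sketch: each new operation is just the previous one repeated,
# so the "discoveries" are built into the single recursion scheme.

def succ(n):
    return n + 1                      # the only primitive

def add(a, b):
    result = a
    for _ in range(b):                # addition = repeated successor
        result = succ(result)
    return result

def repeat(op, unit):
    """Build the operation that folds b copies of a with op,
    starting from op's unit element."""
    def new_op(a, b):
        result = unit
        for _ in range(b):
            result = op(result, a)
        return result
    return new_op

mul = repeat(add, 0)                  # multiplication = repeated addition
exp = repeat(mul, 1)                  # exponentiation = repeated multiplication

def is_prime(n):
    # primes surface as the numbers mul can never produce
    # from two smaller factors
    return n > 1 and all(mul(d, k) != n
                         for d in range(2, n) for k in range(2, n))

print([n for n in range(2, 20) if is_prime(n)])
# [2, 3, 5, 7, 11, 13, 17, 19]
```

Every "new" concept here is one application of the same scheme, which is exactly the sense in which the starting interests determine where the search can go, and why it stalls once the scheme is exhausted.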

It would be interesting to see what would happen with a similar program given a relatively newer field: would it come up with any new ideas, or even fill in gaps where no significant work had yet been done? It seems that artificial intelligence used in that kind of situation could flesh out details that people have not yet used established procedures to find. However, the field of artificial intelligence is, as of yet, unable to reproduce whatever it is that is unique about intelligence: the capacity of a person to actually understand something beyond what they were originally given.