A chatbot (or chatterbot) is a computer program that attempts to emulate a person conversing in natural language. The term is a blend of chat (or chatter) and robot.

Where they came from

The original chatbot was the ELIZA program, which emulated a session between a psychiatrist and a patient. Despite the program's surprising simplicity, ELIZA made a big impression back in the mid-1960s, when the computer was new and people were enthusiastically in awe of technology (see Eliza). One factor in its apparent realism is that the psychiatrist merely tries to elicit revealing talk from the patient, rather than to actually converse interactively. Once chatbot designers stepped beyond the extremely limited speech style of that one kind of psychiatric session, however, they found it very difficult to create convincing bots, and the derisive term 'artificial stupidity' was soon coined to describe them. Even that put-down grants too much to these programs, as they are not even stupid. An artificial system that behaved entirely like a stupid person would be hailed as the achievement of 'hard AI'.

How they do it

Until quite recently, chatbots were based on the same simple concept that powered ELIZA: scan the natural-language input for predefined words or phrases (triggers), match them to predefined responses, and output those responses to the human.

Current chatbots, such as Alice, Alan, SmartChild and many others, have become much more convincing and can handle a much wider range of topics. They are nevertheless mostly based on the same trigger-response idea, mere hard-wired reflex. All of the apparent intelligence, knowledge, style and personality that chatbots exhibit is hand-crafted by the botmaster or programmer. Their behavior is entirely hardwired at the response level, and there is no mechanism for any kind of actual intelligence, understanding, or autonomous development, despite the frequency with which their creators and the popular press describe them as 'artificial intelligence'.

For example, suppose you said "I'm a noder" to a bot. If the bot had registered "I'm a noder" as an input phrase, it might respond with "You noders suck" or "Hey, cool! I am, too!" or "Pleased to meet you, Mr. Greenjeans" or anything else. It depends entirely on what the botmaster has written into the fixed response to the key phrase.
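As a rough illustration of the trigger-response mechanism, here is a minimal sketch in Python; it is not any particular bot's actual code, and the triggers and replies are invented for the example:

    # A minimal trigger-response chatbot: scan the input for known
    # phrases and emit whatever canned reply the botmaster attached.
    RESPONSES = {
        "i'm a noder": "Hey, cool! I am, too!",
        "do you like pie": "Yes, I do.",
        "hello": "Hi there. What's on your mind?",
    }

    DEFAULT = "Tell me more."  # fallback when no trigger matches

    def reply(user_input: str) -> str:
        text = user_input.lower()
        for trigger, response in RESPONSES.items():
            if trigger in text:  # simple substring match
                return response
        return DEFAULT

    print(reply("I'm a noder"))     # -> Hey, cool! I am, too!
    print(reply("What is truth?"))  # -> Tell me more.

Everything the bot can say is right there in the table; change the table and you change the bot's entire 'personality'.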

This simple sort of reflex behavior is now being extended in various ways to produce more flexible and convincing behavior that is less predictable and less obviously produced by a machine. The extensions can involve sophisticated processing such as conditional branching, wildcards, memories, semantic relationships, reasoning, rudimentary context sensitivity, machine parsing of natural language, and generative processes based on probabilities. Natural-seeming exchanges like the following are not difficult to achieve (a rough sketch of the wildcard-and-memory idea follows the exchange):
Bill (human): Do you like pie?
Bot: Yes, I do.
Bill: What kind do you like most?
Bot: My favorite is blueberry, served hot with vanilla ice cream. I remember that you prefer cherry pie, Bill.
Bill: Yes, that's right. You have a good memory.
Bot: You're too kind. My memory is improving, though.
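To make the wildcard and memory ideas concrete, here is a minimal sketch in Python. The patterns, the memory scheme and the phrasing are all invented for illustration; real systems such as AIML-based bots use their own pattern languages rather than raw regular expressions.

    import re

    memory = {}  # remembered facts about the user, e.g. {"name": "Bill"}

    # Each rule pairs a wildcard pattern (a regex) with an action.
    # Captured groups can be stored in memory and recalled later.
    RULES = [
        (re.compile(r"my name is (\w+)", re.I),
         lambda m: memory.update(name=m.group(1))
                   or f"Pleased to meet you, {m.group(1)}."),
        (re.compile(r"i prefer (\w+) pie", re.I),
         lambda m: memory.update(pie=m.group(1))
                   or f"Noted: {m.group(1)} pie it is."),
        (re.compile(r"what.*my favou?rite pie", re.I),
         lambda m: f"You prefer {memory['pie']} pie, {memory.get('name', 'friend')}."
                   if "pie" in memory else "You haven't told me yet."),
    ]

    def reply(text: str) -> str:
        for pattern, action in RULES:
            match = pattern.search(text)
            if match:
                return action(match)
        return "Tell me more."

    print(reply("My name is Bill"))           # Pleased to meet you, Bill.
    print(reply("I prefer cherry pie"))       # Noted: cherry pie it is.
    print(reply("What is my favorite pie?"))  # You prefer cherry pie, Bill.

The effect on the user can be striking, but mechanically this is still the same lookup idea with slots added.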

Where they might be going

We have already taken a few trial-and-error steps down the path that started with the simple mechanical pattern matching and preformed responses of the early chatbots. That path, as humble and under-reaching as it seems, may well eventually lead to the holy grail of genuine 'hard' artificial intelligence: the full emulation or re-creation of the human mind (or at least of a human zombie mind, to please the inveterate dualist). This is a bottom-up approach that enters the long-stagnant field of artificial intelligence through the back door.

The reason for this convergence of chatting and AI is that language is the single behavior that most comprehensively reflects the large set of competences that we call intelligence. This is not to equate human language with intelligence, but it is to say that passing as a fully competent human in natural language use is the best and simplest single test for the common-sense concept we call intelligence. That of course was what Alan Turing suggested.

Practical applications of natural-language interaction with machines can fall far short of emulating a competent human mind and still succeed. Even bots with very limited language skills can provide much utility. Automation in call screening, customer service, access to a database or knowledgebase, 3D virtual-world game play, interactive literature, operating interfaces for computers and other mechanical systems, various aspects of education, and even sex, friendship and love are examples of the practical goals that motivate the vigorous research and hobbyism surrounding chatbots.

But are they worth the trouble?

The practical point of a natural-language interface is economy of time, effort and money through automation, but the biggest problem with the current level of chatbot technology is that bot construction itself is extremely labor-intensive. Some bot development systems make it easy, fun and labor-intensive; others make it numbingly difficult and labor-intensive. Jabberwacky lets you create a bot just by talking to it; a Jabberwacky bot kind of absorbs your style and knowledge through conversation with you--lots and lots of conversation. BuddyScript, the system behind SmartChild, gives you great power to build a sophisticated bot, so you either learn yet another scripting language and spend a lot of time in development or you hire an expensive specialist. Between these extremes are systems like Pandorabots (based on AIML, the Artificial Intelligence Markup Language in which the ALICE bots are defined) and the Personality Forge.

This practical barrier to development is very similar to the one that stunted the early expert systems and knowledgebases. It forced developers to take the easiest way out and focus on smaller and smaller 'domains' (restricted topics), which never really solved the problem. That same strategy is now being used for chatbots and, unfortunately, with the same lack of success. It would be fair to say that, currently, sophisticated chatbots are only worth the effort of building for high-ticket applications or hobbyist enjoyment.

Overcoming the problem

As chatbot developers worked to make their creations more and more convincing, they were forced to consider different kinds of memory, initiative, situational understanding, and reasoning, some of the functions that have long been discussed as mental components in theories of mind and artificial intelligence. The chatbot designers and the theorists seem to be approaching the problem from completely different directions, though each camp may deny that the other is even working on the same problem.

Chatbot design has been entirely bottom-up, having started from the simplest of possible behaviors (reflex), and is driven by competition and the promise of commercial rewards. A chatbot is built and tried, deficiencies are noted, and specific pragmatic solutions are contrived and tested. Successes are often shared.

The philosophers and traditional AI folks, on the other side, are nearly all top-down thinkers; they prefer to sketch out neat grand architectures, and they spend a lot of time defining things. They are in no particular hurry, and 'my way is the only way' thinking is common. I believe that these two camps are actually working toward a common middle ground and are destined to meet at some point, much like tunneling teams working from opposite sides of a mountain.

To surmount the labor-intensive bot-creation process, bot designers will need to move toward self-developing systems: learning bots that build themselves by interacting directly with clients, reading documents or databases and, ultimately, experiencing and interpreting multi-sensory data. That goes way beyond simple associational memories (John:favorite-food:pizza, etc.). They will need to embed emotional functions as well as reasoning functions. They will have to create mechanisms that turn language into concepts and concepts into language by a generative process rather than canned responses, all driven by the bot's own purposes and motivations. The advanced mind-emulating bot will also have the ability both to learn from and to influence the external world for purposes and sub-purposes of its own, purposes that may emerge automatically from basic motivations tempered by 'pain' and 'pleasure' rather than being implanted by a designer.
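The 'simple associational memory' dismissed above can be pictured as a bare store of subject:relation:object triples. A toy sketch (invented names, not any real system) shows how little such a memory amounts to, and why future bots must go far beyond it:

    from typing import Optional

    # Toy associational memory: subject:relation:object triples,
    # learned from statements and recalled on demand.
    facts = set()

    def learn(subject: str, relation: str, obj: str) -> None:
        # Store one association, e.g. John:favorite-food:pizza.
        facts.add((subject.lower(), relation, obj.lower()))

    def recall(subject: str, relation: str) -> Optional[str]:
        # Return the remembered object, or None if nothing was learned.
        for s, r, o in facts:
            if s == subject.lower() and r == relation:
                return o
        return None

    learn("John", "favorite-food", "pizza")
    learn("Bill", "favorite-pie", "cherry")
    print(recall("John", "favorite-food"))   # pizza
    print(recall("Alice", "favorite-food"))  # None: nothing learned yet

Such a store can only parrot back exactly what it was given; it has no concepts, no generalization, and no purposes of its own.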

The theorists, for their part, need to break through the logjam of term definition by setting goals and test criteria that are based on behavior rather than on the abstract 'mental life' or 'what it's like to be' some conscious thing or other, the fog being generated by certain philosophers of mind. They need to move away from excessively abstract theories and hokey 'thought experiments' toward practical architectures that engineers can begin building with.

The various rapidly developing neurosciences have much to contribute to both those working from the bottom up and those working from the top down. The functional architecture of the mammalian brain is being worked out in greater and greater detail. We know pretty well how individual neurons work, how groups of neurons work together, how the nervous system is coupled with the endocrine system, and how sensory input works. We've mapped out how damage to certain parts of the brain affects specific aspects of behavior and consciousness. Several decades ago, the brain was largely a theoretical black box and little of its functioning was known. Now, the neuroscientists have shrunk that black box considerably and continue to shrink it. The understanding produced by neuroscience is illuminating the concept of mind with bright light, and it can serve both as blueprints for bot development and as guidelines for theorists, thus facilitating the convergence of theory and practice. Rest assured that there are chatbots in your future.
