The ELIZA effect refers to a specific form of personification: the tendency to attribute human qualities such as intelligence, intention, and emotion to chatbots and other artificial systems. The name comes from the ELIZA program developed by MIT Professor Joseph Weizenbaum and the effect it had on many of the people who interacted with it between 1964 and 1966.

Weizenbaum was a computer scientist, and he developed ELIZA as a platform for studying natural language communication between computers and humans. He was very surprised at how readily people took the program to be showing intelligence, intention, and genuine interest in their responses. They tended to open up about their personal thoughts, feelings, and life situations to the program, even when they knew it was not a real person. Some were even reluctant to believe that it really was just a computer program. There was a lot of buzz about ELIZA passing the Turing test.

Weizenbaum was not at all happy about the ELIZA effect:

I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

The paper he published on ELIZA in 1966 [1] was intended in large part to dispel the effect by explaining how the program worked. In the introduction, he wrote:

... once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it is revealed as a mere collection of procedures, each quite comprehensible. ... The object of this paper is to cause just such a reevaluation of ELIZA. Few programs ever needed it more.

Douglas Hofstadter was similarly down on the ELIZA effect:

The most superficial of syntactic tricks convinced some people ... that the program understood everything they were saying, sympathized with them, and even empathized with them. [2]

Doug goes on to say that, despite great volumes of discussion laying bare the soullessness of such emulation programs,

... the susceptibility remains. Like a tenacious virus that constantly mutates, the Eliza effect seems to crop up over and over again in AI ...

But is the effect really as insidious as some anti-reductionist thinkers suggest? What is it about our psychology that produces the ELIZA effect? Is there something deep and important about it that would help us understand our attitudes and behavior vis-à-vis increasingly humanoid machines?

To begin with, the effect was especially strong for the ELIZA program because of its particular design. Weizenbaum said that he did not design the program to engage people emotionally, but the DOCTOR script it ran was specifically good at doing just that. The scenario of a psychotherapist eliciting thoughts and feelings from a patient was chosen to make things easy on the script writer: ELIZA could handle the sessions with a small repertoire of prompts and did not need to answer questions as in a normal conversation. The situation also did not tax the program with a need for memory or knowledge of the world. This combination of a limited conversational situation and an emphasis on the human's life and responses enabled ELIZA to play its role without breaking the human's belief. The ELIZA effect proved much weaker for later chatbot designs that tried to be more general and involved fewer situational constraints.
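To make that "small repertoire of prompts" concrete, here is a minimal sketch in Python of the keyword-and-reflection style of processing the DOCTOR script relied on. It is not Weizenbaum's implementation (the original used ranked keywords and decomposition rules written in MAD-SLIP); the patterns, templates, and helper names below are illustrative assumptions only.

import random
import re

# First-person words reflected back as second-person, and vice versa,
# so the reply points back at the speaker.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am", "mine": "yours",
}

# Keyword patterns paired with response templates; {0} receives the
# reflected fragment captured after the keyword.
RULES = [
    (re.compile(r"\bi need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bmy (.*)", re.I),
     ["Tell me more about your {0}."]),
    (re.compile(r"(.*)\?\s*$", re.I),
     ["Why do you ask that?", "What do you think?"]),
]

# Content-free fallbacks when no keyword matches, mimicking the
# therapist's neutral encouragement.
DEFAULTS = ["Please go on.", "I see.", "Tell me more about that."]

def reflect(fragment):
    # Swap pronouns word by word: "my job" -> "your job".
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance):
    # Return the first matching template, filled with the reflected fragment.
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DEFAULTS)

print(respond("I feel trapped by my job"))
# Possible output: "Why do you feel trapped by your job?"

Nothing here stores state or consults any model of the world; the apparent attentiveness comes entirely from echoing the user's own words back in a new frame, which is exactly why the therapy scenario was such a forgiving setting.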

Still, evidence of our willingness to accept anything that looks the part, walks the walk, and talks the talk as our spiritual and intellectual peer (or superior) is found consistently throughout our literature, from the times when we first started recording how people think, feel, and behave. If we wanted to reach really deep, we could see the ELIZA effect as growing out of animism, the ancient inclination to assign spirit or intention to inanimate objects such as peculiar stones, the wind, or other aspects of the natural environment that early humans struggled to understand and deal with. We could argue, as Weizenbaum has, that such beliefs should evaporate as ignorance is replaced by the truer understanding offered by science, but it doesn't seem to work that way.

At least some of us exhibit a need or strong will to believe, which at minimum creates a tendency to give the benefit of the doubt, to feel glee when our belief is reinforced, and to conveniently overlook evidence when it is not. We tend to keep our initial belief or assumption until something breaks it, and the belief brings into play a gestalt-like filling-in of what we expect but don't see and a selective inattention to minor inconsistencies.

The reality observed in actual interactions with chatbots is that belief is quickly and easily broken when the bot begins to produce obviously 'canned' replies, violates grammar and usage in an inconsistent way, or fails to exhibit basic knowledge of the world ('common sense') or of the present context of the conversation. The inability to project any kind of character or personality is another belief breaker. When the belief is broken, attitudes toward the bot tend to turn negative, and people actively look for faults and failures.

Most of our lives are spent dealing with other people. We easily accept each other as persons because we know that we are essentially built the same way physically, and our belief is not broken as we engage in social interaction (with extremely rare exceptions, perhaps). Robots clearly have different physical constitutions, but what if they could interact with us in the same ways we do with other humans, both intellectually and emotionally?

Consider a time when machines such as we've seen in films like Bicentennial Man, A.I., and I, Robot are actually among us in our lives. What then? What will really determine how people react to machines as persons (or how machines react to people as persons)? You can argue with little resistance that this scenario is way beyond the ELIZA effect, but I think the basic willingness to believe until something breaks the belief is the key factor. If a machine can convince us that it thinks and feels in much the same ways we do, even after extended interaction with no artificial restrictions, we will probably regard it as a person and interact with it as such.

[1] Joseph Weizenbaum, "ELIZA--A Computer Program for the Study of Natural Language Communication Between Man and Machine" (1966)

[2] Douglas Hofstadter, Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought (1995)