The human ability
Ah, that indefinable quality of the human mind. Consciousness. Intelligence. Awareness. Call it what you will, define it how you will, one thing is certain: fully functional people have it and pocket calculators don't. And our intuition is that it is vitally important, that it is the essence of what makes us us.
The ancients believed that the heart was the seat of thought. It still is, in language and metaphor. But people think by virtue of one organ: the brain. And the brain thinks by virtue of?
In the body, the brain. And in the brain?
Let us first observe that any explanation of consciousness that involves a consciousness, a homunculus inside the system, has not actually explained anything, just moved the problem. That is, any real explanation of consciousness must explain it in terms of component parts that are not themselves conscious.1
The brain is made up of neurons. No component neuron is conscious, yet the brain is. Each neuron is a living cell, made up of molecules. No component molecule is alive, yet the cell is.
You have a choice here, and it's one of those speared-by-the-horns-of-a-dilemma type choices: either you accept that, contrary to intuition, life is an emergent property of atoms assembled into hierarchical interacting structures, with no other added ingredients, or you believe in some kind of mystical life force. Either you accept that consciousness is an emergent property of neurons assembled into hierarchical interacting structures, with no other added ingredients, or you believe in some kind of "think force".
That is, either you buy into the project that has served us so well these last few hundred years, based on the assumption that everything is, in principle, explicable and reducible; or you should go back to your medieval hut, chant mumbo-jumbo and make the sign of the cross at supernatural, inexplicable things that go bump in the night.
And if consciousness is just matter moving in patterns according to rules then in theory it could be done in silicon. Or, at a push, by a philosopher with an instruction book and a really big jotter pad.
The philosopher’s error
We should expect the real story of consciousness to be difficult and counterintuitive. If it were easy, we would have cracked it by now.2
John Searle's Chinese room thought experiment is a bunch of hand-waving designed to convince us that we are not just matter: that consciousness, in the sense that a human being has it, cannot be an algorithmic process running upon inanimate hardware.3
What surprises me is that so many people fall for it. This, I guess, is because it reinforces their prejudices. And because they don’t examine the alternative too closely. If consciousness needs to run upon some conscious hardware, then what makes this hardware conscious? Does it too have some special "consciousness" embedded in it? I'm afraid it looks like hand-waving all the way down.
Taking this intuitively appealing stance lays you open to the question: "So how do people do it? Are we not assembled by biology out of the basic building blocks of matter? What, if anything, makes us special?"
Back to that dilemma: if cunningly assembled matter can think, then Searle's Chinese room is bogus without even going into the details, because the Strong AI postulate is true. If you hold that matter cannot in itself have a mind without some special ingredient, divine spark, soul, call it what you will, then you are living in the dark ages.
Despite this, many still hold that Searle's Chinese room thought experiment constitutes some kind of proof that what a mind does cannot be reduced to an algorithmic process carried out by dumb, unliving, unconscious atoms.
The Chinese room, taken as a proof that machines cannot be conscious, is an example of the philosopher's error,4 that is, "mistaking a failure of the imagination for an insight into necessity". This is a line of argument that proceeds: "I can't see any way that x could be y, therefore x is not y." Or, in this case, "I can't see any consciousness here, therefore there can't be any consciousness here."
The Chinese room
In the thought experiment, we are asked to imagine a man who does not speak Chinese playing the role of the homunculus inside a computer, just following detailed instructions obediently, manipulating symbols that he himself does not understand, like a CPU.
The system that he operates carries out a conversation in Chinese. Slips of paper with Chinese characters on them (which he, being a Westerner, does not understand) come in through one slot. He looks up rules and correspondences, makes notes, and finally sends the matching output out through another slot. There is rote manipulation of symbols, but no understanding. All very much like the Von Neumann architecture. Or a neuron.
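To make the CPU analogy concrete, here is a minimal sketch of that cycle as a program. The rule book, scratch pad, and sample rule are all hypothetical stand-ins, not anything Searle specifies; the point is only that the loop is rote symbol manipulation with internal state, and nothing in it understands the symbols it shuffles.

```python
# A toy model of the room: look up a rule, make notes, emit the output.

def run_room(rule_book, slips_in):
    scratch_pad = {}                      # the man's notes: state, not understanding
    for slip in slips_in:                 # paper in through one slot
        key = (slip, tuple(sorted(scratch_pad.items())))
        notes_update, reply = rule_book.get(key, ({}, ""))
        scratch_pad.update(notes_update)  # make notes
        yield reply                       # matching output out the other slot

# One hypothetical rule: on first contact, reply to a greeting.
rules = {("你好", ()): ({"greeted": True}, "你好!")}
print(list(run_room(rules, ["你好"])))     # ['你好!']
```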
To the outside world, it seems as though there is an intelligent Chinese speaker in the room. It's not the man, so who is the speaker? Who, if anyone, in there understands Chinese?
The test begins ... now
The Turing test is absurdly easy to pass when the human is unsuspecting. Eliza's canned responses can manage it for a while. But it is also fiendishly hard to pass when the human is a prepared, critical judge. How would you react if I asked you the same question five times in a row? Would you, like a pocket calculator, give the same answer five times over? What if I made up a word, defined it to you, used it in the rest of the conversation, and asked you to use it too? If you cannot remember what I have said to you, and even learn from it, you cannot pass the Turing test.
If I asked you about the latest soccer results, would you recite some recent facts and give an opinion on them, or, like me, would you give your reasons why you don't care? If I asked you about politics, maybe you would take the other tack. If I told a joke, would you be able to explain what makes it funny, even if you didn't find it funny?5
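All of these probes defeat a stateless, canned-reply program. As a hypothetical illustration (the names below are made up, not taken from the real Eliza): a memoryless responder fails the asked-five-times probe at once, and even noticing the repetition already requires carrying conversational state forward.

```python
# A stateless canned-reply bot: ask it the same thing five times and,
# like a pocket calculator, it answers the same way five times over.

CANNED = {"how are you?": "I am fine. Tell me more about yourself."}

def stateless_reply(utterance):
    return CANNED.get(utterance.lower(), "Why do you say that?")

for _ in range(5):
    print(stateless_reply("How are you?"))   # identical every time

# The crudest fix already needs memory, i.e. internal state:
history = []

def stateful_reply(utterance):
    history.append(utterance)
    seen = history.count(utterance)
    if seen > 1:
        return "You have asked me that %d times now." % seen
    return stateless_reply(utterance)
```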
The man in the room just matches up output symbols to input symbols. If you were debating philosophy, to what extent would your responses "just match up" to what was said to you? But the room passes the Turing test, so we know that the program being hand-simulated is at the very least orders of magnitude more complex than any chatterbot that we have yet made, and has a vast amount of internal state. So how is it clear that this is utterly different from a brain?
The Chinese room experiment is an attempt to misdirect us by asking us to imagine something far too simple to be workable, and then using this to dismiss more complex programs as unworkable too.
The canned reply
The rest of the argument I shall leave to Daniel Dennett's own words:
The fact is that any program that could actually hold up its end in the conversation depicted would have to be an extraordinarily supple, sophisticated and multilayered system, brimming with "world knowledge" and meta-knowledge and meta-meta-knowledge about its own responses, the likely responses of its interlocutor, its own "motivations" and the motivations of its interlocutor, and much, much more. Searle does not deny that programs can have all this structure, he simply discourages us from attending to it. But if we do a good job imagining the case, we are not only entitled but obliged to imagine that the program Searle is hand-simulating has all this structure. But then it's no longer obvious, I trust, that there is no genuine understanding of the joke going on. Maybe billions of actions of these structured parts produce genuine understanding after all. If your response to this hypothesis is that you haven't the faintest idea whether there could be genuine understanding in such a complex system, that is enough to show that Searle's thought experiment depends, illicitly, on your imagining too simple a case and drawing the "obvious" conclusion from it.
We see clearly that there is nothing like genuine understanding in any hunk of programming small enough to understand readily. Surely more of the same, no matter how much more, could never add up to genuine understanding. But if we are materialists who are convinced that one way or another brains are responsible on their own, without miraculous assistance, for understanding, we must admit that genuine understanding is somehow achieved by a process composed of interactions between a host of subsystems none of which understand a thing by themselves.
How might we try harder? With the help of some handy concepts: the intermediate-level software concepts that were designed by computer scientists to keep track of the otherwise unimaginable complexities in large systems.
All these entities are organised into a huge system, the activities of which organise themselves around its own centre of narrative gravity. Searle, labouring in the Chinese room, does not understand Chinese, but he is not alone in the room. There is also the System, and it is to that self that we should attribute any understanding of the joke.
1) Daniel Dennett: Consciousness Explained
2) Ibid.
3) Perhaps you think that the way the mind works is not supernatural, yet cannot be represented as an algorithm. So what is it then? Perhaps your definition of algorithm is too narrow. An algorithm is not necessarily deterministic: stochastic hill climbing, which relies on random moves, is still an algorithm (see the sketch below). An algorithm can also be adaptive or self-modifying: the procedure behind genetic algorithms is itself an algorithm. If a computer CPU, which surely can do no other than algorithmic processes, is in theory capable of simulating atoms so exactly that simulated chemical reactions can take place, then is this dance of atoms not viewable as an algorithm? And given sufficient (large but by no means infinite) processor power, could not the atoms be those of a brain?
4) Daniel Dennett: Consciousness Explained
5) Ibid.
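A postscript to footnote 3: here is a minimal sketch of a non-deterministic algorithm, stochastic hill climbing with random restarts. Every name in it is illustrative rather than drawn from any particular library; the point is only that randomness does not stop a procedure from being an algorithm.

```python
import random

# Stochastic hill climbing with random restarts: a procedure that uses
# randomness at every step, yet is unambiguously an algorithm.

def hill_climb(score, neighbours, start, steps=1000):
    current = start
    for _ in range(steps):
        candidate = random.choice(neighbours(current))  # random move
        if score(candidate) >= score(current):
            current = candidate                         # climb if no worse
    return current

# Example: maximise -(x - 3)^2 over the integers, from random starts.
score = lambda x: -(x - 3) ** 2
best = max(
    (hill_climb(score, lambda x: [x - 1, x + 1], random.randint(-100, 100))
     for _ in range(10)),                               # random restarts
    key=score,
)
print(best)  # converges on 3 despite the randomness
```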