Chinese Room Argument

The Chinese room argument is a thought experiment devised by John Searle (1980a), together with an associated (1984) derivation. It is one of the best-known and most widely credited counters to claims of artificial intelligence (AI), that is, to claims that computers do or at least can (someday might) think. According to Searle's original presentation, the argument rests on two key claims: brains cause minds, and syntax doesn't suffice for semantics. Its target is what Searle dubs "strong AI." According to strong AI, Searle says, "the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states" (1980a, p. 417). Searle contrasts strong AI with "weak AI." According to weak AI, computers merely simulate thought: their seeming understanding isn't real understanding (just as-if), their seeming calculation is only as-if calculation, and so on. Nevertheless, computer simulation is useful for studying the mind (as it is for studying the weather and other things).

Table of Contents

  1. The Chinese Room Thought Experiment
  2. Replies and Rejoinders
    1. The Systems Reply
    2. The Robot Reply
    3. The Brain Simulator Reply
    4. The Combination Reply
    5. The Other Minds Reply
    6. The Many Mansions Reply
  3. Searle's "Derivation from Axioms"
  4. Continuing Dispute
    1. Initial Objections & Replies
    2. The Connectionist Reply
  5. Summary Analysis
  6. Postscript
  7. References and Further Reading

1. The Chinese Room Thought Experiment

Against "strong AI," Searle (1980a) asks you to imagine yourself a monolingual English speaker "locked in a room, and given a large batch of Chinese writing" plus "a second batch of Chinese script" and "a set of rules" in English "for correlating the second batch with the first batch." The rules "correlate one set of formal symbols with another set of formal symbols"; "formal" (or "syntactic") meaning you "can identify the symbols entirely by their shapes." A third batch of Chinese symbols and more instructions in English enable you "to correlate elements of this third batch with elements of the first two batches" and instruct you, thereby, "to give back certain sorts of Chinese symbols with certain sorts of shapes in response." Those giving you the symbols "call the first batch 'a script' [a data structure with natural language processing applications], "they call the second batch 'a story', and they call the third batch 'questions'; the symbols you give back "they call . . . 'answers to the questions'"; "the set of rules in English . . . they call 'the program'": you yourself know none of this. Nevertheless, you "get so good at following the instructions" that"from the point of view of someone outside the room" your responses are "absolutely indistinguishable from those of Chinese speakers." Just by looking at your answers, nobody can tell you "don't speak a word of Chinese." Producing answers "by manipulating uninterpreted formal symbols," it seems "[a]s far as the Chinese is concerned," you "simply behave like a computer"; specifically, like a computer running Schank and Abelson's (1977) "Script Applier Mechanism" story understanding program (SAM), which Searle's takes for his example.

But in imagining himself to be the person in the room, Searle thinks it's "quite obvious . . . I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing." "For the same reasons," Searle concludes, "Schank's computer understands nothing of any stories" since "the computer has nothing more than I have in the case where I understand nothing" (1980a, p. 418). Furthermore, since in the thought experiment "nothing . . . depends on the details of Schank's programs," the same "would apply to any [computer] simulation" of any "human mental phenomenon" (1980a, p. 417); that's all it would be, simulation. Contrary to "strong AI," then, no matter how intelligently a computer seems to behave, and no matter what programming makes it behave that way, since the symbols it processes are meaningless (lack semantics) to it, it's not really intelligent. It's not actually thinking. Its internal states and processes, being purely syntactic, lack semantics (meaning); so, it doesn't really have intentional (that is, meaningful) mental states.

2. Replies and Rejoinders

Having laid out the example and drawn the aforesaid conclusion, Searle considers several replies offered when he "had the occasion to present this example to a number of workers in artificial intelligence" (1980a, p. 419). Searle offers rejoinders to these various replies.

a. The Systems Reply

The Systems Reply suggests that the Chinese room example encourages us to focus on the wrong agent: the thought experiment encourages us to mistake the would-be subject-possessed-of-mental-states for the person in the room. The systems reply grants that "the individual who is locked in the room does not understand the story" but maintains that "he is merely part of a whole system, and the system does understand the story" (1980a, p. 419: my emphases).

Searle's main rejoinder to this is to "let the individual internalize all . . . of the system" by memorizing the rules and script and doing the lookups and other operations in their head. "All the same," Searle maintains, "he understands nothing of the Chinese, and . . . neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand then there is no way the system could understand because the system is just part of him" (1980a, p. 420). Searle also insists the systems reply would have the absurd consequence that "mind is everywhere." For instance, "there is a level of description at which my stomach does information processing" there being "nothing to prevent [describers] from treating the input and output of my digestive organs as information if they so desire." Besides, Searle contends, it's just ridiculous to say "that while [the] person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might" (1980a, p. 420).

b. The Robot Reply

The Robot Reply - along lines favored by contemporary causal theories of reference - suggests that what prevents the person in the Chinese room from attaching meanings to the Chinese ciphers (and thus prevents him from understanding them) is the sensory-motoric disconnection of the ciphers from the realities they are supposed to represent: to promote the "symbol" manipulation to genuine understanding, according to this causal-theoretic line of thought, the manipulation needs to be grounded in the outside world via the agent's causal relations to the things to which the ciphers, as symbols, apply. If we "put a computer inside a robot" so as to "operate the robot in such a way that the robot does something very much like perceiving, walking, moving about," then the "robot would," according to this line of thought, "unlike Schank's computer, have genuine understanding and other mental states" (1980a, p. 420).

Against the Robot Reply Searle maintains "the same experiment applies" with only slight modification. Put the room, with Searle in it, inside the robot; imagine "some of the Chinese symbols come from a television camera attached to the robot" and that "other Chinese symbols that [Searle is] giving out serve to make the motors inside the robot move the robot's legs or arms." Still, Searle asserts, "I don't understand anything except the rules for symbol manipulation." He explains, "by instantiating the program I have no [mental] states of the relevant [meaningful, or intentional] type. All I do is follow formal instructions about manipulating formal symbols." Searle also charges that the robot reply "tacitly concedes that cognition is not solely a matter of formal symbol manipulation" after all, as "strong AI" supposes, since it "adds a set of causal relation[s] to the outside world" (1980a, p. 420).

c. The Brain Simulator Reply

The Brain Simulator Reply asks us to imagine that the program implemented by the computer (or the person in the room) "doesn't represent information that we have about the world, such as the information in Schank's scripts, but simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands stories in Chinese and gives answers to them." Surely then "we would have to say that the machine understood the stories"; or else we would "also have to deny that native Chinese speakers understood the stories" since "[a]t the level of the synapses" there would be no difference between "the program of the computer and the program of the Chinese brain" (1980a, p. 420).

Against this, Searle insists, "even getting this close to the operation of the brain is still not sufficient to produce understanding" as may be seen from the following variation on the Chinese room scenario. Instead of shuffling symbols, we "have the man operate an elaborate set of water pipes with valves connecting them." Given some Chinese symbols as input, the program now tells the man "which valves he has to turn off and on. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged so that after . . . turning on all the right faucets, the Chinese answer pops out at the output end of the series of pipes." Yet, Searle thinks, obviously, "the man certainly doesn't understand Chinese, and neither do the water pipes." "The problem with the brain simulator," as Searle diagnoses it, is that it simulates "only the formal structure of the sequence of neuron firings": the insufficiency of this formal structure for producing meaning and mental states "is shown by the water pipe example" (1980a, p. 421).

d. The Combination Reply

The Combination Reply supposes all of the above: a computer lodged in a robot running a brain simulation program, considered as a unified system. Surely, now, "we would have to ascribe intentionality to the system" (1980a, p. 421).

Searle responds, in effect, that since none of these replies, taken alone, has any tendency to overthrow his thought experimental result, neither do all of them taken together: zero times three is naught. Though it would be "rational and indeed irresistible," he concedes, "to accept the hypothesis that the robot had intentionality, as long as we knew nothing more about it" the acceptance would be simply based on the assumption that "if the robot looks and behaves sufficiently like us then we would suppose, until proven otherwise, that it must have mental states like ours that cause and are expressed by its behavior." However, "[i]f we knew independently how to account for its behavior without such assumptions," as with computers, "we would not attribute intentionality to it, especially if we knew it had a formal program" (1980a, p. 421).

e. The Other Minds Reply

The Other Minds Reply reminds us that how we "know other people understand Chinese or anything else" is "by their behavior." Consequently, "if the computer can pass the behavioral tests as well" as a person, then "if you are going to attribute cognition to other people you must in principle also attribute it to computers" (1980a, p. 421).

Searle responds that this misses the point: it's "not . . . how I know that other people have cognitive states, but rather what it is that I am attributing when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state" (1980a, pp. 420-421: my emphases).

f. The Many Mansions Reply

The Many Mansions Reply suggests that even if Searle is right that programming cannot suffice to cause computers to have intentionality and cognitive states, other means besides programming might be devised whereby computers could be imbued with whatever does suffice for intentionality.

This too, Searle says, misses the point: it "trivializes the project of Strong AI by redefining it as whatever artificially produces and explains cognition" abandoning "the original claim made on behalf of artificial intelligence" that "mental processes are computational processes over formally defined elements." If AI is not identified with that "precise, well defined thesis," Searle says, "my objections no longer apply because there is no longer a testable hypothesis for them to apply to" (1980a, p. 422).

3. Searle's "Derivation from Axioms"

Besides the Chinese room thought experiment, Searle's more recent presentations of the argument feature - with minor variations of wording and in the ordering of the premises - a formal "derivation from axioms" (1989, p. 701). The derivation, according to Searle's 1990 formulation, proceeds from the following three axioms (1990, p. 27):

(A1) Programs are formal (syntactic).
(A2) Minds have mental contents (semantics).
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.

to the conclusion:

(C1) Programs are neither constitutive of nor sufficient for minds.

Searle then adds a fourth axiom (p. 29):

(A4) Brains cause minds.

from which we are supposed to "immediately derive, trivially" the conclusion:

(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.

whence we are supposed to derive the further conclusions:

(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.
(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.

On the usual understanding, the Chinese room experiment subserves this derivation by "shoring up axiom 3" (Churchland & Churchland 1990, p. 34).
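
Commentators disagree about how the inference from (A1)-(A3) to (C1) is supposed to run. One charitable regimentation, sketched below in first-order modal notation, reads (A2) as a conceptual (hence necessary) truth and reads (A3) as what the thought experiment is meant to establish: possibly, something implements the relevant program, traffics in nothing but syntax, and still lacks semantics. The predicate letters and the modal reading are reconstructions offered here for clarity, not Searle's own notation.

```latex
% One charitable regimentation of the derivation (a reconstruction, not Searle's notation).
% Prog(x): x implements the relevant program    Syn(x): x's operations are purely syntactic
% Sem(x): x has semantic (intentional) content   Mind(x): x has a mind
\begin{align*}
\text{(A1)} &\quad \Box\,\forall x\,\bigl(\mathrm{Prog}(x) \rightarrow \mathrm{Syn}(x)\bigr)\\
\text{(A2)} &\quad \Box\,\forall x\,\bigl(\mathrm{Mind}(x) \rightarrow \mathrm{Sem}(x)\bigr)\\
\text{(A3)} &\quad \Diamond\,\exists x\,\bigl(\mathrm{Prog}(x) \wedge \mathrm{Syn}(x) \wedge \neg\mathrm{Sem}(x)\bigr)
            && \text{the Chinese room is meant to supply the witness}\\
\text{(C1)} &\quad \Diamond\,\exists x\,\bigl(\mathrm{Prog}(x) \wedge \neg\mathrm{Mind}(x)\bigr)
            && \text{from (A2), (A3): the witness lacks semantics, so it is no mind}
\end{align*}
```

On this reading the final step is trivially valid; (A1)'s work is to back the witness claim in (A3) by guaranteeing that implementing a program contributes nothing beyond syntax. The real weight, as the Churchlands note, falls on whether the thought experiment actually establishes axiom 3.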

4. Continuing Dispute

To call the Chinese room controversial would be an understatement. Beginning with objections published along with Searle's original (1980a) presentation, opinion has divided sharply, not only over whether the Chinese room argument is cogent, but, among those who think it is, over why it is, and, among those who think it is not, over why not. This discussion includes several noteworthy threads.

a. Initial Objections & Replies

Initial Objections & Replies to the Chinese room argument besides filing new briefs on behalf of many of the forenamed replies(for example, Fodor 1980 on behalf of "the Robot Reply") take, notably, two tacks. One tack, taken by Daniel Dennett (1980), among others, decries the dualistic tendencies discernible, for instance, in Searle's methodological maxim "always insist on the first-person point of view" (Searle 1980b, p. 451). Another tack notices that the symbols Searle-in-the-room processes are not meaningless ciphers, they're Chinese inscriptions. So they are meaningful; and so is Searle's processing of them in the room; whether he knows it or not.

In reply to this second sort of objection, Searle insists that what's at issue here is intrinsic intentionality in contrast to the merely derived intentionality of inscriptions and other linguistic signs. Whatever meaning Searle-in-the-room's computation might derive from the meaning of the Chinese symbols which he processes will not be intrinsic to the process or the processor but "observer relative," existing only in the minds of beholders such as the native Chinese speakers outside the room. "Observer-relative ascriptions of intentionality are always dependent on the intrinsic intentionality of the observers" (Searle 1980b, pp. 451-452). The nub of the experiment, according to Searle's attempted clarification, then, is this: "instantiating a program could not be constitutive of intentionality, because it would be possible for an agent [e.g., Searle-in-the-room] to instantiate the program and still not have the right kind of intentionality" (Searle 1980b, pp. 450-451: my emphasis); the intrinsic kind. Though Searle unapologetically identifies intrinsic intentionality with conscious intentionality, still he resists Dennett's and others' imputations of dualism. Given that what it is we're attributing in attributing mental states is conscious intentionality, Searle maintains, insistence on the "first-person point of view" is warranted; because "the ontology of the mind is a first-person ontology": "the mind consists of qualia [subjective conscious experiences] . . . right down to the ground" (1992, p. 20). This thesis of Ontological Subjectivity, as Searle calls it in more recent work, is not, he insists, some dualistic invocation of discredited "Cartesian apparatus" (Searle 1992, p. xii), as his critics charge; it simply reaffirms commonsensical intuitions that behavioristic views and their functionalistic progeny have, for too long, highhandedly, dismissed. This commonsense identification of thought with consciousness, Searle maintains, is readily reconcilable with thoroughgoing physicalism when we conceive of consciousness as both caused by and realized in underlying brain processes. Identification of thought with consciousness along these lines, Searle insists, is not dualism; it might more aptly be styled monist interactionism (1980b, p. 455-456) or (as he now prefers) "biological naturalism" (1992, p. 1).

b. The Connectionist Reply

The Connectionist Reply (as it might be called) is set forth---along with a recapitulation of the Chinese room argument and a rejoinder by Searle---by Paul and Patricia Churchland in a 1990 Scientific American piece. The Churchlands criticize the crucial third "axiom" of Searle's "derivation" by attacking his would-be supporting thought experimental result. This putative result, they contend, gets much if not all of its plausibility from the lack of neurophysiological verisimilitude in the thought-experimental setup. Instead of imagining Searle working alone with his pad of paper and lookup table, like the Central Processing Unit of a serial architecture machine, the Churchlands invite us to imagine a more brainlike connectionist architecture. Imagine Searle-in-the-room, then, to be just one of very many agents, all working in parallel, each doing their own small bit of processing (like the many neurons of the brain). Since Searle-in-the-room, in this revised scenario, does only a very small portion of the total computational job of generating sensible Chinese replies in response to Chinese input, naturally he himself does not comprehend the whole process; so we should hardly expect him to grasp or to be conscious of the meanings of the communications he is involved, in such a minor way, in processing.

Searle counters that this Connectionist Reply, incorporating as it does elements of both the systems and the brain-simulator replies, can, like these predecessors, be decisively defeated by appropriately tweaking the thought-experimental scenario. Imagine, if you will, a Chinese gymnasium, with many monolingual English speakers working in parallel, each following their own (more limited) set of instructions in English, collectively producing output indistinguishable from that of native Chinese speakers. Still, Searle insists, it is intuitively utterly obvious that none of these individuals understands a word of Chinese, and neither does the whole company of them collectively: both individually and collectively, nothing is being done in the Chinese gym except meaningless syntactic manipulations, from which intentionality, and consequently meaningful thought, could not conceivably arise.

5. Summary Analysis

Searle's Chinese Room experiment parodies the Turing test, a test for artificial intelligence proposed by Alan Turing (1950) and echoing René Descartes' suggested means for distinguishing thinking souls from unthinking automata. Since "it is not conceivable," Descartes says, that a machine "should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as even the dullest of men can do" (1637, Part V), whatever has such ability evidently thinks. Turing embodies this conversation criterion in a would-be experimental test of machine intelligence; in effect, a "blind" interview. Not knowing which is which, a human interviewer addresses questions, on the one hand, to a computer, and, on the other, to a human being. If, after a decent interval, the questioner is unable to tell which interviewee is the computer on the basis of their answers, then, Turing concludes, we would be well warranted in concluding that the computer, like the person, actually thinks. Restricting himself to the epistemological claim that under the envisaged circumstances attribution of thought to the computer is warranted, Turing himself hazards no metaphysical guesses as to what thought is, proposing no definition of and offering no conjecture about its essential nature. Nevertheless, his would-be experimental apparatus can be used to characterize the main competing metaphysical hypotheses here in terms of their answers to the question of what else or what instead, if anything, is required to guarantee that intelligent-seeming behavior really is intelligent or evinces thought. Roughly speaking, we have four sorts of hypotheses on offer. Behavioristic hypotheses deny that anything besides acting intelligent is required. Dualistic hypotheses hold that, besides (or instead of) intelligent-seeming behavior, thought requires having the right subjective conscious experiences. Identity-theoretic hypotheses hold it to be essential that the intelligent-seeming performances proceed from the right underlying neurophysiological states. Functionalistic hypotheses hold that the intelligent-seeming behavior must be produced by the right procedures or computations.

The Chinese experiment, then, can be seen to take aim at Behaviorism and Functionalism as a would-be counterexample to both. Searle-in-the-room behaves as if he understands Chinese; yet doesn't understand: so, contrary to Behaviorism, acting (as-if) intelligent does not suffice for being so; something else is required. But, contrary to Functionalism this something else is not - or at least, not just - a matter of by what underlying procedures (or programming) the intelligent-seeming behavior is brought about: Searle-in-the-room, according to the thought-experiment, may be implementing whatever program you please, yet still be lacking the mental state (e.g., understanding Chinese) that his behavior would seem to evidence. Thus, Searle claims, Behaviorism and Functionalism are utterly refuted by this experiment; leaving dualistic and identity theoretic hypotheses in control of the field. Searle's own hypothesis of Biological Naturalism may be characterized sympathetically as an attempt to wed - or unsympathetically as an attempt to waffle between - the remaining dualistic and identity-theoretic alternatives.

6. Postscript

Debate over the Chinese room thought experiment - while generating considerable heat - has proven inconclusive. To the Chinese room's champions - as to Searle himself - the experiment and allied argument have often seemed so obviously cogent and decisively victorious that doubts professed by naysayers have seemed discreditable and disingenuous attempts to salvage "strong AI" at all costs. To the argument's detractors, on the other hand, the Chinese room has seemed more like "religious diatribe against AI, masquerading as a serious scientific argument" (Hofstadter 1980, p. 433) than a serious objection. Though I am with the masquerade party, a full dress criticism is, perhaps, out of place here (see Hauser 1993 and Hauser 1997). I offer, instead, the following (hopefully, not too tendentious) observations about the Chinese room and its neighborhood.

(1) Though Searle himself has consistently (since 1984) fronted the formal "derivation from axioms," general discussion continues to focus mainly on Searle's striking thought experiment. This is unfortunate, I think. Since intuitions about the experiment seem irremediably at loggerheads, perhaps closer attention to the derivation could shed some light on vagaries of the argument (see Hauser 1997).

(2) The Chinese room experiment, as Searle himself notices, is akin to "arbitrary realization" scenarios of the sort suggested first, perhaps, by Joseph Weizenbaum (1976, Ch. 2), who "shows in detail how to construct a computer using a roll of toilet paper and a pile of small stones" (Searle 1980a, p. 423). Such scenarios are also marshaled against Functionalism (and Behaviorism en passant) by others, perhaps most famously by Ned Block (1978). Arbitrary realizations imagine would-be AI programs to be implemented in outlandish ways: collective implementations (e.g., by the population of China coordinating their efforts via two-way radio communications) imagine programs implemented by groups; Rube Goldberg implementations (e.g., Searle's water pipes or Weizenbaum's toilet paper roll and stones) imagine programs implemented bizarrely, in "the wrong stuff." Such scenarios aim to provoke intuitions that no such thing - no such collective or ridiculous contraption - could possibly be possessed of mental states. This, together with the premise - generally conceded by Functionalists - that programs might well be so implemented, yields the conclusion that computation, the "right programming," does not suffice for thought; the programming must be implemented in "the right stuff." Searle concludes similarly that what the Chinese room experiment shows is that "[w]hat matters about brain operations is not the formal shadow cast by the sequences of synapses but rather the actual properties of the synapses" (1980a, p. 422), their "specific biochemistry" (1980a, p. 424).

(3) Among those sympathetic to the Chinese room, it is mainly its negative claims - not Searle's positive doctrine - that garner assent. The positive doctrine, "biological naturalism," is either confused (waffling between identity theory and dualism) or else it just is identity theory or dualism.

(4) Since Searle argues against identity theory, on independent grounds, elsewhere (e.g., 1992, Ch. 5); and since he acknowledges the possibility that some "specific biochemistry" different from ours might suffice to produce conscious experiences and consequently intentionality (in Martians, say), and speaks unabashedly of "ontological subjectivity" (see, e.g., Searle 1992, p. 100); it seems most natural to construe Searle's positive doctrine as basically dualistic, specifically as a species of "property dualism" such as Thomas Nagel (1974, 1986) and Frank Jackson (1982) espouse. Nevertheless, Searle frequently and vigorously protests that he is not any sort of dualist. Perhaps he protests too much.

(5) If Searle's positive views are basically dualistic - as many believe - then the usual objections to dualism apply, other-minds troubles among them; so, the "other-minds" reply can hardly be said to "miss the point." Indeed, since the question of whether computers (can) think just is an other-minds question, if other minds questions "miss the point" it's hard to see how the Chinese room speaks to the issue of whether computers really (can) think at all.

(6) Confusion on the preceding point is fueled by Searle's seemingly equivocal use of the phrase "strong AI" to mean, on the one hand, that computers really do think, and, on the other hand, that thought is essentially just computation. Even if thought is not essentially just computation, computers (even present-day ones) might, nevertheless, really think. That their behavior seems to evince thought is why there is a problem about AI in the first place; and if Searle's argument merely discountenances theoretic or metaphysical identification of thought with computation, the behavioral evidence - and consequently Turing's point - remains unscathed. Since computers seem, on the face of things, to think, the conclusion that the essential nonidentity of thought with computation would seem to warrant is that whatever else thought essentially is, computers have this too; not, as Searle maintains, that computers' seeming thought-like performances are bogus. Alternatively put, equivocation on "strong AI" invalidates the would-be dilemma that Searle's initial contrast of "strong AI" with "weak AI" seems to pose:

Strong AI (they really do think) or Weak AI (it's just simulation).
Not Strong AI (by the Chinese room argument).
Therefore, Weak AI.

To show that thought is not just computation (which is what the Chinese room, if it shows anything, shows) is not to show that computers' intelligent-seeming performances are not real thought (as the "strong"/"weak" dichotomy suggests).

7. References and Further Reading

  • Block, Ned. 1978. "Troubles with Functionalism." In C. W. Savage, ed., Perception and Cognition: Issues in the Foundations of Psychology, Minnesota Studies in the Philosophy of Science, Vol. 9, 261-325. Minneapolis: University of Minnesota Press.
  • Churchland, Paul, and Patricia Smith Churchland. 1990. "Could a machine think?" Scientific American 262(1, January): 32-39.
  • Dennett, Daniel. 1980. "The milk of human intentionality." Behavioral and Brain Sciences 3: 429-430.
  • Descartes, René. 1637. Discourse on method. Trans. John Cottingham, Robert Stoothoff and Dugald Murdoch. In The philosophical writings of Descartes, Vol. I, 109-151. New York: Cambridge University Press.
  • Fodor, Jerry. 1980. "Searle on what only brains can do." Behavioral and Brain Sciences 3: 431-432.
  • Hauser, Larry. 1993. Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence. East Lansing, Michigan: Michigan State University (Doctoral Dissertation). URL = http://members.aol.com/wutsamada/disserta.html.
  • Hauser, Larry. 1997. "Searle's Chinese Box: Debunking the Chinese Room Argument." Minds and Machines, Volume 7, Number 2, pp. 199-226. URL = http://members.aol.com/lshauser/chiboxab.html.
  • Jackson, Frank. 1982. "Epiphenomenal qualia." Philosophical Quarterly 32:127-136.
  • Nagel, Thomas. 1974. "What is it like to be a bat?" Philosophical Review 83: 435-450.
  • Nagel, Thomas. 1986. The View from Nowhere. Oxford: Oxford University Press.
  • Schank, Roger C., and Robert P. Abelson. 1977. Scripts, Plans, Goals, and Understanding. Hillsdale, NJ: Lawrence Erlbaum Press.
  • Searle, John. 1980a. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3, 417-424.
  • Searle, John. 1980b. "Intrinsic Intentionality." Behavioral and Brain Sciences 3: 450-456.
  • Searle, John. 1984. Minds, Brains, and Science. Cambridge: Harvard University Press.
  • Searle, John. 1989. "Reply to Jacquette." Philosophy and Phenomenological Research XLIX: 701-708.
  • Searle, John. 1990. "Is the Brain's Mind a Computer Program?" Scientific American 262: 26-31.
  • Searle, John. 1992. The Rediscovery of the Mind. Cambridge, MA: MIT Press.
  • Turing, Alan. 1950. "Computing Machinery and Intelligence." Mind LIX: 433-460.
  • Weizenbaum, Joseph. 1976. Computer Power and Human Reason. San Francisco: W. H. Freeman.

Author Information

Larry Hauser
Email: hauser@alma.edu
Alma College
U. S. A.

This text deals with arguments against the possibility of so-called strong artificial intelligence, with a particular focus on the Chinese Room Argument devised by philosopher John Searle. We start with a description of the thesis that Searle wants to disprove. Then we describe Searle's arguments. Subsequently, we take a look at some objections to Searle by other influential philosophers. Finally, I conclude with my own objection, which introduces a more accurate definition of strong artificial intelligence to which Searle's arguments no longer apply. Along the way, we will dispose of some common misconceptions about artificial intelligence.

Searle’s Argument

The Semantic Argument

In his essay Can Computers Think? [11], Searle gives his own definition of strong artificial intelligence, which he subsequently tries to refute. His definition is as follows:

One could summarise this view […] by saying that the mind is to the brain, as the program is to the computer hardware.

Searle's first attempt at refuting the possibility of strong artificial intelligence is based on the insight that mental states have, by definition, a certain semantic content or meaning. Programs, on the other hand, are purely formal and syntactical, i.e., sequences of symbols that have no meaning in themselves. Therefore, a program cannot be equivalent to a mind. A formal reconstruction of this argument looks as follows:

  • Syntax is not sufficient for semantics
  • Programs are completely characterized by their formal, syntactical structure
  • Human minds have semantic contents
  • Therefore, programs are not sufficient for creating a mind

Searle emphasizes that his argument is based solely on the fact that programs are defined purely formally, regardless of which physical system is used to run them. The argument therefore does not merely claim that we are unable to create a strong artificial intelligence today; it claims that this is impossible in principle for any conceivable machine, regardless of how fast it is or which other properties it might have.

The Chinese Room Argument

In order to make his first premise more plausible ("Syntax is not sufficient for semantics"), Searle describes a thought experiment – the Chinese Room. Assume there is a program capable of answering questions posed in Chinese. No matter which question you pose in Chinese, it gives you an appropriate answer that a human Chinese speaker might also give. Searle now tries to argue that a computer running this program doesn't actually understand Chinese in the same sense as a Chinese human being understands Chinese.

To this end, he assumes that the formal instructions of the program are carried out by a person who does not understand Chinese. This person is locked in a room, and the Chinese questions are passed into the room as a sequence of symbols. The room contains baskets with many other Chinese symbols, along with a list of formal instructions: purely syntactical rules that tell the person how to produce an answer to the question by assembling the symbols from the baskets. The answers generated by these instructions are then passed out of the room by the person. The person is not aware that the symbols passed into the room are questions and that the symbols passed out of the room are answers to these questions. He simply carries out the instructions strictly and correctly. And these instructions generate meaningful Chinese sentences, answers that could not be distinguished from those a real Chinese-speaking person would give.
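
To make vivid how little the rule-follower needs to know, here is a deliberately toy sketch of this kind of purely syntactic rule-following. The rule book and the symbol strings are invented for illustration; a program that could actually pass as a Chinese speaker would be enormously more complex than a lookup table, but the point carries over: the operator matches shapes and copies out shapes, and meanings never enter into it.

```python
# Toy illustration of purely syntactic rule-following (invented rules, not a real program).
# The "operator" matches input shapes against rule patterns and hands back the listed
# answer shapes, without ever interpreting what any symbol means.

RULE_BOOK = {
    # "If you see these squiggles, pass back those squoggles."
    "你好吗": "我很好",
    "你会说中文吗": "会，我说中文",
}

def operator(input_symbols: str) -> str:
    """Apply the rule book by shape-matching alone; no meanings are consulted."""
    return RULE_BOOK.get(input_symbols, "对不起")  # default: hand back a fixed string

if __name__ == "__main__":
    print(operator("你好吗"))  # the room emits "我很好" without understanding either string
```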

Now Searle draws attention to the fact that the person in the room does not come to understand Chinese simply by following formal instructions for generating answers. He then argues that a computer running a program that generates Chinese answers to Chinese questions therefore also doesn't understand Chinese. Since this experiment can be generalized to arbitrary tasks, Searle concludes that computers are inherently incapable of understanding anything.

Replies to the Chinese Room Argument

There are numerous objections to the Chinese Room argument by various authors, many of them similar in nature. In the following, I will present the most common ones, including Searle's own answers to these objections.

The Systems Reply

One of the most commonly raised objections is that even though the person in the Chinese Room does not understand Chinese, the system as a whole does – the room with all its constituents, including the person. This objection is often called the Systems Reply, and there are various versions of it.

For example, artificial intelligence researcher, entrepreneur and author Ray Kurzweil says in [5] that the person is only an executive unit, whose properties are not to be confused with the properties of the system. If one looks at the room as an overall system, the fact that the person does not understand Chinese doesn't entail that the same holds for the room.

Cognitive scientist Margaret Boden argues in [1] that the human brain is not the carrier of intelligence, but rather that it causes intelligence. Analogously, the person in the room causes an understanding of Chinese to arise, even though he does not understand Chinese himself.

Searle responds to the Systems Reply with the semantic argument: even the system as a whole couldn't get from syntax to semantics and, hence, couldn't understand the meaning of the Chinese symbols. In [9], he adds that the person in the room could in principle memorize all the formal rules and perform all the computations in his head. Then, he argues, the person is the entire system; he could answer Chinese questions without help and perhaps even carry on conversations in Chinese, but he still wouldn't understand Chinese, since he only carries out formal rules and can't associate any meaning with the formal symbols.

The Virtual Mind Reply

Similar to the Systems Reply, the Virtual Mind Reply states that the person does not understand Chinese, but that the running system could create new entities that differ from both the person and the system as a whole. The understanding of Chinese could be such a new entity. This standpoint is argued for by artificial intelligence researcher Marvin Minsky in [15] and philosopher Tim Maudlin in [6]. Maudlin notes that Searle has so far not provided an adequate answer to this reply.

The Robot Reply

Another reply changes the thought experiment in such a way that the program is put into a robot that can perceive the world through sensors (like cameras or microphones) and interact with the world via effectors (like motors or loudspeakers). This causal interaction with the environment, the argument goes, guarantees that the robot understands Chinese, since the formal symbols are thereby endowed with semantics: they come to stand for objects in the real world. This view presupposes an externalist semantics. This reply is raised, for example, by Margaret Boden in [1].

Searle responds to this argument in [17] with the semantic argument: the robot still only has a computer as its brain and couldn't get from syntax to semantics. He makes this more plausible by adapting the thought experiment so that the Chinese Room itself is integrated into a robot as its central processing unit. The Chinese symbols would then be generated by sensors and passed into the room. Analogously, the symbols passed out of the room would control the effectors. Even though the robot interacts with the external world this way, the person in the room still doesn't understand the meaning of the symbols.

The Brain Simulator Reply

Some authors, e.g. philosophers Patricia and Paul Churchland in [2], suggest that instead of manipulating the Chinese symbols, the computer should simulate the neuronal firings in the brain of a Chinese speaker. Since the computer then operates in exactly the same way as a brain, the argument goes, it must understand Chinese.

Searle responds to this argument in [10]. He argues that one could also simulate the neuronal structures by a system of water pipes and valves and put it into the Chinese Room. The person in the room then has instructions on how to guide the water through the pipes in order to simulate the brain of a Chinese speaker. Still, he says, no understanding of Chinese is generated.
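
To make concrete what "simulating the neuronal firings" amounts to, here is a minimal, purely illustrative sketch of a leaky integrate-and-fire neuron update loop; the model choice and parameters are mine, not anything from Searle or the Churchlands. Whatever one makes of the dispute, each simulation step is just arithmetic on numbers standing in for membrane potentials, which is exactly the feature Searle's water-pipe variant trades on.

```python
# Minimal leaky integrate-and-fire simulation (illustrative model and parameters only).
# Each step is ordinary arithmetic on numbers that stand in for membrane potentials.

def simulate_neuron(input_current, dt=1.0, tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Return spike times (in ms) for a single leaky integrate-and-fire neuron."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting potential, plus injected current, over one time step.
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_thresh:          # a threshold crossing counts as a "firing"
            spikes.append(step * dt)
            v = v_reset            # reset after the spike
    return spikes

if __name__ == "__main__":
    print(simulate_neuron([2.0] * 100))  # constant drive produces a regular spike train
```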

The Emergence Reply

Now I present my own reply, which I call the Emergence Reply.

I grant that Searle's arguments prove that a mind cannot be equated with a computer program. This is immediately obvious from the semantic argument: since a mind has properties that a program does not have (namely semantic content), a program cannot be identical to a mind. Hence, the semantic argument refutes the possibility of strong artificial intelligence under Searle's own definition.

However, one can phrase another definition of strong artificial intelligence which, as I will argue, is not affected by Searle’s arguments:

A system exhibits strong artificial intelligence if it can create a mind as an emergent phenomenon by running a program.

I explicitly include any type of system, regardless of the material from which it is made – be it a computer, a Chinese Room, or a gigantic hall of falling dominoes or beer cans that simulates a Turing machine (see the sketch below).
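
The point about exotic hardware is just multiple realizability: anything that can implement a given transition table implements the same program. The following minimal Turing-machine interpreter illustrates this; the example machine (a two-state unary incrementer) is invented purely for brevity, and nothing in the definition cares whether the table is realized in silicon, water pipes, or falling beer cans.

```python
# Minimal Turing machine interpreter; the machine itself (a toy unary incrementer)
# is invented for illustration. Nothing here depends on what physically realizes
# the tape or the transition table.

def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    """Run a Turing machine given as {(state, symbol): (write, move, next_state)}."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Toy machine: scan right over a block of 1s and append one more 1.
INCREMENT = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

if __name__ == "__main__":
    print(run_tm(INCREMENT, "111"))  # -> "1111"
```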

I will not try to argue for the possibility of strong artificial intelligence according to this definition. It is doubtful whether this is even possible. However, I will argue why this definition is not affected by Searle’s arguments.

Non-Applicability of the Semantic Argument

My proposed definition demands no analogy between the program and the mind created by running it. Therefore, the semantic argument no longer applies: even though a program, as a syntactic construct, has no semantic content of its own (and therefore cannot be identical to a mind), it does not follow that a program cannot give rise to semantic content in the course of its execution.

Moreover, this definition does not claim that the computer hardware is the carrier of the mental processes; the hardware is not thereby enabled to think. Rather, the computer creates the mental processes as an emergent phenomenon, similarly to how the brain creates mental processes as an emergent phenomenon. So, if one considers the question in the title of Searle's original essay, "Can Computers Think?", the answer would be "No, but they might create thinking."

How a mind can be created through the execution of a program, and what sort of ontological existence this mind would have, is a discussion topic of its own. In order to make the idea more plausible, imagine a program that exactly simulates the trajectories and interactions of the elementary particles in the brain of a Chinese speaker. This way, the program not only produces the same outputs for the same inputs as the Chinese speaker's brain, but proceeds completely analogously. There is no immediate way to exclude the possibility that the simulated brain creates a mind in exactly the same way a real brain does. The only assumption here is that the physical processes in a brain are deterministic. There are some theories claiming that a mind requires non-deterministic quantum phenomena that can't be simulated algorithmically. One such theory is presented by physicist Sir Roger Penrose in [7], who has founded the Penrose Institute to explore this possibility. If such theories turn out to be true, that would be a strong argument against the possibility of strong artificial intelligence.

Non-Applicability of the Chinese Room Argument

As regards the Chinese Room Argument, it convincingly shows that the fact that a system gives the impression of understanding something doesn't entail that it really understands it. Not every program that the person in the Chinese Room could execute in order to converse in Chinese does in fact create understanding. This is an important insight that refutes some common misconceptions, such as the belief that IBM's Deep Blue understands chess in the same way a human does, or that Apple's Siri understands spoken language. Deep Blue just calculates the payoff of certain moves, and Siri just transcribes one sequence of numbers into another (albeit in a sophisticated way). This definitely doesn't create understanding or a mind.

Moreover, the Chinese Room Argument shows that the Turing Test is no reliable indicator of strong artificial intelligence. In this test, described by Alan Turing in [12], a human subject converses with an unknown entity and must decide, solely on the basis of the answers the entity gives, whether it is another human or a computer. If the computer repeatedly manages to trick the subject, we call it intelligent. This test only measures how good a computer is at giving the impression of being intelligent; it places no restrictions on how the computer achieves this internally, which, as argued already, is an important factor in determining whether a computer really exhibits strong artificial intelligence.
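
To make explicit why the test measures only the impression of intelligence, here is a minimal sketch of the blind-interview protocol as Turing describes it; the interviewer and the two respondents below are trivial stand-ins introduced purely for illustration. Everything the interviewer ever sees is the transcript, so nothing about how the answers were produced can enter the verdict.

```python
import random

# Sketch of Turing's blind-interview protocol. The interviewer and respondents
# used in the demo are hypothetical stand-ins, introduced only for illustration.

def imitation_game(ask, guess_machine, human, machine, rounds=3):
    """Return True if the interviewer correctly unmasks the machine."""
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:                      # hide which respondent is which
        labels = {"A": machine, "B": human}

    transcript = {"A": [], "B": []}
    for _ in range(rounds):
        for label, respondent in labels.items():
            question = ask(label, transcript[label])
            transcript[label].append((question, respondent(question)))

    # The verdict rests on the transcript alone: pure input/output behavior.
    return labels[guess_machine(transcript)] is machine

if __name__ == "__main__":
    human = lambda q: "I had toast for breakfast."
    machine = lambda q: "I had toast for breakfast."   # indistinguishable by design
    ask = lambda label, history: "What did you have for breakfast?"
    guess = lambda transcript: random.choice(["A", "B"])  # interviewer can only guess
    print(imitation_game(ask, guess, human, machine))     # True about half the time
```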

Additionally, Searle's argument shows that it is not the hardware itself that understands Chinese. Even if hardware running a program creates a mind that understands Chinese, the person in the Chinese Room plays the role of the hardware and doesn't understand Chinese.

It does not, however, refute the possibility that the hardware can create a mind that understands Chinese by executing the program. Assume there is a program that answers Chinese questions and creates mental processes that exhibit an understanding of the Chinese questions and answers. This assumption cannot be refuted by the Chinese Room Argument. If we let the person in the room execute the program with pen and paper, it is correct that the person doesn't understand Chinese. But the person is only the hardware in this case. His mind is not the mind that is created by the execution of the program.

It might seem intuitively implausible that arithmetical operations carried out with pen and paper could give rise to a mind. But this can be made more plausible by assuming, as before, that the neuronal processes in the brain are simulated in the form of these arithmetical operations. The intuition that a mind could not arise in such a way may simply be false; there is no immediately obvious logical reason to exclude the possibility. The same holds for Searle's system of water pipes, the beer-can dominoes, or other unorthodox hardware: if one grants that computer hardware can create a mind, one must grant that this is also possible for other, more exotic mechanical systems.

Whether it is indeed possible to create a mind by the execution of a program is still an open question. Maybe Roger Penrose turns out to be right that consciousness is a natural phenomenon that can’t be created by the deterministic interaction of particles. Are organisms really just algorithms? How can the parallel firing of tens of billions of neurons give rise to consciousness and a mind? As of now, neuroscience has not the slightest idea. However, I would say with some certainty that this question cannot be answered by thought experiments alone.


References

[1] Boden, Margaret A.: Escaping from the Chinese Room. University of Sussex, School of Cognitive Sciences, 1987.

[2] Churchland, Paul M. and Patricia Smith Churchland: Could a Machine Think? Machine Intelligence: Perspectives on the Computational Model, 1:102, 2012.

[3] Cole, David: The Chinese Room Argument. In: Zalta, Edward N. (ed.): The Stanford Encyclopedia of Philosophy, Summer 2013. http://plato.stanford.edu/archives/sum2013/entries/chinese-room/.

[4] Dennett, Daniel C.: Fast thinking. 1987.

[5] Kurzweil, Ray: Locked in his Chinese Room. Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI, 2002.

[6] Maudlin, Tim: Computation and consciousness. The Journal of Philosophy, pp. 407–432, 1989.

[7] Penrose, Roger: The Emperor's New Mind. Vintage, London, 1990.

[8] Russell, Stuart Jonathan et al.: Artificial Intelligence: A Modern Approach. Prentice Hall, Englewood Cliffs, 1995.

[9] Searle, John: The Chinese Room Argument. Encyclopedia of Cognitive Science, 2001.

[10] Searle, John R.: Minds, brains, and programs. Behavioral and Brain Sciences, 3(3):417–424, 1980.

[11] Searle, John R.: Minds, brains, and science. Harvard University Press, 1984.

[12] Turing, Alan M.: Computing machinery and intelligence. Mind, pp. 433–460, 1950.
