John Searle presents the argument that merely manipulating the syntax of human language is not enough to show that a computer is capable of thinking. It is a rebuttal to the Strong AI hypothesis, which holds that a suitably programmed computer can have a mind and consciousness.
The Argument
‘Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore … that I know no Chinese, either written or spoken … However, the rules are in English … They enable me to correlate one set of formal symbols with another set of formal symbols’ (i.e. the Chinese characters).
These rules allow him to produce answers, in written Chinese, to questions also written in Chinese, so convincingly that the questioners, who do understand Chinese, believe that Searle understands the conversation too, even though he does not.
Searle claims that if a strong AI were in that room, it too could not understand Chinese.
The main argument is that, without ‘understanding’ of the task at hand, it is impossible to conclude that the machine is ‘thinking’.
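The room is, in effect, a lookup procedure over uninterpreted symbols. A minimal Python sketch of the idea (the rule table and the phrases in it are invented for illustration; a real rulebook would be vastly larger):

```python
# A minimal "Chinese Room" sketch: replies are produced by pure
# symbol matching, with no representation of meaning anywhere.
# The rulebook below is a made-up toy example, not real Chinese usage.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # matched purely as character strings
    "天气怎么样？": "今天天气很好。",
}

def chinese_room(question: str) -> str:
    """Correlate one formal symbol string with another, as the rules dictate.

    The function never interprets the symbols; it only matches shapes.
    """
    return RULEBOOK.get(question, "对不起，我不明白。")  # default reply, equally uninterpreted

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # prints a fluent-looking reply
```

Nothing in the program represents what any character means; it matches and emits shapes, which is exactly the position Searle says he is in.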
Searle’s View
- Thinking involves more than syntactic information processing.
- Syntax refers to the properties of a language as uninterpreted symbols.
- Computers do not have semantic understanding.
- Semantics refers to the properties of a language as interpreted and understood.
- Thinking involves more than information processing.
- Information processing on its own is syntactic, not semantic.
Searle’s Axioms
(A1) Programs are formal (syntactic).
(A2) Minds have mental contents (semantics).
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.
(A4) Brains cause minds.
(C1) Programs are neither constitutive of nor sufficient for minds.
(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.
(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.
(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.
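One way to see how the first conclusion follows: a sketch of A1–A3 and C1 in predicate form. The predicate names, and the reading of A2 as ‘whatever suffices for a mind must suffice for semantics’, are my paraphrase, not Searle’s own notation:

```latex
% Requires amsmath (align*) and amssymb (\therefore).
% Prog(p): p is a program; Syn(x): x is purely syntactic;
% SemSuff(x): x suffices for semantics; MindSuff(x): x suffices for a mind.
\begin{align*}
\text{A1:}\quad & \forall p \,\bigl(\mathit{Prog}(p) \to \mathit{Syn}(p)\bigr) \\
\text{A3:}\quad & \forall x \,\bigl(\mathit{Syn}(x) \to \lnot\mathit{SemSuff}(x)\bigr) \\
\text{A2:}\quad & \forall x \,\bigl(\mathit{MindSuff}(x) \to \mathit{SemSuff}(x)\bigr) \\
\text{C1:}\quad & \therefore\; \forall p \,\bigl(\mathit{Prog}(p) \to \lnot\mathit{MindSuff}(p)\bigr)
\end{align*}
```

From A1 and A3, no program suffices for semantics; combined with A2, no program suffices for a mind, which is C1.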
Objections and Replies
System Argument
A Chinese speaker, as a whole, understands Chinese. It is not just a part of his brain that understands; his entire being does. Therefore he, as a whole, is a system that understands Chinese.
When Searle is in the room, he may not understand Chinese, but the room viewed as a whole does.
Searle’s Reply: Searle responds by simplifying the system: he asks what happens if the man memorizes the rules and keeps track of everything in his head. Then the whole system consists of just one object, the man himself. Searle argues that if the man does not understand Chinese, then the system does not understand Chinese either, ‘because now “the system” and “the man” both describe exactly the same object’ (Wikipedia, 2023).
The rejoinder: even though the man himself does not understand Chinese, the system does. Why? Because, likewise, there is no specific part of a Chinese speaker that understands Chinese; the speaker understands only as a whole system.
Robot Argument
The robot argument holds that every word derives its meaning from somewhere: a Chinese speaker knows that 火 means fire, while an English speaker with no Chinese does not. If the entire room were placed inside a robot equipped with sensors, the room could interact with the environment and so attach meaning to the symbols it is given.
Searle’s Reply: Suppose that some of the inputs came directly from a camera mounted on the robot, and some of the outputs were used to manipulate its arms and legs. Nevertheless, the person in the room is still just following the rules and does not know what the symbols mean; as Searle writes, “he doesn’t see what comes into the robot’s eyes” (Wikipedia, 2023).
Brain Simulator Argument
The claim: a computer that simulates not only the linguistic behaviour of a Chinese speaker but also the actual brain structure of a Chinese speaker, down to the level of neuron firings, would really understand Chinese.
Searle’s Reply: Searle replies that such a simulation does not reproduce the important features of the brain: its causal and intentional states. He is adamant that ‘human mental phenomena are dependent on actual physical/chemical properties of actual human brains’ (Wikipedia, 2023).
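For concreteness, ‘simulating neuron firings’ means number-crunching of roughly the following kind. The toy leaky integrate-and-fire model below is purely illustrative, with made-up parameters:

```python
# Toy leaky integrate-and-fire neuron: an illustration of what
# "simulating the brain at the level of neuron firings" amounts to.
# Parameters are illustrative, not biologically calibrated.

def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Accumulate leaky input current; emit a spike (1) when the
    potential crosses the threshold, then reset to zero."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

print(simulate_neuron([0.3, 0.4, 0.5, 0.1, 0.9]))  # -> [0, 0, 1, 0, 0]
```

On Searle’s view, such a simulation manipulates formal quantities; it does not reproduce the brain’s actual causal properties.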
Other Minds Argument
One is justified in attributing intelligence to an individual on the basis of purely behavioural criteria. We do not truly know whether or not other humans are thinking, but we infer this from the fact that their behaviour is like ours and exhibits thinking-like qualities. We should therefore apply the same criteria to computers.
Searle’s Reply: We do not attribute thinking to other humans simply on the basis of behavioural evidence; the attribution also rests on our knowledge of their biological constitution. We do not understand how nature brings consciousness into existence, but it seems unlikely that building a machine will replicate whatever occurs during gestation to create consciousness, which is (arguably) necessary for understanding.