Chinese room

The Chinese room is a thought experiment originally proposed by John Searle in his paper "Minds, Brains, and Programs," published in Behavioral and Brain Sciences in 1980. With it, Searle tries to refute the validity of the Turing test and the belief that thought is simply computation.

Searle confronts the analogy between mind and computer when addressing the question of consciousness. The mind involves not only the manipulation of symbols (grammar or syntax); it also possesses a semantic capacity: awareness of the meanings of those symbols.

Description

Searle and strong artificial intelligence

In 1958, when Herbert Simon and Allen Newell wrote that "there are now machines that read and learn and can create", they were trying to imply that a solution had been found to the mind-body problem.

But Searle, in his text Minds, Brains and Science (1984), attacks this view, and with the Chinese room experiment he shows how a machine can perform an action without understanding what it is doing or why it does it. Therefore, according to Searle, the logic used by computers merely manipulates symbols without seeking content in the action, unlike the reasoning of human beings.

The Chinese Room Experiment

Suppose that many years have passed and that human beings have built a machine apparently capable of understanding Chinese. The machine receives certain input from a native speaker of that language (the signs fed into the computer) and later provides a response at its output. Suppose in turn that this computer easily passes the Turing test: it convinces the Chinese speaker that it fully understands the language, and the speaker will therefore say that the computer understands Chinese.

Now Searle asks us to suppose that he is inside that computer, completely isolated from the outside except for some kind of device (a slot for sheets of paper, for example) through which texts written in Chinese can enter and exit.

Suppose also that outside the room, or computer, is the same Chinese speaker who thought the computer understood his language, and inside the room is Searle, who does not know a single word of Chinese but is equipped with a set of manuals and dictionaries that state the rules for relating Chinese characters (something like "If such-and-such characters come in, write such-and-such characters").

In this way Searle, by manipulating these texts, is able to respond to any text in Chinese that is passed to him, since he has the manual with the rules of the language, and can thus make an external observer believe that he does understand Chinese, even though he has never spoken or read that language.
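The rule manual described above is purely syntactic: it pairs input character strings with output character strings, with no access to what any of them mean. A minimal sketch in Python (the symbol strings, the `RULE_BOOK` table, and the `room_reply` function are invented here for illustration):

```python
# A caricature of the room's rule manual: a lookup table that maps
# input symbol strings to output symbol strings. Nothing in the code
# interprets the symbols; it only matches character strings.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room_reply(symbols: str) -> str:
    """Return the output symbols the manual pairs with the input.

    Like Searle in the room, this function follows the rules without
    understanding a single character it handles.
    """
    # Unrecognized input gets a stock reply: "Please say that again."
    return RULE_BOOK.get(symbols, "请再说一遍。")

print(room_reply("你好吗？"))  # prints: 我很好，谢谢。
```

To an outside observer receiving only the replies, the table is indistinguishable from a speaker; that gap between correct output and understanding is exactly what the experiment exploits.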

Given this situation, it is worth asking:

  • How can Searle respond if he does not understand the Chinese language?
  • Do the manuals know Chinese?
  • Can the whole system of the room (dictionaries, Searle and his answers) be considered a system that understands Chinese?

According to the creators of the experiment, proponents of strong artificial intelligence (those who claim that suitable computer programs can understand natural language or possess other properties of the human mind, not simply simulate them) must admit either that the room understands Chinese or that passing the Turing test is not sufficient proof of intelligence. For the creators of the experiment, none of the components of the experiment understands Chinese, and therefore, even if the set of components passes the test, passing it does not confirm real understanding, since, as we know, Searle does not know the language.

The Chinese room argument

This is so in the context of the following argumentation:

  1. If strong artificial intelligence is true, then there is a program for Chinese such that any mechanism that executes it understands Chinese.
  2. A person can mechanically execute a program for Chinese without understanding Chinese.
  3. Therefore, the claims of strong artificial intelligence are false, because the system does not really understand Chinese; it merely simulates understanding it.

An important point: Searle does not deny that machines can think (the brain is a machine, and it thinks); he denies that, in thinking, they apply a program.

Criticism of the Chinese room experiment

Arguments such as Searle's in the philosophy of mind sparked an intense debate about the nature of intelligence, the possibility of intelligent machines, and the value of the Turing test that continued through the 1980s and 1990s. In the opinion of its defenders, the Chinese room thought experiment confirms premise 2. In the opinion of its detractors, the inference from the thought experiment to premise 2 is not conclusive. Objections usually follow one of three lines:

  • Although the inhabitant of the room does not understand Chinese, it is possible that the wider system formed by the room, the manuals and the inhabitant understands Chinese.
  • A person manipulating written signs inside a static room is not the only possible candidate for the position of a computer system capable of understanding Chinese. There are other possible models:
  1. a robot capable of interacting with the environment and learning from it.
  2. a program that regulates the neuronal processes involved in the understanding of Chinese, etc.

There is no reason to say that these models exhibit only apparent understanding, as in the case of the room and its inhabitant; rather, they are models of artificial intelligence.

  • If the behavior of the room is not sufficient evidence that it understands Chinese, then neither can the behavior of any person be. The argument relies on a rhetorical move that distorts the concept of understanding: it is absurd to say that the machine cannot "refer to a man in Chinese" because it "does not really understand Chinese," where understanding requires "being able to form the idea from the conceptual form"; such a theory of ideas may be more "idealist" than scientific. In any case, a human mind rests on algorithms more complex and more costly to process than those current computers possess. It is a matter of time.
  • The acquisition of understanding (what we may call intelligence) is an evolutionary process, in which different techniques and tools can be applied to bring the individual to learning (in this case, of the Chinese language), so it is necessary to establish levels of understanding over time.

The "Harry" Thought Experiment

The physicalist philosopher William Lycan acknowledged the advancement of artificial intelligences, which are beginning to behave as if they had minds. Lycan uses the thought experiment of a humanoid robot named Harry who can converse, play golf, play the viola, write poetry, and thereby manage to pass among people as a person with a mind. If Harry were human, it would be perfectly natural to think that he has thoughts or feelings, which suggests that Harry can actually have thoughts or feelings even if he is a robot. For Lycan, "there is no problem with or objection to qualitative experience in machines that is not equally a dilemma for such experience in humans".

Responses to the arguments against the Chinese room

  • Although it is true that the system handles Chinese, it does not understand the language; the person in the room does not understand what he is doing, he merely does it. Therefore, if a computer handles a language, that does not mean it understands what it is doing; it means only that it is performing an action.
  • In turn, the person who handles the Chinese characters should not be understood simply as a person; what the example tries to show is that computers do not understand what they handle and merely follow certain rules without having to understand them.
  • The last objection contains many contradictions; for example, it is known that those who are reading these words do understand the language. The computer, by contrast, manages only syntactic information (that is, it merely performs an action, without content of its own), whereas humans handle semantic information (with content, which ensures that we understand what we do, since we attach content to what we do).
  • That a system can respond to actions with an action does not mean that it has intelligence in itself; everything it does in responding is unintelligible to it. It simply responds to actions with an action by means of inferences, creating relationships and groupings in order to reach a conclusion that is sufficient, though possibly wrong. It is so far impossible to prove the opposite, because there is no way to translate the biological into a machine, and the biological is far more complex than the artificial: the artificial, being created, is understandable in its entirety, but certain parts of the biological cannot yet be interpreted, such as the intelligence of living beings. The human being has abilities that can never be imitated by a machine, simply by being biological, natural.
