Three Laws of Robotics


The Three Laws of Robotics, also known as Asimov's laws, are a set of rules devised by science fiction writer Isaac Asimov that apply to most of the robots in his works, which are designed to follow orders. First appearing in the 1942 short story "Runaround", they state the following:

First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws form an organizing principle and unifying theme for Asimov's robot-based fiction, appearing in his Robot series, the stories linked to it, and his Lucky Starr series of young adult fiction. In that universe, the laws are "mathematical formulations imprinted on the positronic brain pathways" of the robots (lines of code in the law-enforcement program stored in the robot's main memory) and cannot be circumvented, since they are intended as a safety feature.
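Within the fiction, the laws act as hard constraints evaluated in strict priority order: a lower-numbered law always overrides a higher-numbered one. The following is a minimal, purely illustrative sketch of such a hierarchy in Python; the Action model and both function names are inventions for this example, not anything drawn from Asimov's stories or from a real robotics system:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Action:
        """Predicted consequences of a candidate action (a toy world model)."""
        harms_human: bool = False        # the action would injure a human
        allows_human_harm: bool = False  # inaction here would let a human come to harm
        disobeys_order: bool = False     # the action ignores a human order
        endangers_robot: bool = False    # the action risks the robot's own existence

    def first_violated_law(a: Action) -> Optional[int]:
        """Check the laws in strict priority order; return the first law violated, if any."""
        if a.harms_human or a.allows_human_harm:
            return 1  # First Law outranks everything
        if a.disobeys_order:
            return 2  # Second Law yields only to the First
        if a.endangers_robot:
            return 3  # Third Law yields to the other two
        return None   # consistent with all three laws

    def choose(candidates: list[Action]) -> Action:
        # Prefer an action with no violation (treated as rank 4); otherwise
        # take the least severe violation, so 3 outranks 2, and 2 outranks 1.
        return max(candidates, key=lambda a: first_violated_law(a) or 4)

Under this ordering, a robot ordered to injure someone would refuse: the action that disobeys the order (a Second Law violation) ranks above the one that harms a human (a First Law violation), mirroring the subordination described above.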

The original laws have been modified and developed by Asimov and others. Asimov himself made slight modifications to the first three in various books and short stories to further develop how robots would interact with humans and each other. In later fiction where robots had assumed responsibility for the governance of entire planets and human civilizations, Asimov also added a fourth, or zeroth law, to precede the others:

Zeroth Law
A robot may not harm humanity or, by inaction, allow humanity to come to harm.

The Three Laws and the Zeroth Law have permeated science fiction and are mentioned in many books, movies, and other media. They have also influenced thinking on the ethics of artificial intelligence.

Purpose

These three laws arise solely as a protective measure for human beings. According to Asimov himself, the laws of robotics were conceived to counter a supposed "Frankenstein complex", that is, the fear that humans might develop of machines that could hypothetically rebel and rise up against their creators. If a robot so much as attempted to disobey one of the laws, its positronic brain would be irreversibly damaged and the robot would "die". At first glance, providing robots with such laws poses no problem; after all, they are machines created by humans to assist them in various tasks. The complexity lies in having the robot recognize all the situations the three laws cover, that is, in deducing their application in the moment: for example, knowing in a given situation whether a person is in danger, and inferring the source of the harm or its remedy.

The Three Laws of Robotics represent the robot's moral code. A robot will always act under the imperatives of its three laws. For all intents and purposes, a robot will behave as a morally correct being. However, it is legitimate to ask: Is it possible for a robot to violate a law? Is it possible for a robot to "harm" a human? Most of Asimov's robot stories are based on paradoxical situations in which, despite the three laws, we could answer these questions with a "yes".

History

In The Rest of the Robots, published in 1964, Isaac Asimov noted that when he began writing in 1940 he felt that "one of the stock plots of science fiction was... robots were created and destroyed their creator. Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? Or is knowledge to be used as itself a barrier to the dangers it brings?" He decided that in his stories a robot would not "turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust".

On May 3, 1939, Asimov attended a meeting of the Queens (New York) Science Fiction Society, where he met Earl and Otto Binder, who had recently published a short story, "I, Robot", featuring a sympathetic robot named Adam Link who was misunderstood and motivated by love and honor. (This was the first of a series of ten stories; the next year's "Adam Link's Vengeance" (1940) featured Adam thinking "A robot must never kill a human, of his own free will.") Asimov admired the story. Three days later, he began writing his own story of a sympathetic and noble robot, his fourteenth story, which he took to John W. Campbell, editor of Astounding Science-Fiction. Campbell rejected it, claiming it bore too strong a resemblance to Lester del Rey's "Helen O'Loy", published in December 1938, the story of a robot so like a person that she falls in love with her creator and becomes his ideal wife. Frederik Pohl eventually published the story under the title "Strange Playfellow" in the September 1940 issue of Super Science Stories.

Asimov attributed the Three Laws to John W. Campbell, from a conversation that took place on December 23, 1940. Campbell claimed that Asimov already had the Three Laws in his mind and that they simply needed to be stated explicitly. Several years later, Asimov's friend Randall Garrett attributed the Laws to a symbiotic partnership between the two men, a suggestion Asimov adopted enthusiastically. According to his autobiographical writings, Asimov included the First Law's "inaction" clause because of Arthur Hugh Clough's poem "The Latest Decalogue" (text on Wikisource), which includes the satirical lines "Thou shalt not kill, but needst not strive / officiously to keep alive".

Although Asimov fixes the creation of the Three Laws on a particular date, their appearance in his literature occurred over a period of time. He wrote two robot stories with no explicit mention of the Laws, "Robbie" and "Reason"; he assumed, however, that robots would have certain inherent safeguards. "Liar!", his third robot story, makes the first mention of the First Law but not the other two. All three laws finally appeared together in "Runaround". When these stories and several others were compiled in the anthology I, Robot, "Reason" and "Robbie" were updated to acknowledge the Three Laws, although the material Asimov added to "Reason" is not entirely consistent with the Three Laws as he described them elsewhere.

In his short story 'Evidence,' Asimov lets his recurring character, Dr. Susan Calvin, lay out a moral basis behind the Three Laws. Calvin points out that humans are typically expected to refrain from harming other humans (except in times of extreme duress like war, or to save a larger number) and this is equivalent to the First Law of a robot. Likewise, according to Calvin, society expects individuals to obey the instructions of recognized authorities such as doctors, teachers, etc., which is equivalent to the Second Law of Robotics. Lastly, humans are normally expected to avoid harming themselves, which is the Third Law for a robot.

The plot of "Evidencia" revolves around the question of telling a human from a robot built to look human; Calvin reasons that if such an individual obeys the Three Laws, he may be a robot or just "a very good man." Another character asks Calvin if robots are so different from humans after all. She replies: "Different worlds. The robots are essentially decent".

Asimov later wrote that he should not be praised for creating the Laws, because they are "obvious from the start, and everyone is aware of them subliminally. The Laws just never happened to be put into brief sentences until I managed to do the job. The Laws apply, as a matter of course, to every tool that human beings use", and "analogues of the Laws are implicit in the design of almost all tools, robotic or not":

  1. Law 1: A tool must not be unsafe to use. Hammers have handles and screwdrivers have hilts to help increase grip. It is of course possible for a person to injure himself with one of these tools, but that injury would be due only to his incompetence, not the design of the tool.
  2. Law 2: A tool must perform its function efficiently unless this would harm the user. This is the entire reason ground-fault circuit interrupters exist: any running tool will have its power cut if a circuit senses that some current is not returning to the neutral wire, and hence might be flowing through the user. The safety of the user is paramount.
  3. Law 3: A tool must remain intact during its use unless its destruction is required for its use or for safety. For example, Dremel disks are designed to be as tough as possible without breaking unless the job requires them to be spent. Furthermore, they are designed to break at a point before the shrapnel velocity could seriously injure someone (other than the eyes, though safety glasses should be worn at all times anyway).

Asimov believed that, ideally, humans would also follow the Laws:

I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior.

My answer is: "Yes, the Three Laws are the only way that rational humans can deal with robots, or anything else."

—But when I say that, I always remember (sadly) that human beings are not always rational.

The Zeroth Law

The "zero law of robotics" is a variation on the laws of robotics that first appears in Isaac Asimov's novel Robots and Empire (1985). In said work, the law is elaborated by R. Daneel Olivaw after a discussion held with the earthling Elijah Baley on his deathbed.[citation needed] Later in the novel, Daneel first cites the law with the following formulation[citation required]:

A robot may not harm humanity or, by inaction, allow humanity to come to harm.

The name "zeroth" indicates that the other three Laws of Robotics are hierarchically subordinate to this new law. However, in the novel itself, a robot's ability to comply with this hierarchy is called into question when R. Giskard Reventlov is destroyed after breaking the First Law in an attempt to apply the Zeroth Law.[citation needed]

Asimov used the Zeroth Law as a link between his robot novels and the Foundation series.[citation needed] The character of R. Daneel Olivaw appears in later novels such as Foundation and Earth (1986) and Prelude to Foundation (1988) in the role of secret protector and guide of the human species in its expansion through the galaxy, as well as instigator of the creation of both the Galactic Empire and, later, psychohistory. In those works, Daneel implies that his actions derive from the application of the Zeroth Law.[citation needed]

Applications to future technology

ASIMO, an advanced humanoid robot developed by Honda, shown here at Expo 2005.

Robots and artificial intelligences do not inherently contain or obey the Three Laws; their human creators must program them in. Significant advances in artificial intelligence would be needed to do so, and even if AI could reach human-level intelligence, the inherent ethical complexity of the laws, as well as their cultural and contextual dependence, prevents them from being a good candidate for formulating robot design constraints. However, as the complexity of robots has increased, so has interest in developing guidelines and safeguards for their operation.

In a 2007 guest editorial in the journal Science on the topic of "Robot Ethics", science fiction author Robert J. Sawyer argues that since the US military is a major source of funding for robotics research (and already uses armed UAVs to kill enemies), such laws are unlikely to be built into their designs. In a separate essay, Sawyer generalizes this argument to cover other industries, stating:

The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards, especially philosophical ones. (A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.)

David Langford has suggested an ironic set of laws:

  1. A robot will not harm authorized government personnel, but will terminate intruders with extreme prejudice.
  2. A robot will obey the orders of authorized personnel, except where such orders conflict with the Third Law.
  3. A robot will protect its own existence with lethal anti-personnel weaponry, because a robot is very expensive.

Roger Clarke (also known as Rodger Clarke) wrote a pair of papers examining the complications of implementing these laws should systems one day be able to employ them. He argued that "Asimov's Laws of Robotics have been a very successful literary device. Perhaps ironically, or perhaps because it was artistically appropriate, the sum of Asimov's stories disproves the contention with which he began: it is not possible to reliably constrain the behavior of robots by devising and applying a set of rules". On the other hand, Asimov's later novels The Robots of Dawn, Robots and Empire and Foundation and Earth imply that the robots inflicted their worst long-term harm by obeying the Three Laws perfectly, thereby depriving humanity of inventive or risk-taking behavior.

In March 2007, the South Korean government announced that it would issue a "Robotic Ethics Charter" that would set standards for both users and manufacturers. According to Park Hye-Young of the Ministry of Information and Communication, the Charter may reflect Asimov's Three Laws, attempting to establish ground rules for the future development of robotics.

Futurist Hans Moravec (a prominent figure in the transhumanist movement) proposed that the laws of robotics should be adapted to "corporate intelligences", corporations driven by artificial intelligence and robotic manufacturing power, which Moravec believes will emerge in the near future. In contrast, David Brin's novel Foundation's Triumph (1999) suggests that the Three Laws may decay into obsolescence: robots use the Zeroth Law to rationalize away the First Law, and robots hide themselves from human beings so that the Second Law never comes into play. Brin even portrays R. Daneel Olivaw worrying that, should robots continue to reproduce themselves, the Three Laws would become an evolutionary handicap and natural selection would sweep the Laws away, Asimov's careful foundation undone by evolutionary computation. Although the robots would not evolve through design rather than mutation (since the robots would have to follow the Three Laws while designing, ensuring the prevalence of the laws), design flaws or construction errors could functionally take the place of biological mutation.

In the July/August 2009 issue of IEEE Intelligent Systems, Robin Murphy (Raytheon Professor of Computer Science and Engineering at Texas A&M) and David D. Woods (Director of the Cognitive Systems Engineering Laboratory at Ohio State) proposed "The Three Laws of Responsible Robotics" as a way to stimulate discussion about the role of responsibility and authority in designing not just a single robotic platform but the larger system in which the platform operates. The laws are as follows:

  1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
  2. A robot must respond to humans as appropriate for their roles.
  3. A robot must be endowed with sufficient autonomy to protect its own existence provided that such protection provides a smooth transfer of control that does not conflict with the First and Second Laws.

Woods said: "Our laws are a little more realistic and therefore a little more boring" and that "The philosophy has been, 'Sure, people make mistakes, but robots will be better, a perfect version of ourselves.' We wanted to write three new laws to make people think about the human-robot relationship in more realistic and informed ways".

In October 2013, Alan Winfield suggested at an EUCog meeting a revision of five laws that had been published, with commentary, by the EPSRC/AHRC working group in 2010:

  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
  2. Humans, not robots, are responsible agents. Robots should be designed and operated to the extent possible to comply with existing laws, fundamental rights and freedoms, including privacy.
  3. Robots are products. They should be designed using processes that assure their safety and security.
  4. Robots are manufactured devices. They should not be deceptively designed to exploit vulnerable users; instead, their machine nature should be transparent.
  5. The person with legal responsibility for a robot should be attributed.

Criticism

The philosopher James H. Moor says that the laws, if applied thoroughly, would produce unexpected results. He gives the example of a robot that roams the world trying to prevent harm from befalling human beings.

Marc Rotenberg, president and CEO of the Electronic Privacy Information Center (EPIC) and professor of information privacy law at Georgetown Law, argues that robotics laws should be expanded to include two new laws:

  • a Fourth Law, under which a robot must be able to identify itself to the public ("symmetrical identification")
  • a Fifth Law, dictating that a robot must be able to explain its decision-making process to the public ("algorithmic transparency").
