Robotics laws


The laws of robotics are a set of laws, rules, or principles intended as a fundamental framework to govern the behavior of robots designed to have some degree of autonomy. Robots of this degree of complexity do not yet exist, but they have been widely anticipated in science fiction and film, and they are a subject of active research and development in the fields of robotics and artificial intelligence.

"Three Laws of Robotics" by Isaac Asimov

The best-known set of laws is the "Three Laws of Robotics" by Isaac Asimov. These were introduced in his 1942 short story 'Runaround', though they were foreshadowed in some earlier stories. The three laws are:

  1. A robot may not harm a human being or, by inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In "The Evitable Conflict", the machines generalize the First Law to mean:

  1. "No machine can harm mankind; or, by inaction, allow mankind to suffer harm."

This was refined further at the end of Foundation and Earth, where a zeroth law was introduced, with the original three suitably rewritten as subservient to it:

0. A robot may not harm humanity or, by inaction, allow humanity to come to harm.

There are adaptations and extensions based on this framework. As of 2011, they remained a fictional device.

EPSRC / AHRC Principles of Robotics

In 2011, the UK's Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) jointly published a set of five "ethical principles for designers, builders and users of robots" in the real world, along with seven "high-level messages" intended to be conveyed, based on a September 2010 research workshop:

  1. Robots should not be designed solely or primarily to kill or harm human beings.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that ensure their safety and security.
  4. Robots are devices; they should not be designed to exploit vulnerable users by causing an emotional response or dependency. It should always be possible to distinguish a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot.

The messages intended to be conveyed were:

  1. We believe that robots have the potential to provide an immense positive impact on society. We want to encourage responsible robotic research.
  2. Bad practice hurts us all.
  3. Addressing obvious public concerns will help us all progress.
  4. It is important to demonstrate that we, as specialists in robotics, are committed to the best possible standards of practice.
  5. To understand the context and consequences of our research, we must work with experts from other disciplines, including: social science, law, philosophy and arts.
  6. We must consider the ethics of transparency: are there limits to what should be openly available?
  7. When we see erroneous accounts in the press, we commit ourselves to taking the time to contact the reporters.

The EPSRC principles are widely recognized as a useful starting point. In 2016, Tony Prescott organized a workshop to revise these principles, for example to differentiate ethical principles from legal ones.

Satya Nadella's laws

In June 2016, Satya Nadella, CEO of Microsoft, gave an interview to Slate magazine and outlined roughly six rules for AI to be observed by its designers:

  1. "AI must be designed to assist humanity," meaning that human autonomy must be respected.
  2. "AI must be transparent," meaning that humans should know and be able to understand how it works.
  3. "AI must maximize efficiencies without destroying the dignity of people."
  4. "AI must be designed for intelligent privacy," meaning that it earns trust by protecting people's information.
  5. "AI must have algorithmic accountability" so that humans can undo unintended harm.
  6. "AI must guard against bias" so that it does not discriminate against people.

"Laws of Robotics" by Tilden

Mark W. Tilden is a robotics physicist who was a pioneer in the development of simple robotics. His three guiding principles/rules for robots are:

  1. A robot must protect its existence at all costs.
  2. A robot must obtain and maintain access to its own power source.
  3. A robot must continually search for better power sources.

What is notable about these three rules is that they are basically rules for "wild" life; in essence, what Tilden stated was that he wanted to be "turning a silicon species into sentience, but with full control over the specs. Not plant. Not animal. Something else."

Judicial development

A comprehensive terminological codification for the legal assessment of technological developments in the robotics industry has already begun, mainly in Asian countries. This progress represents a contemporary reinterpretation of law (and ethics) in the field of robotics, an interpretation that entails a rethinking of traditional legal categories. These chiefly concern issues of legal liability in civil and criminal law.
