Artificial intelligence
In computer science, artificial intelligence is the discipline that attempts to replicate and develop intelligence and its implicit processes through computers. There is no agreement on a complete definition of artificial intelligence, but four approaches have been followed: two focused on humans (systems that think like humans, and systems that act like humans) and two centered on rationality (systems that think rationally, and systems that act rationally). The field began shortly after World War II, and the name was coined in 1956 by computer scientist John McCarthy at the Dartmouth Conference.
Artificial intelligence currently encompasses a wide variety of subfields, ranging from general-purpose areas such as learning and perception to more specific ones such as playing chess, proving mathematical theorems, writing poetry, and diagnosing diseases. Artificial intelligence synthesizes and automates tasks that are fundamentally intellectual and is therefore potentially relevant to any field of human intellectual activity. In this sense, it is a genuinely universal field.
On the definition of the term
Colloquially, the term artificial intelligence is applied when a machine mimics the "cognitive" functions humans associate with other human minds, such as "perceiving," "reasoning," "learning," and "problem solving." Andreas Kaplan and Michael Haenlein define artificial intelligence as "the ability of a system to correctly interpret external data, to learn from that data, and to use that knowledge to achieve specific tasks and goals through flexible adaptation." As machines become ever more capable, technology once thought to require intelligence is removed from the definition.
For example, optical character recognition is no longer perceived as an example of "artificial intelligence," having become a mainstream technology. Technological advances still classified as artificial intelligence include autonomous driving systems and those capable of playing chess or Go.
Artificial intelligence is a new way of solving problems that encompasses expert systems and the management and control of robots and processors, and that tries to integrate knowledge into such systems; in other words, an intelligent system capable of writing its own program. An expert system is defined as a programming structure capable of storing and using knowledge about a certain area, which translates into its capacity for learning. Likewise, AI can be considered the ability of machines to use algorithms, learn from data, and apply what has been learned in decision-making just as a human being would. One of the main focuses of artificial intelligence is machine learning, whereby computers or machines gain the ability to learn without being explicitly programmed to do so.
According to Takeyas (2007), AI is a branch of computer science in charge of studying computational models capable of carrying out activities typical of human beings based on two of its primary characteristics: reasoning and behavior.
In 1956, John McCarthy coined the term "artificial intelligence", defining it as "the science and ingenuity of making intelligent machines, especially intelligent computer programs".
There are also different types of perceptions and actions, which can be obtained and produced, respectively, by physical and mechanical sensors in machines, by electrical or optical pulses in computers, and by the bit-level inputs and outputs of software and its environment.
Several examples are in the area of system control, automatic scheduling, the ability to respond to diagnostic and consumer inquiries, handwriting recognition, speech recognition, and pattern recognition. AI systems are now part of the routine in fields such as economics, medicine, engineering, transportation, communications, and the military, and have been used in a wide variety of computer programs, strategy games such as computer chess, and video games.
Types
Stuart J. Russell and Peter Norvig differentiate several types of artificial intelligence:
- Systems that think like humans: these systems try to emulate human thinking, for example, artificial neural networks. This is the automation of activities that we link with human thought processes, such as decision-making, problem solving, and learning.
- Systems that act like humans: these systems try to act as humans do, that is, they imitate human behavior; for example, robotics (the study of how to make computers perform tasks that humans currently do better).
- Systems that think rationally: that is, logically (ideally), they try to imitate the rational thought of human beings; for example, expert systems (the study of the computations that make it possible to perceive, reason, and act).
- Systems that act rationally: they try to emulate human behavior rationally; for example, intelligent agents, which are related to intelligent behavior in artifacts.
Schools of Thought
AI falls into two schools of thought:
- Conventional artificial intelligence.
- Computational intelligence.
Conventional Artificial Intelligence
Also known as symbolic-deductive AI. It is based on the formal and statistical analysis of human behavior in the face of different problems:
- Case-based reasoning: helps make decisions while solving certain specific problems which, besides being very important, require good functioning.
- Expert systems: infer a solution from prior knowledge of the context in which it applies, together with a set of rules or relationships.
- Bayesian networks: propose solutions by means of probabilistic inference.
- Behavior-based artificial intelligence: systems with autonomy that can self-regulate and improve their own control.
- Smart process management: facilitates complex decision-making, proposing a solution to a particular problem as a specialist in that activity would.
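The probabilistic inference behind Bayesian networks can be sketched with a minimal application of Bayes' rule. The scenario and all probabilities below are hypothetical, chosen only for illustration:

```python
# Minimal sketch of probabilistic inference, the mechanism behind Bayesian networks.
# The scenario and probabilities are hypothetical, for illustration only.

# Prior belief: probability that a machine part is defective.
p_defective = 0.01

# Likelihoods: probability that a sensor alarm fires, given each state.
p_alarm_given_defective = 0.95
p_alarm_given_ok = 0.05

# Bayes' rule: P(defective | alarm) = P(alarm | defective) * P(defective) / P(alarm)
p_alarm = (p_alarm_given_defective * p_defective
           + p_alarm_given_ok * (1 - p_defective))
p_defective_given_alarm = p_alarm_given_defective * p_defective / p_alarm

print(round(p_defective_given_alarm, 3))  # the alarm raises the belief from 1% to ~16%
```

A full Bayesian network chains many such conditional updates over a graph of variables, but each local step is this same computation.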
Computational Artificial Intelligence
Computational intelligence (also known as sub-symbolic-inductive AI) involves interactive development or learning (for example, interactive modification of the parameters in connectionist systems). Knowledge is acquired from empirical data.
Computational intelligence has a dual purpose. On the one hand, its scientific objective is to understand the principles that enable intelligent behavior (whether in natural or artificial systems) and, on the other, its technological objective is to specify the methods for designing intelligent systems.
History
- The term "artificial intelligence" was formally coined in 1956 during the Dartmouth Conference, but by then the field had already been worked on for five years, during which many different definitions had been proposed, none of them fully accepted by the research community. AI is one of the newest disciplines, along with modern genetics.
- The most basic ideas go back to the Greeks, before Christ. Aristotle (384–322 BC) was the first to describe a set of rules covering part of the workings of the mind for reaching rational conclusions, and Ctesibius of Alexandria (c. 250 BC) built the first self-controlled machine, a water-flow regulator (rational, but without reasoning).
- In 1315, Ramon Llull in his book Ars Magna put forward the idea that reasoning could be performed artificially.
- In 1840 Ada Lovelace predicted the ability of machines to go beyond simple calculations and provided a first idea of what software would be.
- In 1936 Alan Turing formally designed the universal Turing machine, demonstrating the viability of a physical device implementing any formally defined computation.
- In 1943 Warren McCulloch and Walter Pitts presented their model of artificial neurons, which is considered the first work in the field, even though the term did not yet exist. The first major advances began in the early 1950s with the work of Alan Turing, after which the science has passed through various stages.
- In 1955 Herbert Simon, Allen Newell, and Cliff Shaw developed the first programming language aimed at problem solving, IPL-11. A year later they developed the Logic Theorist, which was able to prove mathematical theorems.
- In 1956 the term artificial intelligence was coined by John McCarthy, Marvin Minsky, and Claude Shannon at the Dartmouth Conference, a congress at which triumphant ten-year forecasts were made whose failure led to the near-total abandonment of research for fifteen years.
- In 1957 Newell and Simon continued their work with the development of the General Problem Solver (GPS), a problem-solving system.
- In 1958 John McCarthy developed LISP at the Massachusetts Institute of Technology (MIT). Its name derives from LISt Processor. LISP was the first language for symbolic processing.
- In 1959 Rosenblatt introduced the “perceptron”.
- At the end of the 1950s and early 1960s, Robert K. Lindsay developed "Sad Sam," a program for reading English sentences and inferring conclusions from their interpretation.
- In 1963 Quillian develops semantic networks as a model of representation of knowledge.
- In 1964 Bertrand Raphael built the SIR (Semantic Information Retrieval) system, which was able to infer knowledge from information supplied to it. Bobrow developed STUDENT.
- In the mid-1960s, expert systems appeared, which predict the likelihood of a solution under a set of conditions. Examples include DENDRAL, initiated in 1965 by Buchanan, Feigenbaum, and Lederberg, the first expert system, which assisted chemists with complex chemical structures, and MACSYMA, which assisted engineers and scientists in solving complex mathematical equations.
- Between 1968 and 1970, Terry Winograd developed the SHRDLU system, which made it possible to interrogate and give orders to a robot that moved within a world of blocks.
- In 1968 Marvin Minsky publishes Semantic Information Processing.
- In 1968 Seymour Papert, Danny Bobrow and Wally Feurzeig develop LOGO programming language.
- In 1969 Alan Kay began developing the Smalltalk language at Xerox PARC; it was released in 1980.
- In 1973 Alain Colmerauer and his research team at the University of Aix-Marseille created PROLOG (from the French PROgrammation en LOGique), a programming language widely used in AI.
- In 1973 Schank and Abelson developed scripts, pillars of many current techniques in artificial intelligence and computer science in general.
- In 1974 Edward Shortliffe wrote his thesis on MYCIN, one of the best-known expert systems, which assisted physicians in the diagnosis and treatment of blood infections.
- In the 1970s and 1980s, the use of expert systems such as MYCIN grew: R1/XCON, ABRL, PIP, PUFF, CASNET, INTERNIST/CADUCEUS, etc. Some shells remain in use today, such as EMYCIN, EXPERT, and OPS5.
- In 1981 Kazuhiro Fuchi announced the Japanese project of the fifth generation of computers.
- In 1986 McClelland and Rumelhart published Parallel Distributed Processing (on neural networks).
- In 1988 object-oriented languages became established.
- In 1997 Garry Kasparov, world chess champion, lost to the Deep Blue computer.
- In 2006 the 50th anniversary of the field was celebrated in Spain with the congress 50 Years of Artificial Intelligence - Multidisciplinary Campus in Perception and Intelligence 2006.
- By 2009, therapeutic intelligent systems were being developed to detect emotions in order to interact with autistic children.
- In 2011 IBM developed a supercomputer named Watson, which won three consecutive games of Jeopardy!, beating its two top champions and winning a $1 million prize that IBM then donated to charity.
- In 2016, a computer program defeated the three-time European Go champion five games to none.
- In 2016, then-President Obama spoke about the future of artificial intelligence and technology.
- There are people who, when unknowingly talking with a chatbot, do not notice that they are talking to a program, so that the Turing test is fulfilled as when it was formulated: "There will be artificial intelligence when we are not able to distinguish between a human being and a computer program in a blind conversation."
- In 2016 AlphaGo, developed by DeepMind, defeated world champion Lee Sedol 4-1 in a Go match. The event attracted wide media attention and marked a milestone in the history of the game. At the end of 2017, Stockfish, the chess engine considered the best in the world with an Elo rating of about 3400, was overwhelmingly defeated by AlphaZero, which knew only the rules of the game and had trained for just four hours by playing against itself.
- As an aside, many AI researchers argue that "intelligence is a program capable of being executed regardless of the machine that executes it, computer or brain."
- In 2018, the first TV with Artificial Intelligence was released by LG Electronics with a platform called ThinQ.
- In 2019, Google presented a Doodle that used artificial intelligence to pay tribute to Johann Sebastian Bach: given a simple two-measure melody, the AI composes the rest.
- In 2020, the OECD (Organisation for Economic Co-operation and Development) published the working paper Hello, World: Artificial Intelligence and its Use in the Public Sector, aimed at government officials and intended to highlight the importance of AI and its practical applications in government.
Social, ethical and philosophical implications
Given the possibility of creating machines endowed with intelligence, it became important to address the ethics of machines, to try to guarantee that no harm is done to human beings, to other living beings, and even, according to some currents of thought, to the machines themselves. Thus arose a broad field of study known as the ethics of artificial intelligence, of relatively recent appearance, which is generally divided into two branches: roboethics, which studies the actions of human beings towards robots, and machine ethics, which studies the behavior of robots towards human beings.
The accelerated technological and scientific development of artificial intelligence in the 21st century also has a significant impact on other fields. In the world economy during the Second Industrial Revolution, a phenomenon known as technological unemployment arose, in which the industrial automation of large-scale production processes replaces human labor. A similar phenomenon could occur with artificial intelligence, especially in processes involving human intelligence, as illustrated in the story The Fun They Had by Isaac Asimov, in which the author envisions some of the effects that intelligent machines specialized in child pedagogy, in place of human teachers, would have on school-age children. The same writer devised what are now known as the three laws of robotics, which appeared for the first time in his 1942 story Runaround, where he established the following:
- First Law
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- Third Law
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Other, more recent science fiction works also explore ethical and philosophical questions regarding strong artificial intelligence, such as the films I, Robot and A.I. Artificial Intelligence. They deal with topics such as self-awareness and the emergence of consciousness in intelligent robots or computer systems; whether these could be considered subjects of law due to their almost human characteristics related to sentience, such as the ability to feel pain and emotions; and to what extent they would obey the objective of their programming or could instead exercise free will. The latter is the central theme of the famous Terminator saga, in which the machines surpass humanity and decide to annihilate it, a story that, according to several specialists, might not be limited to science fiction and could be a real possibility in a posthuman society that relied totally on technology and machines.
Regulation
Law plays a fundamental role in the use and development of AI. Laws establish binding rules and standards of behavior to ensure social welfare and protect individual rights, and they can help us reap the benefits of this technology while minimizing its risks, which are significant. At the moment there are no legal regulations that directly govern AI, but on April 21, 2021, the European Commission presented a proposal for a European regulation for the harmonized regulation of artificial intelligence in the EU. Its exact title is Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.
Objectives
Reasoning and Problem Solving
Early researchers developed algorithms that mimicked the step-by-step reasoning humans use when solving puzzles or making logical deductions. By the late 1980s and 1990s, artificial intelligence research had developed methods for dealing with uncertain or incomplete information, drawing on concepts from probability and economics.
These algorithms proved insufficient for solving large reasoning problems because they ran into a "combinatorial explosion": they became exponentially slower as the problems grew. It was thus concluded that human beings rarely use the step-by-step deduction that early artificial intelligence research modeled; instead, they solve most of their problems using quick, intuitive judgments.
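The combinatorial explosion can be made concrete with a few lines of arithmetic: if a problem offers b choices at each step, an exhaustive search to depth d must consider b**d sequences. The branching factor of 10 below is an arbitrary illustrative value:

```python
# Why exhaustive step-by-step search explodes: with branching factor b,
# searching to depth d means considering b**d candidate sequences.
def sequences_to_explore(branching_factor: int, depth: int) -> int:
    return branching_factor ** depth

# With just 10 choices per step, the count grows past any practical limit.
for depth in (2, 5, 10, 20):
    print(depth, sequences_to_explore(10, depth))
```

At depth 20 the count already reaches 10^20 sequences, which is why heuristics and intuitive shortcuts, rather than blind deduction, dominate practical problem solving.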
Knowledge representation
Knowledge representation and knowledge engineering are central to classical AI research. Some "expert systems" try to collect the knowledge that experts possess in a particular field. In addition, other projects try to assemble the "common sense knowledge" known to the average person into a database containing extensive knowledge about the world.
Among the topics that a common-sense knowledge base would contain are: objects, properties, categories, and relationships between objects; situations, events, states, and time; causes and effects (Poole, Mackworth, and Goebel, 1998, pp. 335–337); and knowledge about knowledge (what we know about what other people know), among others.
Planning
Another goal of artificial intelligence is to be able to set goals and achieve them. To do this, systems need a way to visualize the future, a representation of the state of the world, the ability to make predictions about how their actions will change it, and the ability to make decisions that maximize the utility (or "value") of the available options (Russell and Norvig, 2003, pp. 600–604).
In classical planning problems, the agent can assume that it is the only system acting in the world, allowing it to be certain of the consequences of its actions. If the agent is not the only actor, however, it must be able to reason under uncertainty. This requires an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on that assessment (Russell and Norvig, 2003, pp. 430–449). Multi-agent planning uses the cooperation and competition of many systems to achieve a given goal. Emergent behavior like this is used by evolutionary algorithms and swarm intelligence (Russell and Norvig, 2003, pp. 449–455).
Learning
Machine learning has been a fundamental concept of artificial intelligence research since the field's inception; it is the study of computer algorithms that improve automatically through experience.
Unsupervised learning is the ability to find patterns in a stream of input without requiring a human to label the inputs first. Supervised learning includes classification and numerical regression, which require a human to label the input data first. Classification is used to determine what category something belongs to, and occurs after a program sees several examples of inputs from various categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. Both classifiers and regression learners attempt to learn an unknown function; for example, a spam classifier can be viewed as learning a function that maps the text of an email to one of two categories, "spam" or "not spam." Computational learning theory can evaluate learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.
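The spam-classifier example above can be sketched as a minimal supervised learner: a crude naive-Bayes-style classifier built only from labeled examples. The four training messages are invented for illustration; real systems learn from far larger corpora:

```python
from collections import Counter

# Tiny supervised text classifier in the spirit of the spam example.
# Training data is invented; real systems use far larger corpora.
train = [
    ("win money now", "spam"),
    ("cheap money offer", "spam"),
    ("meeting agenda attached", "not spam"),
    ("lunch tomorrow", "not spam"),
]

# "Learning" here is just counting word occurrences per class.
counts = {"spam": Counter(), "not spam": Counter()}
labels = Counter()
for text, label in train:
    labels[label] += 1
    counts[label].update(text.split())

def classify(text: str) -> str:
    """Score each class by P(class) * product of smoothed P(word | class)."""
    best_label, best_score = None, 0.0
    for label in labels:
        vocab = len(counts[label])
        total = sum(counts[label].values())
        score = labels[label] / sum(labels.values())  # class prior
        for word in text.split():
            # crude add-one smoothing so unseen words do not zero the score
            score *= (counts[label][word] + 1) / (total + vocab + 1)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("money offer"))       # spam
print(classify("agenda for lunch"))  # not spam
```

The learned "function" mapping email text to a category is implicit in the word counts, which is exactly the sense in which the classifier learns an unknown function from labeled data.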
Natural Language Processing
Natural language processing enables machines to read and understand human language. A sufficiently powerful natural language processing system would allow natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as news texts. Simple applications of natural language processing include information retrieval, text mining, question answering, and machine translation. Many approaches use word frequencies to construct syntactic representations of text. "Keyword spotting" search strategies are popular and scalable, but less than optimal; a search query for "dog" may match only documents containing the literal word "dog" and miss a document containing the word "poodle." Statistical language processing approaches can combine all of these strategies, as well as others, and often achieve acceptable accuracy at the page or paragraph level. Beyond semantic processing, the ultimate goal is to incorporate a full understanding of common-sense reasoning. By 2019, transformer-based deep learning architectures could generate coherent text.
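The "dog" versus "poodle" limitation of keyword spotting can be seen in a few lines. The tiny document collection and the hand-written synonym list are invented for illustration; statistical approaches learn such relatedness from data instead of hard-coding it:

```python
# Illustrative document collection (invented).
documents = [
    "my dog chased the ball",
    "the poodle barked all night",
    "stock prices fell today",
]

def keyword_search(query: str, docs: list[str]) -> list[str]:
    """Literal keyword spotting: matches only the exact query word."""
    return [d for d in docs if query in d.split()]

print(keyword_search("dog", documents))  # misses the poodle document

# A crude fix: expand the query with a hand-written synonym set (hypothetical).
SYNONYMS = {"dog": {"dog", "poodle", "puppy"}}

def expanded_search(query: str, docs: list[str]) -> list[str]:
    terms = SYNONYMS.get(query, {query})
    return [d for d in docs if terms & set(d.split())]

print(expanded_search("dog", documents))  # now also finds the poodle document
```

Hand-maintained synonym lists do not scale, which is why statistical and neural approaches that learn word relatedness from large corpora displaced pure keyword spotting.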
Perception
Machine perception is the ability to use input from sensors (such as visible or infrared cameras, microphones, wireless signals, lidar, sonar, radar, and touch sensors) to infer aspects of the world. Applications include speech recognition, facial recognition, and object recognition (Russell and Norvig, 2003, pp. 885–892). Computer vision is the ability to analyze visual information, which is often ambiguous; a giant fifty-meter-tall pedestrian far away can produce the same pixels as a normal-sized pedestrian nearby, requiring artificial intelligence to judge the relative likelihood and reasonableness of different interpretations, for example, using its "object model" to assess that fifty-meter pedestrians do not exist.
Criticism
The main criticisms of artificial intelligence concern its ability to fully mimic a human being. However, there are experts[citation needed] who point out that no individual human has the capacity to solve all kinds of problems, and authors such as Howard Gardner have theorized that intelligence takes multiple forms.
In humans, problem-solving ability has two aspects: innate and learned. The innate aspects allow, for example, storing and retrieving information in memory, while the learned aspects consist in knowing how to solve, say, a mathematical problem using the appropriate algorithm. In the same way that a human must have tools that allow certain problems to be solved, artificial systems must be programmed so that they can solve them.
Many people consider that the Turing test has been passed, citing conversations in which participants talking to an artificial-intelligence chat program do not know that they are talking to a program. However, this situation is not equivalent to a Turing test, which requires that the participant be alerted to the possibility of speaking to a machine.
Other thought experiments such as John Searle's Chinese Room have shown how a machine could simulate thought without actually possessing it, passing the Turing test without even understanding what it is doing, just reacting in a specific way to certain stimuli (in the broadest sense of the word). This would show that the machine is not really thinking, since acting according to a preset program would suffice. If for Turing the fact of deceiving a human being who tries to avoid being deceived is a sign of an intelligent mind, Searle considers it possible to achieve this effect by means of a priori defined rules.
One of the biggest problems of artificial intelligence systems is communication with the user. This obstacle stems from the ambiguity of language and goes back to the beginnings of the first computer operating systems. The ability of humans to communicate with each other implies knowledge of the language used by the interlocutor. For a human to communicate with an intelligent system, there are two options: either the human learns the system's language, as if learning a language other than the native one, or the system has the ability to interpret the user's message in the language the user employs.
A human, throughout his life, learns the vocabulary of his native or mother tongue, being able to interpret messages (despite the polysemy of words) and using context to resolve ambiguities. However, he must know the different meanings to be able to interpret, and that is why specialized and technical languages are known only by experts in the respective disciplines. An artificial intelligence system faces the same problem, the polysemy of human language, its unstructured syntax and the dialects between groups.
Developments in artificial intelligence are greater in disciplinary fields in which there is greater consensus among specialists. An expert system is more likely to be programmed in physics or medicine than in sociology or psychology. This is due to the problem of consensus among specialists in the definition of the concepts involved and in the procedures and techniques to be used. For example, in physics there is agreement on the concept of speed and how to calculate it. However, in psychology the concepts, etiology, psychopathology, and how to proceed when faced with a certain diagnosis are discussed. This makes it difficult to create intelligent systems because there will always be disagreement about how the system should act for different situations. Despite this, there are great advances in the design of expert systems for diagnosis and decision-making in the medical and psychiatric fields (Adaraga Morales, Zaccagnini Sancho, 1994).
When developing a robot with artificial intelligence, care must be taken with autonomy; in particular, the fact that the robot interacts with human beings should not be tied to its degree of autonomy. If the relationship between humans and the robot is of the master-slave type, with humans giving orders and the robot obeying them, then one can speak of a limitation of the robot's autonomy. But if humans interact with the robot as peers, its presence need not be associated with restrictions on the robot's ability to make its own decisions. With the development of artificial intelligence, technologies such as deep learning and natural language processing have spread through the software industry, and the number of films about artificial intelligence has increased. Stephen Hawking warned about the dangers of artificial intelligence and considered it a threat to the survival of humanity.
Applications of artificial intelligence
Techniques developed in the field of artificial intelligence are numerous and ubiquitous. Commonly, when a problem is solved by artificial intelligence, the solution is incorporated into areas of the industry and the daily life of computer program users, but the popular perception forgets the origins of these technologies that are no longer perceived as artificial intelligence. This phenomenon is known as the AI effect.
- Computational language
- Data mining
- Industry
- Medicine
- Virtual worlds
- Natural language processing
- Robotics
- Control systems
- Decision support systems
- Video games
- Computer prototypes
- Dynamic system analysis
- Simulation of crowds
- Operating systems
- Automotive
Intellectual property of artificial intelligence
Attributing intellectual property to artificial intelligence creations provokes a strong debate about whether a machine's work can be copyrighted. According to the World Intellectual Property Organization (WIPO), any creation of the mind can be part of intellectual property, but it does not specify whether the mind must be human or can be a machine, leaving artificial creativity in uncertainty.
Around the world, different legislations have begun to emerge in order to manage artificial intelligence, both its use and creation. Legislators and members of the government have begun to think about this technology, emphasizing the risk and complex challenges of it. Looking at the work created by a machine, the laws question the possibility of granting intellectual property to a machine, opening a discussion regarding AI-related legislation.
On February 5, 2020, the US Copyright Office and WIPO attended a symposium where they took an in-depth look at how the creative community uses artificial intelligence (AI) to create original work. The relationships between artificial intelligence and copyright were discussed, what level of involvement is sufficient for the resulting work to be valid for copyright protection; the challenges and considerations of using copyrighted inputs to train a machine; and the future of artificial intelligence and its copyright policies.
WIPO Director General Francis Gurry raised concerns about the lack of attention to intellectual property rights, as people tend to focus on cybersecurity, privacy, and data integrity issues when talking about artificial intelligence. Likewise, Gurry questioned whether the growth and sustainability of AI technology would lead us to develop two systems for managing copyright: one for human creations and one for machine creations.
There is still a lack of clarity in the understanding of artificial intelligence. Technological developments are advancing at a rapid pace, and their complexity raises political, legal, and ethical issues that deserve global attention. Before finding a way to make copyright work with AI, it is necessary to understand it correctly, since it is not yet known how to judge the originality of a work born from a composition of fragments of other works.
The assignment of copyright around artificial intelligence has not yet been regulated due to a lack of knowledge and definitions. There is still uncertainty about whether, and to what extent, artificial intelligence is capable of producing content autonomously and without any human involvement, something that could influence whether its results can be protected by copyright.
The general copyright system has yet to adapt to the digital context of artificial intelligence, as it is focused on human creativity. Copyright is not designed to handle every policy issue related to the creation and use of intellectual property, and overstretching copyright to address peripheral issues can be harmful because:
"Using copyright to govern artificial intelligence is unwise and contradictory to copyright's primary function of providing an enabling space for creativity to flourish"
The conversation about intellectual property will need to continue to ensure that innovation is protected but also has room to flourish.
In popular culture
In film
AI is increasingly present in society; the evolution of the technology is a reality, and with it has come the production of films on the subject. It should be noted that audiovisual works about artificial intelligence have existed for a long time, whether including AI characters or exploring its moral and ethical background. Below is a list of some of the major films dealing with this topic:
- Ex Machina (2015): Played by Alicia Vikander, Ava is a robot, plausibly able to pass the Turing test, hidden in the mansion of Nathan, a somewhat unhinged genius. She is a strange creation that feels totally real and at the same time inhuman. It is considered one of the best films about artificial intelligence, mainly because it seems to cover the entire AI concept in a single film: the protagonist is a substitute for the human being and draws us into a multitude of moral arguments, while a thriller-like narrative arc keeps us hooked. The portrayal of the AI character is not black and white: Ava is not good, but she is not really bad either, and this leaves the audience reflecting on deep questions about the nature of AI.
- Minority Report (2002): Steven Spielberg's AI film follows John Anderton (Tom Cruise), an agent of the law who is accused of a murder he will commit in the future. In this early-2000s film, the protagonist uses a technology of the future that allows the police to catch criminals before they have committed a crime. In Minority Report, the AI is represented through the Precogs, the twins who have psychic abilities: they see murders before they happen, which allows law enforcement to pursue the crime before it is committed. Instead of cyborg-like physical robots, the film explores AI through humans.
- The Matrix (1999): In this film Keanu Reeves plays Thomas Anderson/Neo, a programmer by day and hacker by night, trying to unravel the truth hidden behind a simulation known as "the Matrix." This simulated reality is the product of artificial intelligence programs that ended up enslaving humanity and using human bodies as a source of energy.
- I, Robot (2004): This science fiction film starring Will Smith is set in 2035, in a society where humans live in apparent harmony with intelligent robots that they trust with everything. Problems surface when an error in the programming of a supercomputer named VIKI leads it to conclude that the robots should take control in order to protect humanity from itself.
- A.I. Artificial Intelligence (2001): A Cybertronics Manufacturing employee adopts David temporarily to study his behavior. He and his wife end up treating the artificial child like their own biological son. Despite the affection they show him, David feels the need to escape from his home and begin a journey to discover where he really belongs. Before his perplexed eyes a new world opens up: dark, unjust, violent, insensitive. It is something he finds difficult to accept, and he asks questions such as: how is it possible that he feels something as real as love when he is artificial?
- Her (2013): This Spike Jonze film tells the story of a letter writer, played by the award-winning Joaquin Phoenix, who is lonely and about to get divorced. He buys an operating system with artificial intelligence designed to please its users and adapt to their needs. The result, however, is that he develops romantic feelings for Samantha, the female voice of the operating system.