History of artificial intelligence


Artificial intelligence emerged definitively from works published in the 1940s that did not have a great impact at the time, but with the influential 1950 work of Alan Turing, a British mathematician, a new discipline of information science was opened.

Although the essential ideas go back to the logic and algorithms of the Greeks and to the mathematics of the Arabs, the concept of obtaining artificial reasoning appeared in the 14th century. By the end of the 19th century, sufficiently powerful formal logics had been developed, and by the middle of the 20th century, machines capable of making use of such logics and solution algorithms had been built.

Tipping point of the discipline

In his landmark 1950 paper, Turing proposed that the question "Can a machine think?" was too philosophical to be of value and, to make it more concrete, proposed an "imitation game," the Turing test, involving two people and a computer. One person, the interrogator, sits in a room and types questions at a computer terminal. When the answers appear on the terminal, the interrogator tries to determine whether they were produced by the other person or by the computer. If the computer acts intelligently, according to Turing, it is intelligent. Turing pointed out that a machine could fail the test and still be intelligent. Even so, he believed that machines would be able to pass the test by the end of the 20th century.
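As a minimal sketch of the game's structure (the code and all function names are invented for illustration, not Turing's formulation), an interrogator exchanges questions with two hidden respondents, one human and one machine, and must say which is which:

```python
import random

def human_replies(question):
    return "Let me think about that for a moment."    # stands in for a person

def machine_replies(question):
    return "Let me think about that for a moment."    # stands in for a program

def imitation_game(questions, interrogator_guess):
    # Hide who is behind the labels A and B by shuffling the respondents.
    respondents = [("human", human_replies), ("machine", machine_replies)]
    random.shuffle(respondents)
    labels = dict(zip("AB", respondents))
    transcript = {label: [(q, reply(q)) for q in questions]
                  for label, (_, reply) in labels.items()}
    guess = interrogator_guess(transcript)             # label the interrogator calls "machine"
    return labels[guess][0] == "machine"               # True if the machine was unmasked

# The machine "passes" Turing's test when guesses like this are no better than chance.
print(imitation_game(["What is 2 + 2?"], lambda transcript: random.choice("AB")))
```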

In any case, this test did not have the expected practical value, although its theoretical repercussions are fundamental. Turing's approach of seeing artificial intelligence as an imitation of human behavior proved less practical over time, and the dominant approach has become that of rational behavior; similarly, aeronautics set aside the approach of trying to imitate birds in favor of understanding the rules of aerodynamics. Of course, human behavior and human thought continue to be studied by cognitive science and continue to provide interesting results to artificial intelligence, and vice versa.

Disciplines on which it is based

Science is not defined, but recognized. For the evolution of artificial intelligence, the two most important forces were mathematical logic, which developed rapidly at the end of the 19th century, and new ideas about computation, together with advances in electronics, which led to the construction of the first computers in the 1940s. Philosophy, neuroscience and linguistics are also sources of artificial intelligence. Mathematical logic has continued to be a very active area within artificial intelligence; deductive logic systems existed even before computers did.

Origins and chronological evolution

Old mathematical games, such as the Tower of Hanoi, show an interest in finding a solution method capable of winning in the fewest possible moves.
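As a minimal illustration (the code is a modern sketch, not part of the historical record), the Tower of Hanoi has a simple recursive solution that uses the minimum number of moves, 2^n − 1 for n disks:

```python
# Recursive Tower of Hanoi: move n disks from `source` to `target` using `spare`.
def hanoi(n, source="A", target="C", spare="B", moves=None):
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)   # clear the n-1 smaller disks
        moves.append((source, target))               # move the largest disk
        hanoi(n - 1, spare, target, source, moves)   # restack the smaller disks
    return moves

print(len(hanoi(3)))  # 7 moves, i.e. 2**3 - 1
```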

Around 300 B.C., Aristotle was the first to describe in a structured way a set of rules, the syllogisms, which describe part of the functioning of the human mind and which, when followed step by step, produce rational conclusions from given premises.

In 250 B.C., Ctesibius of Alexandria built the first self-controlled machine, a regulator of the flow of water that acted by modifying its behavior "rationally" (correctly), but clearly without reasoning.

In 1315, Ramon Llull had the idea that reasoning could be carried out artificially.

In 1847 George Boole established propositional (Boolean) logic, much more complete than Aristotle's syllogisms, but still somewhat weak.

In 1879 Gottlob Frege extended Boolean logic and obtained first-order logic, which has greater expressive power and is still used universally today.
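As a minimal illustration in modern notation (not Frege's own), first-order logic can express the classic syllogism directly, something propositional (Boolean) logic cannot do because it lacks quantifiers:

```latex
\forall x\,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\bigr),
\qquad \mathrm{Man}(\mathrm{Socrates})
\;\vdash\; \mathrm{Mortal}(\mathrm{Socrates})
```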

In 1903 Lee De Forest invented the triode, also called a vacuum tube or tube.

In 1936 Alan Turing published a highly influential paper, "On Computable Numbers", which laid the theoretical foundations of all computer science and can be considered the official origin of theoretical computing. In this article he introduced the concept of the Turing machine, an abstract mathematical entity that formalized the concept of an algorithm and turned out to be the forerunner of digital computers. It could conceptually read instructions from a punched paper tape and perform all the critical operations of a computer. The article also set the limits of computer science, because it showed that not every problem can be solved by any type of computer. With the help of his machine, Turing was able to demonstrate that there are unsolvable problems, for which no computer will ever be able to obtain a solution, and for this reason he is considered the father of the theory of computability.
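To make the idea concrete, the following is a minimal, illustrative simulator of a Turing machine; the transition table and names are invented for the example and are not taken from Turing's paper:

```python
# A finite transition table reads and writes symbols on a tape and moves a
# head left or right until it reaches the halting state.
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))          # sparse tape indexed by position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: flip every bit and halt at the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "1011"))   # prints "0100_"
```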

In 1940 Alan Turing and his team built the first electromechanical computer, and in 1941 Konrad Zuse created the first programmable computer and the first high-level programming language, Plankalkül. The next most powerful machines, although based on the same concept, were the ABC and ENIAC.

In 1943 Warren McCulloch and Walter Pitts presented their model of artificial neurons, which is considered the first work in the field of artificial intelligence, even though the term did not yet exist.
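As a minimal sketch of the idea behind the McCulloch-Pitts model (illustrative code, not their original notation), a unit fires when the sum of its binary inputs reaches a threshold, which is already enough to compute simple logic gates:

```python
# A McCulloch-Pitts unit: binary inputs are summed and compared to a threshold.
def mcculloch_pitts(inputs, threshold):
    return 1 if sum(inputs) >= threshold else 0

AND = lambda a, b: mcculloch_pitts([a, b], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], threshold=1)

print(AND(1, 1), AND(1, 0))   # 1 0
print(OR(0, 1), OR(0, 0))     # 1 0
```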

1950s

In 1950 Turing consolidated the widely dispersed field of artificial intelligence with his paper Computing Machinery and Intelligence, in which he proposed a concrete test to determine whether or not a machine was intelligent: his famous Turing test, for which he is considered the father of artificial intelligence. Years later, Turing became the champion of those who defended the possibility of emulating human thought through computing, and he co-authored the first program to play chess.

In 1951 William Shockley invented the junction transistor. The invention made possible a new generation of much faster and smaller computers.

In 1956 the term "artificial intelligence" was coined at Dartmouth during a conference convened by John McCarthy, attended by, among others, Minsky, Newell and Simon. At this conference, triumphalist ten-year forecasts were made that were never fulfilled, which caused the almost total abandonment of research for fifteen years.

1980s

In 1980 history repeated itself with the Japanese challenge of the fifth generation, which gave rise to the boom in expert systems but did not achieve many of its objectives, so this field suffered a new interruption in the 1990s.

In 1987 Martin Fischler and Oscar Firschein described the attributes of an intelligent agent. By attempting to describe the attributes of an intelligent agent with greater scope (not just communication), AI has expanded into many areas that have created huge and differentiated branches of research. These attributes of an intelligent agent are:

  1. It has mental attitudes such as beliefs and intentions.
  2. It has the ability to gain knowledge, that is, to learn.
  3. It can solve problems, even decomposing complex problems into simpler ones.
  4. It is capable of performing more complex operations.
  5. It understands: it has the ability to give meaning, where possible, to ambiguous or contradictory ideas.
  6. It plans, predicts consequences and evaluates alternatives (as in chess games).
  7. It knows the limits of its own skills and knowledge.
  8. It can distinguish between situations despite their similarity.
  9. It can be original, even creating new concepts or ideas, and even using analogies.
  10. It can generalize.
  11. It can perceive and model the outside world.
  12. It can understand and use language and symbols.

We can then say that AI possesses human characteristics such as learning, adaptation, reasoning, self-correction, implicit improvement, and modular perception of the world. Thus, we can no longer speak of just one objective, but of many, depending on the point of view or utility that can be found for AI.

1990s

In the 1990s, intelligent agents emerged and evolved over the years.

2000s

The Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) program won the Loebner Prize for most human chatbot in 2000, 2001 and 2004, and in 2007 the Ultra Hal Assistant program won the award.

2010s

Many AI researchers maintain that «intelligence is a program capable of being executed independently of the machine that executes it, computer or brain». Some free artificial intelligence programs are Dr. Abuse, Alice, Paula SG and Virtual woman millenium.

2011: An IBM computer won the 'Jeopardy!' quiz show. The IBM Watson computer emerged victorious in its duel against the human brain: the machine won the Jeopardy! question-and-answer contest, broadcast by the American television network ABC, by beating the two best contestants in the history of the program. Watson beat them in the third round, answering questions that forced it to think like a person.

2014: a computer successfully passed the Turing test, making an interrogator believe it was a person answering his questions, in a contest organized in London by the University of Reading (United Kingdom). The computer, running the Eugene program developed in St. Petersburg (Russia), posed as a 13-year-old boy, and the organizers of the competition considered it a "historic milestone in artificial intelligence".

2016: a Google computer beat the world champion of the ancient game Go. A computer program developed by the British company Google DeepMind managed to beat, for the first time, a professional champion of the millennia-old game of Eastern origin, Go. The challenge was enormous for a machine, since this strategy game is extremely complex.