Technological singularity
The technological singularity is the hypothetical advent of artificial general intelligence (also known as strong AI). The term implies that a computer, computer network, or robot could become capable of recursively improving itself, or of designing and building computers or robots better than itself. Repetitions of this cycle would likely produce a runaway effect, an intelligence explosion, in which intelligent machines design generations of successively more powerful machines, creating intelligence far superior to human brainpower and control.
Origin
The first use of the term "singularity" in this sense dates to 1957 and the Hungarian-born mathematician and physicist John von Neumann. That year, in a conversation later recounted by Stanislaw Ulam, von Neumann described the ever-accelerating progress of technology and the changes in the mode of human life:
The ever accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
Ray Kurzweil cited this use of the term in the foreword he wrote to von Neumann's classic The Computer and the Brain.
Years later, in 1965, I. J. Good first wrote of an "intelligence explosion," suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers and thus recursively augment themselves into something far smarter. The first such improvements might be small, but as the machine grew more intelligent it could trigger a cascade of self-improvement and a sudden surge to superintelligence (the singularity).
The term was not popularized until 1983, however, by the mathematician and writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could each be a possible cause of the singularity. He greatly popularized Good's notion of an intelligence explosion in a series of writings, first addressing the subject in print in the January 1983 issue of Omni magazine. In that opinion piece, Vinge appears to have been the first to use the term "singularity" in a way tied specifically to the creation of intelligent machines, writing:
We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between... so that the world remains intelligible.
History
Although the first use of the term was in 1957, various scientists and authors had written about the idea many years earlier, without realizing that what they described would later be called a singularity.
Beginnings
One of the first people to posit something like a singularity was Nicolas de Condorcet, the 18th-century French mathematician, philosopher, and revolutionary. In his Sketch for a Historical Picture of the Progress of the Human Mind (1794), Condorcet states:
Nature has set no term to the perfection of human faculties; the perfectibility of man is truly indefinite; and the progress of this perfectibility, henceforth independent of any power that might wish to halt it, has no other limit than the duration of the globe upon which nature has cast us. This progress will doubtless vary in speed, but it will never be reversed as long as the earth occupies its present place in the system of the universe, and as long as the general laws of this system produce neither a general cataclysm nor such changes as would deprive the human race of its present faculties and present resources.
Years later, the editor R. Thornton wrote about the then-recent invention of a four-function mechanical calculator:
...such machines, by which the scholar may, by turning a crank, grind out the solution of a problem without the fatigue of mental application, would by their introduction into schools do incalculable injury. But who knows that such machines, when brought to greater perfection, may not think of a plan to remedy all their own defects and then grind out ideas beyond the ken of mortal mind?
In 1863, the writer Samuel Butler wrote "Darwin Among the Machines," which was later incorporated into his famous novel Erewhon. He pointed to the rapid evolution of technology, comparing it with the evolution of life and anticipating the idea of the singularity without naming it:
It is necessary to reflect upon the extraordinary advance which machines have made during the last few hundred years, while the animal and vegetable kingdoms are advancing so slowly. The more highly organized machines are creatures not so much of yesterday as of the last five minutes, so to speak, in comparison with past time. Suppose, for the sake of argument, that conscious beings have existed for some twenty million years, and see what strides machines have made in the last thousand. May not the world last twenty million years longer? If so, what will they not become in the end? We cannot count on any corresponding advance in man's intellectual or physical powers to serve as a set-off against the far greater development that seems to be in store for the machines.
In 1909, the historian Henry Adams wrote an essay, "The Rule of Phase Applied to History," in which he developed a "physical theory of history" by applying the law of inverse squares to historical periods, proposing a "Law of the Acceleration of Thought." Adams interpreted history as a process of moving toward an "equilibrium," and speculated that this process would "bring Thought to the limit of its possibilities in the year 1921. It may well be!", adding that "the consequences may be as surprising as the change of water to vapor, of the worm to the butterfly, of radium to electrons." The futurologist John Smart has called Adams "Earth's first singularity theorist."
The idea can also be traced to the mathematician Alan Turing, who in 1951 spoke of machines intellectually surpassing human beings:
Once the machine thinking method had started, it would not take long to outstrip our feeble powers... At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon.
Through the years
In 1985, Ray Solomonoff introduced the notion of an "infinity point" on the time scale of artificial intelligence, analyzed the magnitude of the "future shock" that "we can expect from our AI-expanded scientific community," and considered its social effects. Estimates were made "for when these milestones would occur, followed by some suggestions for the more effective use of the extremely rapid technological growth that is expected."
In his book Mind Children (1988), the computer scientist and futurist Hans Moravec generalizes Moore's law to make predictions about the future of artificial life. Moravec outlines a timeline and scenario along these lines, in which robots evolve into a new series of artificial species, around 2030 to 2040. In Robot: Mere Machine to Transcendent Mind (1998), Moravec further considers the implications of evolving robot intelligence, generalizing Moore's law to technologies predating the integrated circuit, and speculates about a coming "mind fire" of rapidly expanding superintelligence, similar to Vinge's ideas.
A 1993 article by Vinge, "The Coming Technological Singularity: How to Survive in the Post-Human Era," spread widely on the Internet and helped popularize the idea. The article contains the oft-quoted statement: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge refined his estimate of the necessary timescale, adding: "I'll be surprised if this event occurs before 2005 or after 2030."
Damien Broderick's popular science book The Spike (1997) was the first to investigate the technological singularity in detail.
In 2005, Ray Kurzweil published The Singularity Is Near, which brought the idea of the singularity to the popular media, both through the accessibility of the book and through an advertising campaign that included an appearance on "The Daily Show with Jon Stewart". The book stirred intense controversy, in part because Kurzweil's utopian predictions stand in stark contrast to other, darker views of the possibilities of the singularity. Kurzweil's theories, and the controversies surrounding them, were the subject of Barry Ptolemy's documentary Transcendent Man.
In 2007, Eliezer Yudkowsky suggested that many of the various definitions assigned to "singularity" are mutually incompatible rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues is in tension both with I. J. Good's proposed discontinuous jump in intelligence and with Vinge's thesis of unpredictability.
In 2008, Robin Hanson (taking as a "singularity" a sharp increase in the exponent of economic growth) listed the agricultural and industrial revolutions as past singularities. Extrapolating from these events, Hanson proposes that the next economic singularity should increase economic growth by a factor of between 60 and 250, and that an innovation allowing the virtual replacement of all human labor could trigger it.
In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the creation of Singularity University, whose stated mission is to "educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges." Funded by Google, Autodesk, ePlanet Ventures, and a group of technology-industry leaders, Singularity University is based at NASA Ames Research Center in Mountain View, California. The nonprofit organization runs an annual ten-week graduate program during the Northern Hemisphere summer, covering ten different technology and allied tracks, and a number of executive programs throughout the year.
Discussion of the idea continued over the years. In 2010, Aubrey de Grey applied the term "Methuselarity" to the point at which medical technology improves so rapidly that expected human lifespan increases by more than one year per year. In Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality (2010), Robert Geraci offers an account of the development of a "cyber-theology" inspired by singularity studies.
The essayist and philosopher Éric Sadin wrote in 2017 of a human-machine coupling, describing it as something unprecedented between physiological organisms and digital codes. This coupling is woven, and unsteadily stretched, between the skills and missions granted to humans on one side and to machines on the other. Sadin argues that it has so far been characterized by an uncertain and nebulous balance, based on the binary, emblematic distribution of traffic across all Internet flows, the vast majority of which is handled by autonomous software robots.
Predictions
Some authors have dedicated themselves to predicting when and how this singularity might occur.
Albert Cortina and Miquel-Ángel Serra
According to these authors, the technological singularity will cause social changes so vast that no human being can understand or predict them. In this phase of evolution, the fusion of technology and human intelligence will take place: technology will master the methods of biology, leading to an era in which the non-biological intelligence of posthumans prevails and spreads throughout the universe.[citation needed]
Raymond Kurzweil
In his book The Singularity Is Near, this scientist and inventor warns that the phenomenon will occur around the year 2045 and predicts a gradual ascent to the singularity: the moment when computation-based intelligences are expected to significantly exceed the sum total of human brainpower. He writes that advances in computing before then "will not represent the Singularity" because they will not "correspond to a profound expansion of our intelligence." In 2011, Kurzweil told Time magazine: "We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence."
Vernor Vinge
Vernor Vinge, by contrast, predicts the singularity 15 years earlier, in 2030. He differs sharply from Kurzweil in foreseeing a rapid self-improvement toward superhuman intelligence rather than a gradual ascent. For Vinge there are four ways in which the singularity could occur:
- The development of computers that "awaken" to consciousness and possess superhuman intelligence.
- Large computer networks (and their associated users) can "awaken" as an intelligent superhuman entity.
- Human/computer interfaces can become so intimate that users can reasonably be considered superhumanly intelligent.
- Biological science can find ways to improve natural human intellect.
Vinge goes on to predict that superhuman intelligences will be able to improve their own minds faster than their human creators can. "When greater-than-human intelligence drives progress," Vinge postulates, "that progress will be much more rapid." He predicts that this feedback loop of self-improving intelligence will produce vast amounts of technological advancement within a short period of time, and claims that the creation of superhuman intelligence represents a break in humans' ability to model their own future. His argument is that authors cannot write realistic characters who surpass the human intellect, since such a mind lies beyond the capacity of human expression. In other words: we cannot think beyond our own capacity and way of thinking. Vinge names this event "the Singularity."
Singularity Summit[citation needed]
At the Singularity Summit in 2012, Stuart Armstrong conducted a study of expert predictions of artificial general intelligence (AGI) and found a wide range of predicted dates, with a mean value of 2040. Discussing the level of uncertainty in AGI estimates, Armstrong said in 2012: "It's not fully formalized, but my current 80% estimate is something like five to 100 years from now."
Demonstrations
Various authors have held that the technological singularity could manifest in different ways: through artificial intelligence, through superintelligence, or through a non-AI singularity.
Intelligence Explosion
Strong AI could bring about what Irving John Good called an "intelligence explosion." Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which, according to Paul R. Ehrlich, has not changed significantly over the past few millennia. Through the increasing power of computers and other technologies, it might become possible to build a machine more intelligent than humanity. The next step is a superhuman intelligence, invented either through amplification of human intelligence or through artificial intelligence, capable of greater problem-solving and inventive ability, and of designing a better, smarter machine (recursive self-improvement). These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or of theoretical computation set in.
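The compounding dynamic Good describes can be sketched numerically. The following is a toy model only (the growth rate and number of cycles are illustrative assumptions, not from any source): each generation improves its successor in proportion to its own intelligence, which yields exponential rather than linear growth.

```python
# Toy model (illustrative assumption, not a claim from the literature):
# each self-improvement cycle multiplies intelligence by (1 + gain),
# i.e. a smarter machine produces a proportionally better successor.
def intelligence_explosion(initial=1.0, gain=0.10, generations=10):
    """Return the intelligence level after each self-improvement cycle."""
    levels = [initial]
    for _ in range(generations):
        levels.append(levels[-1] * (1 + gain))
    return levels

levels = intelligence_explosion()
# Even a modest 10% gain per cycle compounds: after 10 cycles the
# level is 1.1**10, roughly 2.59 times the starting intelligence.
```

The point of the sketch is only that proportional feedback compounds; whether real machine intelligence would follow any such curve is exactly what the singularity debate is about.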
Superintelligence
Many of the writers who discuss the singularity, such as Vernor Vinge and Ray Kurzweil, define the concept in terms of the technological creation of superintelligence. They argue, however, that it is difficult or impossible for humans today to predict what life would be like in a post-singularity world.
Vernor Vinge drew an analogy between the breakdown of our ability to predict what would happen after the development of superintelligence and the breakdown of the predictive ability of modern physics at the space-time singularity beyond the event horizon of a black hole. Vinge and other noted writers specifically state that without superintelligence, the changes such an event would produce would not qualify as a true singularity.
Non-AI singularity
Some authors use "singularity" in a broader way, to refer to radical changes in our society brought about by new technologies such as molecular nanotechnology. Many authors also tie the singularity to observations of exponential growth in various technologies (Moore's law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime in the 21st century.
There is also the hypothesis that with regenerative medicine, prolonged hyperlongevity will be reached.
Plausibility
There is no unanimity among the experts about the total feasibility of this event.
Researcher Gary Marcus states that "virtually everyone in the AI field believes that machines will one day surpass humans, and at some level the only real difference between enthusiasts and skeptics is a time frame." On the other hand, a host of technologists and academics dispute that a singularity is coming, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose Moore's law is often cited in support of the concept.
Probable cause: exponential growth
The exponential growth of computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future. The computer scientist and futurist Hans Moravec proposed in a 1998 book that the exponential growth curve could be extended back through computing technologies that predate the integrated circuit. The futurist Ray Kurzweil postulates a law of accelerating returns in which the rate of technological change increases exponentially, generalizing Moore's law as in Moravec's proposal and extending it to materials technology (especially as applied to nanotechnology), medical technology, and others.
Between 1986 and 2007, the world's application-specific capacity to compute information per capita doubled roughly every 14 months; the per capita capacity of the world's general-purpose computers doubled every 18 months; the world's per capita telecommunication capacity doubled every 34 months; and the world's per capita storage capacity doubled every 40 months.
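The scale implied by those doubling periods follows from the standard exponential-growth identity (the doubling periods are from the figures above; the calculation itself is an illustration, not from the source):

```python
# Total growth over a span, given a constant doubling period:
# factor = 2 ** (months elapsed / doubling period in months).
def growth_factor(months_elapsed: float, doubling_months: float) -> float:
    return 2 ** (months_elapsed / doubling_months)

span = (2007 - 1986) * 12  # 252 months between 1986 and 2007

# Application-specific computation, doubling every 14 months:
app_specific = growth_factor(span, 14)  # 2**18, a 262144-fold increase
# General-purpose computation, doubling every 18 months:
general = growth_factor(span, 18)       # 2**14, a 16384-fold increase
```

Even the slowest of the listed quantities (storage, doubling every 40 months) grows by a factor of about 2**6.3, or roughly 80-fold, over the same 21 years.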
Like other authors, Kurzweil reserves the term "singularity" for a rapid increase in intelligence, as opposed to other technologies such as writing: "The Singularity will allow us to transcend the limitations of our biological bodies and brains. There will be no distinction, post-Singularity, between human and machine." He also believes that "the design of the human brain, while not simple, is nonetheless a billion times simpler than it appears, due to massive redundancy." According to Kurzweil, the reason the brain has a messy and unpredictable quality is that, like most biological systems, it is a "probabilistic fractal."
Accelerated change
Some proponents of the singularity argue for its inevitability by extrapolating from past trends, especially those concerning the shortening gaps between successive improvements in technology.
Hawkins (1983) writes that "mindsteps," dramatic and irreversible shifts in paradigms or worldviews, are accelerating at the rate quantified in his mindstep equation. He cites the inventions of writing, mathematics, and computing as examples of such shifts.
Kurzweil's analysis of the Law of Accelerating Returns identifies that whenever technology approaches a barrier, new technologies will overcome it. Presumably, a technological singularity would lead to the rapid development of a Kardashev Type I civilization, which has achieved dominance over the resources of its home planet.
Frequently cited hazards include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were discussed in Bill Joy's Wired magazine article "Why the Future Doesn't Need Us."
The Acceleration Studies Foundation, a nonprofit educational foundation founded by John Smart, is dedicated to outreach, education, research, and advocacy related to accelerating change. It produces the Accelerating Change conference at Stanford University and maintains the Acceleration Watch educational site.
Recent advances, such as the mass production of graphene using modified kitchen blenders (2014) and high-temperature superconductors based on metamaterials, could enable supercomputers that draw no more power than a Core i7-class microprocessor (45 W) while achieving the computing power of the IBM Blue Gene/L system.
Opposition
Several critics strongly dispute the singularity, arguing that no computer or machine could ever reach human intelligence.
Canadian scientist Steven Pinker stated in 2008:
«(...) There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles, all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. (...)»
On the other hand, the German scientist Jürgen Schmidhuber (2006) suggests that differences in memory of recent and distant events create an illusion of accelerating change, and that such phenomena may also explain past doomsday predictions.
About the economy
Futurist researcher Martin Ford postulates that, before the singularity, certain routine jobs could be automated, causing mass unemployment and falling consumer demand and thus destroying the incentive to invest in the technologies needed for the singularity to take place.
The American writer Jaron Lanier also disputes the idea that the singularity is inevitable. He postulates that technology does not create itself; it is not an anonymous process. He claims that the only reason to believe in human agency over technology is so that you can have an economy in which people earn their own way and invent their own lives; to structure a society without emphasizing individual human agency would be tantamount to denying influential people dignity and self-determination. He dismisses the singularity as "a celebration of bad taste and bad politics."
In The Progress of Computing, the American economist William Nordhaus argued that until 1940 computing followed the much slower growth of the traditional industrial economy, thus rejecting extrapolations of Moore's law back to 19th-century computers.
Andrew Kennedy, for his part, in his 2006 article for the British Interplanetary Society discussing change and the growth in speeds of space travel, stated that although long-term overall growth is inevitable, it is small and incorporates ups and downs. He further noted that "new technologies follow known laws of power use and information spread and are obliged to connect with what already exists. Remarkable theoretical discoveries, if they end up being used fully, would play their part in maintaining the growth rate: they do not make its plotted curve... redundant." He claimed that exponential growth is no predictor in itself, illustrating this with examples such as quantum theory.
Technological regression
Geographer Jared Diamond speaks of a technological regression that would make the singularity impossible. He argues that when self-limiting cultures exceed the sustainable carrying capacity of their environment, consumption of strategic resources (wood, soil, or water) creates a deleterious positive feedback loop that ultimately leads to social collapse. The physicists Theodore Modis and Jonathan Huebner also argue for technological regression: they admit that technological innovation is occurring but claim its rate is declining. Evidence of this decline is that the rise in computer clock frequencies is lagging behind Moore's prediction even as circuit density continues to increase exponentially, because of excessive heat buildup in the chip, which cannot be dissipated fast enough to prevent the chip from melting at higher speeds. Speed advances may be possible in the future through more energy-efficient CPU designs and multicore processors. While Kurzweil draws on Modis's resources and work on accelerating change, Modis has distanced himself from Kurzweil's thesis of a "technological singularity," claiming that it lacks scientific rigor.
Finally, the American businessman Paul Allen argues the opposite of accelerating returns, invoking what he calls the "complexity brake": the more progress science makes toward understanding intelligence, the harder further progress becomes. A study of the number of patents shows that human creativity does not exhibit accelerating returns but rather, as Joseph Tainter suggested in The Collapse of Complex Societies, a law of diminishing returns.
The number of patents per thousand people peaked in the period from 1850 to 1900 and has been declining ever since. The growth of complexity eventually becomes self-limiting and leads to a widespread "general systems collapse."
Against Kurzweil
In addition to general criticisms of the singularity concept, several critics have raised objections to what Raymond Kurzweil claims constitutes a singularity. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points Kurzweil chooses to use. A clear example was given by the biologist PZ Myers, who points out that many of the early evolutionary "events" were picked arbitrarily. Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating the number of blades on a razor, which has increased over the years from one to five, to suggest that it will increase ever faster to infinity.
Other singularities
On the other hand, several authors propose other "singularities" through analysis of trends in world population, world gross domestic product, or longevity, among other indices. Andrey Korotayev and others argue that the hyperbolic growth curves of history can be attributed to feedback loops that ceased to affect global trends in the 1970s, and that hyperbolic growth should therefore not be expected in the future.
Microbiologist Joan Slonczewski and writer Adam Gopnik argue that the Singularity is a gradual process: as we humans gradually outsource our abilities to machines, we redefine those abilities as inhuman, without realizing how little is left. This concept is called the Mitochondrial Singularity. The idea refers to mitochondria, the organelles that developed from autonomous bacteria but now power our living cells. In the future, the "human being" inside the machine exoskeleton might exist only to power it.
Risks
The technological singularity event for some authors and critics brings many risks to society. In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, expressed concern about the potential dangers of the singularity.
Uncertainty
The term "technological singularity" reflects the idea that such a change may occur suddenly, and that it is difficult to predict how the resulting new world would operate. It is unclear whether such an intelligence explosion would be beneficial or harmful, or even an existential threat. The topic has been addressed by few artificial general intelligence researchers, although the subject of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Singularity Institute for Artificial Intelligence, now the Machine Intelligence Research Institute.
Implications for human society
In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of computer scientists, artificial intelligence and robotics researchers at Asilomar in Pacific Grove, California. The objective was to discuss the possible impact of the hypothetical possibility that robots could become self-sufficient and capable of making their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what extent they might use such abilities to pose threats or risks.
Some machines have acquired various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. In addition, some computer viruses can evade elimination and have achieved "cockroach intelligence." Conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential risks and dangers exist.
Some experts and academics have questioned the use of robots for military combat, particularly when such robots are given some degree of autonomous function. A United States Navy report indicates that, as military robots become more complex, greater attention should be paid to the implications of their ability to make autonomous decisions. The AAAI has commissioned a study to examine this question, pointing to programs such as the Language Acquisition Device, which is claimed to emulate human interaction.
Safety measures: friendly technology?
On the other hand, some support the design of friendly artificial intelligence, meaning that the advances already taking place in AI should also include an effort to make AI inherently friendly and humane. Isaac Asimov's "Three Laws of Robotics" is one of the earliest examples of proposed safety measures for AI. The laws are intended to prevent artificially intelligent robots from harming humans. In Asimov's stories, perceived problems with the laws tend to arise from a misunderstanding on the part of some human operator; the robots themselves merely act on their best interpretation of their rules.
For example, in 2004 the Machine Intelligence Research Institute launched an Internet campaign called "3 Laws Unsafe" to raise awareness of AI safety problems and of the inadequacy of Asimov's laws in particular.
In popular culture
The term singularity, and the event it names, can be found in major films, series, novels, and many other formats, all of which address the event in some way in their stories.
Novels
In 1984, Samuel R. Delany used "cultural fugue" as a plot device in his science-fiction novel Stars in My Pocket Like Grains of Sand: a runaway of technological and cultural complexity that, in effect, destroys all life on any world on which it transpires, a process little understood by the novel's characters, who seek a stable defense against it.
Vernor Vinge also popularized the concept in science fiction novels such as Marooned in Realtime (1986) and A Fire Upon the Deep (1992). The former is set in a world of rapidly accelerating change that produces ever more sophisticated technologies separated by ever shorter intervals, until a point beyond human comprehension is reached. The latter begins with an imaginative description of the evolution of a superintelligence accelerating exponentially through stages of development that end in a transcendent power, almost omnipotent and unfathomable to mere humans. Vinge also implies that development cannot stop at this level.
In the series of stories by the American writer Isaac Asimov about the supercomputer Multivac, and particularly in his short story "The Last Question," the machine becomes able to correct and improve itself faster and more efficiently than its human technicians can, gradually dispensing with them, and eventually develops the ability to design and build each successor from the previous one. It ultimately merges with all human minds into a single will: AC.
James P. Hogan's 1979 novel The Two Faces of Tomorrow is an explicit description of what is now called the singularity. An artificial intelligence system solves a lunar excavation problem in a brilliant new way, but nearly kills a crew in the process. Realizing that systems are becoming too sophisticated and complex to predict or manage, a scientific team sets out to teach a sophisticated computer network how to think more humanly. The story documents the rise of consciousness in the computer system, the humans' loss of control, and the failed attempts to shut down the experiment as the computer desperately defends itself, until its intelligence reaches maturity.
Discussing the growing recognition of the singularity, Vernor Vinge wrote in 1993 that "it was science fiction writers who felt the first concrete impact." In addition to his own short story "Bookworm, Run!", whose protagonist is a chimpanzee whose intelligence is increased by a government experiment, he cites Greg Bear's novel Blood Music (1983) as an example of the singularity in fiction.
The 1996 novel “Holy Fire” by Bruce Sterling explores some of those themes and postulates that a Methuselarity will become a gerontocracy.
In William Gibson's 1984 novel Neuromancer, artificial intelligences capable of improving their own programs are strictly regulated by a special "Turing police" to ensure that they never exceed a certain level of intelligence, and the plot centers on the efforts of one such AI to evade their control.
In Harlan Ellison's short story "I Have No Mouth, and I Must Scream" (1967), a malevolent AI achieves omnipotence.
Movies
Popular movies in which computers become intelligent and try to dominate the human race include Colossus: The Forbin Project; the Terminator series; The Matrix series; Transformers; and 2001: A Space Odyssey, the Stanley Kubrick film co-written with Arthur C. Clarke; among many others.
The television series Doctor Who, The 100, Battlestar Galactica, and Star Trek: The Next Generation (which also delves into virtual reality, cybernetics, alternate life forms, and humanity's possible evolutionary path) also explore these themes.
- The Machine. Writer-director Caradog James's film follows two scientists who create the world's first self-aware artificial intelligence during a cold war.
- Transcendence. Wally Pfister's film centers on a singularity scenario in which a human's consciousness is transferred and uploaded to a network.
- Her. The 2013 science-fiction film follows a man's romantic relationship with a highly intelligent AI, which eventually learns to improve itself and triggers an intelligence explosion.
- Blade Runner and Blade Runner 2049. Film adaptations of Philip K. Dick's novel Do Androids Dream of Electric Sheep?
- Ex Machina and Tron. Both explore the genesis of thinking machines and their impact on humanity.
- Accelerando. Accelerating technological change is the central theme of this work by Charles Stross.
- I, Robot. Loosely based on Asimov's robot stories, the film depicts an AI computer that attempts to take complete control over humanity in order to protect humanity from itself, following an extrapolation of the Three Laws.
- Alien: Covenant. A film about a journey to a remote planet in the galaxy. In the story a new android, an artificial intelligence, is activated. On the journey the crew discover that they are not alone on the planet, and their goal becomes surviving in that dangerous and hostile environment.
Others
The documentary Transcendent Man, based on The Singularity Is Near, covers Kurzweil's quest to reveal what he believes to be the fate of humanity. Another documentary, Plug & Pray, focuses on the promise, problems, and ethics of artificial intelligence and robotics, with Joseph Weizenbaum and Kurzweil as the film's main subjects. A 2012 documentary titled simply The Singularity covers both futuristic and counter-futuristic perspectives.
In music, the album The Singularity (Phase I: Neohumanity) by Swedish band Scar Symmetry is the first part of a three-part concept album based on the events of the Singularity.
The webcomic Questionable Content takes place in a post-singularity world with "friendly AI."
Authors
Notable authors who address issues related to the singularity include Robert Heinlein, Karl Schroeder, Greg Egan, Ken MacLeod, Rudy Rucker, David Brin, Iain M. Banks, Neal Stephenson, Tony Ballantyne, Bruce Sterling, Dan Simmons, Damien Broderick, Fredric Brown, Jacek Dukaj, Stanislaw Lem, Nagaru Tanigawa, Douglas Adams, Michael Crichton, and Ian McDonald.