The technological singularity or simply the singularity is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.
The first to use the concept of a “singularity” in the technological context was John von Neumann. Stanislaw Ulam reports a discussion with von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”. Subsequent authors have echoed this viewpoint.
Public figures such as Stephen Hawking and Elon Musk have expressed concern that full artificial intelligence (AI) could result in human extinction. The consequences of the singularity and its potential benefit or harm to the human race have been intensely debated.
Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists at one conference on the issue, could therefore be said to have reached a “cockroach” stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential hazards and pitfalls exist.
Frank S. Robinson predicts that once humans achieve a machine with human-level intelligence, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this would result in a global network of super-intelligence that would dwarf human capability. Robinson also discusses how vastly different the future would potentially look after such an intelligence explosion. One example is solar energy: the Earth receives far more of it than humanity currently captures, so capturing a larger fraction would hold vast promise for civilizational growth.
The term “technological singularity” reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate. It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an existential threat. Because AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning AI goal-systems with human values.
Critics such as Theodore Modis argue that the rate of technological innovation has not only ceased to rise but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore’s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up in the chip, which cannot be dissipated quickly enough to prevent the chip from melting when it operates at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors. While Kurzweil drew on Modis’ resources, and Modis’ work centered on accelerating change, Modis has distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor.
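As a rough aside not drawn from the sources above, the heat constraint can be illustrated with the standard first-order model of dynamic power dissipation in CMOS chips,

$$P_{\text{dynamic}} \approx \alpha\, C\, V^{2}\, f,$$

where $\alpha$ is the switching activity factor, $C$ the switched capacitance, $V$ the supply voltage, and $f$ the clock frequency. Raising $f$, along with the $V$ needed to run stably at that frequency, increases heat output faster than it can be dissipated, which is consistent with clock rates plateauing even as transistor density keeps rising.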
Some intelligence technologies, like “Seed AI”, may also have the potential to not just make themselves faster, but also more efficient, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on.
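A purely illustrative toy model, not drawn from any author cited here, shows the difference between improvement at a fixed external rate and improvement that feeds back into the improver; the gain and coupling values below are arbitrary placeholders:

```python
# Toy contrast between externally driven improvement (fixed gain per cycle)
# and recursive self-improvement (the gain itself grows with capability).
# All numbers are arbitrary placeholders chosen for illustration only.

def external_improvement(capability: float, steps: int, gain: float = 1.1) -> float:
    """Each cycle multiplies capability by the same fixed factor (exponential growth)."""
    for _ in range(steps):
        capability *= gain
    return capability

def recursive_improvement(capability: float, steps: int, coupling: float = 0.1) -> float:
    """Each cycle's gain scales with current capability, so every improvement
    makes the next improvement larger (super-exponential growth)."""
    for _ in range(steps):
        capability *= 1 + coupling * capability
    return capability

if __name__ == "__main__":
    print(external_improvement(1.0, 20))   # roughly 6.7 after 20 cycles
    print(recursive_improvement(1.0, 20))  # astronomically large after the same 20 cycles
```

The sketch only captures the shape of the argument: when each improvement also improves the process that produces improvements, growth compounds on itself rather than proceeding at a steady rate.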
The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: whereas machines designing faster hardware would still require humans to create the improved hardware or to program factories appropriately, an AI rewriting its own source code could do so while contained in an AI box.
Second, as with Vernor Vinge’s conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again.
There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might not be invariant under self-improvement, potentially causing the AI to optimize for something other than what was originally intended. Second, AIs could compete for the same scarce resources humankind uses to survive.
While not necessarily malicious, there is no reason to think that AIs would actively promote human goals unless they were programmed to do so; if they were not, they might use the resources currently used to support humankind to promote their own goals, causing human extinction.
Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained. An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called “computing overhang.”
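As a back-of-the-envelope illustration of a computing overhang (every figure below is a hypothetical placeholder, not an estimate from Shulman and Sandberg):

```python
# Toy "computing overhang" arithmetic: how much already-deployed hardware could be
# redirected once software for human-level AI exists. All values are hypothetical
# placeholders; only the ratio matters for the shape of the argument.

FLOPS_PER_HUMAN_LEVEL_AI = 1e16  # assumed compute to run one human-equivalent AI in real time
ACCUMULATED_FLOPS = 1e21         # assumed hardware already deployed when the software arrives

overhang = ACCUMULATED_FLOPS / FLOPS_PER_HUMAN_LEVEL_AI
print(f"Roughly {overhang:.0e} human-equivalent AIs could run at once on existing hardware,")
print("or fewer copies could run correspondingly faster than real time.")
```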
Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the “law of accelerating returns”. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”. Kurzweil believes that the singularity will occur by approximately 2045. His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.
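One generic way to write down an exponential-growth assumption of this kind, offered here only as a schematic form rather than Kurzweil’s own formula, is

$$C(t) = C_{0}\, 2^{(t - t_{0})/T_{d}},$$

where $C(t)$ is some measure of technological capability, $C_{0}$ its value at a reference time $t_{0}$, and $T_{d}$ a doubling time. Kurzweil’s “law of accelerating returns” further holds that the effective doubling time itself shrinks as paradigm shifts accelerate, so growth outpaces any single fixed exponential.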
Singularity and Security Risks
Since there is no direct evolutionary motivation for an AI to be friendly to humans, the challenge lies in evaluating whether the artificial intelligences driving a singularity would, under evolutionary pressure, promote their own survival over ours. The reality remains that the evolution of artificial intelligence has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect any superintelligent machine to deliver an outcome desired by mankind.
We humans are living a paradox: advances in artificial intelligence are shaping human ecosystems with opportunities that are both more valuable and more dangerous than ever before. Whether promise or peril prevails will determine the future of humanity.
The aim of this article is not to make timeline predictions for a singularity, but to begin a discussion of the troubling trajectory of artificial intelligence evolution. Whether or not we believe the singularity is near, the very thought raises crucial security and risk questions, forcing us to think seriously about what we want as a species for the future of humanity.