
15.5.3.3 Ray Kurzweil’s singularity will not take place by 2050, or even by 2100

As discussed above, scientists are beginning to converge on an estimate of 1 to 10 exaFLOPS as the amount of raw processing power needed to simulate a human brain in real time. Some, such as Ray Kurzweil, believe that human-level cognition could eventually be accomplished on less powerful machines. Kurzweil estimates that 10 petaFLOPS, 1/100 of an exaFLOPS, should be sufficient.[22] Remarkably, since our fastest supercomputers already run at over 33 petaFLOPS (as of 2013), by Kurzweil's estimate we should already possess the supercomputer hardware necessary to support an artificial general intelligence. If Kurzweil's estimate is correct, why haven't we already constructed an AI? The answer is simple: we haven't yet figured out how to write the appropriate software.
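The comparison above can be made concrete with a few lines of arithmetic. This is only a sketch of the figures quoted in the text (Kurzweil's 10 petaFLOPS estimate, the 10 exaFLOPS upper-bound estimate, and the roughly 33 petaFLOPS of the fastest 2013 supercomputer), not performance measurements:

```python
# Rough comparison of brain-simulation estimates with 2013 hardware.
# All figures come from the text above; units are FLOP/s.

PFLOPS = 1e15                          # one petaFLOPS
kurzweil_estimate = 10 * PFLOPS        # Kurzweil's low-end estimate
upper_estimate = 10 * 1000 * PFLOPS    # 10 exaFLOPS upper-bound estimate
fastest_2013 = 33 * PFLOPS             # fastest supercomputer as of 2013

# 2013 hardware already exceeds Kurzweil's estimate by ~3.3x...
print(fastest_2013 / kurzweil_estimate)   # 3.3
# ...but falls ~300x short of the high-end estimate.
print(upper_estimate / fastest_2013)      # ~303
```

On Kurzweil's numbers the hardware gap closed years ago; on the high-end estimate, a few hundred-fold (roughly a decade and a half of doubling) remained.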

For many years, Kurzweil has used the year 2029 as his target for when we will achieve human level intelligence in a machine – a machine capable of passing the Turing Test. When he first came out with these predictions, most ‘reputable’ scientists gently shook their heads and smiled. But that is changing rapidly.

Regardless of whether 10 petaFLOPS, or 10 exaFLOPS, or something in between turns out to be the magic number necessary to support human level artificial intelligence, we appear to be quite close to achieving the necessary hardware. Additionally, given recent progress in artificial vision (Kinect), speech recognition and generation (Siri, Google Now), deep question answering (IBM’s Watson), and ongoing initiatives in the US and Europe to understand how the human brain works, the software component finally seems to be falling into place. Thus, Kurzweil’s estimate for achieving AGI (artificial general intelligence) by 2029 now seems reasonable to many knowledgeable individuals.

Achieving artificial general intelligence, however, is not Kurzweil's most famous, nor most controversial, prediction. Essentially, Kurzweil believes that the exponential doubling trend in computing power will continue indefinitely, and this leads him to an astonishing prediction: by 2045, a "technological singularity" will occur.

The singularity is a point in time at which the rate of technological change will become so rapid that unenhanced humans will no longer be able to comprehend what is happening. Kurzweil believes that humans (or at least a significant portion of the human population) will choose to merge with superintelligent AIs, leading to a post-singularity world of near-immortal, near-God-like beings (Figs. 15.19 and 15.20).

Figure 15.19 Trailer for the film "Transcendent Man," an excellent documentary about the life of Ray Kurzweil and his ideas concerning the Singularity

Figure 15.20 Very brief summary of Kurzweil’s “Six Epochs of Evolution” by Jason Silva

Obviously, this is a controversial idea that many dismiss as a form of religion – “rapture of the nerds” is a phrase that is sometimes heard.

But let's look at this prediction in a bit more detail, assuming for the sake of argument that we will have achieved one human AI equivalent in a supercomputer by 2030 – something that many now view as reasonable. If the exponential increase in computing speed continues at the rate of approximately one doubling every 2 years, then in 20 years computing power will have increased roughly 1,000-fold (ten doublings: 2^10 = 1,024). Thus, by 2050 a "laptop equivalent" (which runs about 1,000 times slower than a supercomputer) should possess one human-level equivalent, and a supercomputer should possess 1,000 human-level equivalents. In another 20 years, by 2070, the laptop would support 1,000 human-level equivalents and the supercomputer one million. By 2090, the average laptop would hold one million human-level equivalents, and the supercomputer a billion.

Under such circumstances how long do you think humans would remain the dominant intellectual force on the planet? How long would we remain in control? Wouldn’t the only option be to “join” them?

While I do think many of the predictions that Kurzweil makes are reasonable and will come to pass over the course of the next 50 years – such as artificial general intelligence, greatly expanded human lifespans, and nanotech assemblers able to construct most physical goods (including food, clothing, and shelter) directly from raw materials at almost zero cost – I do not believe the singularity will occur during the 21st century.

My reason for this view is simple: humans are not ready to 'step aside' as the premier species on the planet. Moore's Law (and any successor that continues the exponential doubling of computing power) is not a physical "law" – this progress happens only because we humans work hard to make it happen. In my opinion, no sane human would want his laptop to be a thousand times smarter than he is. So, we won't build such things. We will begin to limit the advance of future computer systems – either their intelligence or their 'desire for independence' – so that we humans remain firmly in control for the foreseeable future.

Eventually, at some point in the distant future, humans may decide that merging with intelligent systems to create super-powerful human/AI hybrids is a desirable thing. These hybrids would presumably blend the creativity and insight of human beings with the exponentially increasing power of computing hardware to become a fundamentally different type of "post-human". Thus, I'm not saying the singularity will never happen, just that it is not something that will occur any time "soon".


Footnotes

[22]  The basic idea underlying this lower estimate is that your brain has lots of redundancy that we should be able to eventually eliminate. Once we figure out how various evolved systems function, we should be able to redesign them to function more efficiently.
