
15.5.2.10 Artificial general intelligence will be achieved in supercomputers

I began this section on predictions for the next 15 years by stating a few things that would not happen. Among other things, I said: “Artificial Intelligence systems capable of human-level dialog won’t exist on your smart devices and game machines in this time period.” While I believe that statement to be true, I also believe that by 2030 we will have finally achieved artificial general intelligence through human brain simulation on supercomputers. In other words, an AI capable of passing the Turing Test will exist, but it will take the full resources of one of the world’s fastest machines of the day to carry out the human brain simulation underlying that intelligence.

For nearly two decades, the prominent futurist Ray Kurzweil[20] has been predicting that exascale supercomputers capable of one billion billion computations per second (one exaFLOPS) would arrive by 2020, and that real time human brain simulations running on such machines would enable full-scale human AI by 2029. These predictions, once considered by most to lie somewhere between fringe science and science fiction, have now, at least in part, become mainstream science, backed by government initiatives in the US and Europe with billions of research dollars promised over the next decade. Given this commitment, together with the progress being made in understanding and simulating regions of the brain, and the long-term stability of Moore’s Law that I have personally witnessed over my lifetime, I am willing to go on record saying that I believe Kurzweil is correct in his prediction that artificial general intelligence through human brain simulation will succeed by 2030.

The human brain contains about 100 billion (1 × 10¹¹) neurons, or individual brain cells. Individual neurons can directly communicate with anywhere from hundreds to thousands of other neurons by sending electro-chemical signals across the tiny gaps between neurons called synaptic gaps, or synapses. Thus, the total number of synapses in the human brain is generally estimated to be somewhere between 100 trillion (1 × 10¹⁴) and 1 quadrillion (1 × 10¹⁵). These numbers, together with results from early brain research projects such as the Blue Brain Project, which successfully simulated a small part of the rat brain, have led researchers to estimate that real time neural simulation of the entire human brain will require about 1 to 10 exaFLOPS of computing power – in other words, between 1 billion billion (1 × 10¹⁸) and 10 billion billion (1 × 10¹⁹) computations per second. (FLOPS stands for FLoating-point Operations Per Second and is a standard measure of computer speed based on the number of mathematical operations that can be processed each second.)
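As a back-of-envelope check on these figures, the estimate can be sketched in a few lines. The synapse counts come from the text; the per-synapse operation rate is an illustrative assumption chosen to show how the 1 to 10 exaFLOPS range arises, not a measured value.

```python
# Back-of-envelope estimate of the compute needed for real time
# brain simulation. Synapse counts are the estimates quoted in the
# text; the per-synapse rate is an illustrative assumption.

SYNAPSES_LOW = 1e14              # 100 trillion synapses
SYNAPSES_HIGH = 1e15             # 1 quadrillion synapses
OPS_PER_SYNAPSE_PER_SEC = 1e4    # assumed floating-point ops/sec per synapse

flops_low = SYNAPSES_LOW * OPS_PER_SYNAPSE_PER_SEC    # 1e18 = 1 exaFLOPS
flops_high = SYNAPSES_HIGH * OPS_PER_SYNAPSE_PER_SEC  # 1e19 = 10 exaFLOPS

print(f"{flops_low:.0e} to {flops_high:.0e} FLOPS")
```

The point of the sketch is that the wide 1 to 10 exaFLOPS range follows directly from the uncertainty in the synapse count itself.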

As mentioned in section 15.3.2, the fastest supercomputer in the world (as of June 2013) runs at 33.83 petaFLOPS, and the speed of such systems has historically doubled approximately every two years. Given the emphasis on supercomputing and the worldwide competition for ‘bragging rights’ over which country has the fastest machine, supercomputing experts expect a computer capable of 1 or more exaFLOPS by 2020.
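To see what the doubling trend alone implies, we can extrapolate from the June 2013 figure. This is only a sketch: it assumes the historical two-year doubling continues unchanged.

```python
import math

# Extrapolate the historical doubling trend from the June 2013 leader.
# Assumes a clean two-year doubling period, which is a simplification.
current_pflops = 33.83       # petaFLOPS, June 2013
target_pflops = 1000.0       # 1 exaFLOPS = 1000 petaFLOPS
doubling_period_years = 2.0

doublings_needed = math.log2(target_pflops / current_pflops)
years_needed = doublings_needed * doubling_period_years

print(f"~{years_needed:.1f} years, i.e. around {2013 + years_needed:.0f}")
```

A straight extrapolation lands in the early 2020s rather than exactly 2020, so the experts’ expectation implies a modest acceleration beyond the historical trend, of the kind the national competition described above might produce.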

In addition to the progress being made in constructing the necessary hardware to support human brain simulation, work on understanding the brain itself is proceeding apace.[21] In Europe, Dr. Henry Markram’s Human Brain Project, the successor to his highly successful Blue Brain Project, was funded by the European Union in January 2013 for up to $1.3 billion over a ten-year period. In the United States, President Obama announced the BRAIN Initiative in April 2013 with a projected budget of $3 billion over a ten-year period. These two decade-long projects focus on understanding the brain and ultimately building real time human brain simulations.

Thus, it looks increasingly likely that within 10 to 15 years we will have both the supercomputer hardware and the brain simulation software to implement a real time neural simulation of the human brain. Such a simulation should “appear” intelligent and be capable of passing the Turing Test – though it may take several years to do so as the simulation will probably need to be taught to use language in the same way that human children must be taught to speak.

When humans achieve the goal of building an artificial general intelligence we will have accomplished an amazing feat, at least as significant in human history as Man’s first steps on the Moon. Even so, the first exascale computer running a human brain simulation will fill a large room and is projected to consume on the order of 20 to 30 megawatts of power. Compare this to your brain, which fits in your skull and consumes the equivalent of about 20 watts of power. In other words, even if these projects succeed, our brains should still be about a million times more power efficient than a simulated brain running on a supercomputer.
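The million-fold figure follows directly from the numbers above. As a rough sketch, using the midpoint of the projected 20 to 30 megawatt range:

```python
# Rough power-efficiency comparison using the figures in the text.
supercomputer_watts = 25e6   # midpoint of the projected 20-30 MW range
brain_watts = 20.0           # approximate human brain power draw

ratio = supercomputer_watts / brain_watts
print(f"The brain is ~{ratio:,.0f}x more power efficient")
```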

I will close this section by noting that human brain simulation is only one possible path to achieving artificial general intelligence and in some ways the least interesting. What I mean by this is that successfully copying a system that already produces intelligent behavior (the human brain) does not mean we will necessarily understand how that system produces intelligent behavior. It would be far more satisfying to ‘engineer’ intelligence directly. This is, in fact, the direction that AI research has been pursuing for well over half a century now.

Even though progress in AI has not been anywhere near as fast as researchers had originally hoped and expected, consider that today’s fastest supercomputer is at most a few percent as powerful as the human brain, and our servers and mobile smart devices are just a tiny fraction of 1% as powerful. Given those numbers, the fact that we have made any measurable progress at all in artificial intelligence up to now is rather amazing. It seems only reasonable that problems that are today totally intractable to engineered solutions, such as fluency in human languages and common sense reasoning, will surely fall when sufficient computational resources can be brought to bear.
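The “few percent” figure comes from comparing the June 2013 leader against the 1 to 10 exaFLOPS brain estimate. A quick sketch using only numbers quoted earlier in the section:

```python
# Fraction of estimated human-brain compute available in June 2013.
supercomputer_eflops = 0.03383   # 33.83 petaFLOPS expressed in exaFLOPS
brain_eflops_low, brain_eflops_high = 1.0, 10.0  # estimate from the text

pct_high = 100 * supercomputer_eflops / brain_eflops_low   # vs. 1 EF estimate
pct_low = 100 * supercomputer_eflops / brain_eflops_high   # vs. 10 EF estimate
print(f"{pct_low:.2f}% to {pct_high:.2f}% of a human brain")
```

Depending on which end of the brain estimate one takes, the fastest machine of the day sits somewhere between roughly a third of a percent and a few percent of human-brain scale.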


Footnotes

[20] And why should we listen to this guy? Ray Kurzweil is considered the “father” of modern optical character recognition software (giving computers the ability to “read” printed text), text-to-speech synthesis technology (giving computers the ability to convert text into spoken words), and automated speech recognition technology (giving computers the ability to understand spoken languages).

[21] According to Horst Simon, the Deputy Director of the Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center (NERSC), speaking in May 2013, our best brain simulations are at 4.5% of human scale, running at 1/83 of real time speed. http://www.extremetech.com/computing/155941-supercomputing-director-bets-2000-that-we-wont-have-exascale-computing-by-2020
