Moore's Law & the Future of AI
The neuromorphic supercomputer SpiNNaker, with 500,000 processing cores, has been active in the United Kingdom since 2016. In late 2018, SpiNNaker was upgraded to twice that size. SpiNNaker stands for ‘Spiking Neural Network Architecture’ and was built to simulate select regions of the brain for study. The new million-core system has broad applications in neuroscience and is an impressive achievement; however, its maximum computational power still amounts to only about one percent of the scale of the human brain.
The human brain contains a network of roughly 100 billion neurons, whereas SpiNNaker’s theoretical maximum capacity is about 1 billion simulated neurons. The researchers on the project have yet to report running any model approaching that limit; so far the machine has peaked at an 80,000-neuron segment of the human cerebral cortex. To put a scale of 1 billion neurons in perspective: a dog’s brain contains only about 1.6 × 10⁸ (160,000,000) neurons, while about 3 × 10⁸ can be found in the brain of a cat.
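For a rough sense of those ratios, here is a quick Python sketch (purely illustrative; the variable names are mine, and the neuron counts are the figures quoted above):

```python
# Neuron counts quoted in this post, expressed as fractions of human scale
brains = {
    "human": 100e9,          # ~100 billion neurons
    "SpiNNaker (max)": 1e9,  # theoretical ceiling of the million-core machine
    "cat": 3e8,
    "dog": 1.6e8,
}

for name, count in brains.items():
    print(f"{name}: {count / brains['human']:.2%} of human scale")
# SpiNNaker lands at exactly the 1% figure mentioned above;
# a cat comes in at 0.30%, a dog at 0.16%.
```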
Recall that Moore’s law states the number of transistors contained on a microchip will (on average) double every 18 to 24 months. Or, stated mathematically:
$$FV = PV \, e^{rt}, \qquad r = \frac{\ln 2}{1.5\ \text{yr}}$$
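As a concrete illustration, the sketch below projects a count forward under that growth law. This is a minimal example; the function name `transistors_after` is my own, not from any library:

```python
import math

def transistors_after(pv: float, years: float, doubling_period: float = 1.5) -> float:
    """Project a starting count `pv` forward by `years` under continuous
    exponential growth that doubles every `doubling_period` years."""
    r = math.log(2) / doubling_period  # continuous growth rate, per year
    return pv * math.exp(r * years)

# Three years at an 18-month doubling period is two doublings, i.e. a 4x increase:
print(transistors_after(1e9, 3))  # -> ~4.0e9
```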
While a machine capable of simulating only 1% of the human brain may not seem very impressive outside of its research applications, Moore’s law suggests otherwise. If the exponential growth that transistor counts have experienced over the last half-century were to continue, we could expect the emergence of a machine tantamount to human intelligence in less than 10 years.
In fact, if one assumes the complexity of artificial neural networks can be scaled in direct proportion to the number of available transistors, we can calculate how many years away a system with SpiNNaker’s processing-core count is from breaching the threshold of 100 billion neurons:
$$t = \frac{\ln(FV/PV)}{r} = \frac{\ln\left(10^{11}/10^{9}\right)}{\ln 2 \,/\, 1.5\ \text{yr}} \approx 10\ \text{yr}$$
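The same figure can be checked numerically. A minimal sketch, assuming continuous compounding at the 18-month doubling rate (variable names are mine):

```python
import math

pv = 1e9   # present scale: ~1 billion neurons (SpiNNaker-class)
fv = 1e11  # target scale: ~100 billion neurons (human brain)
r = math.log(2) / 1.5  # continuous growth rate per year (doubling every 18 months)

t = math.log(fv / pv) / r
print(f"{t:.2f} years")  # -> 9.97 years
```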
Startling as this may be, Moore’s law as applied to the traditional microprocessor is slated to collapse by 2020. In other words, it’s likely that the exponential growth in transistor counts will cease before a machine capable of simulating an organism as complex as the human mind becomes economically achievable. This is because as transistors grow ever smaller, they eventually reach a material limit where quantum-mechanical effects take over.
When our silicon transistors finally scale down to the thickness of a single atom, quantum mechanics will render it impossible to keep track of an electron. This is a consequence of Heisenberg’s Uncertainty Principle, and researchers in Silicon Valley are frantically trying to evade its fallout.
Intel has funded research into 3D chip manufacturing (recently on display at CES 2019), while Google, IBM, and many others are investing heavily in quantum computers for industrial applications. So far, however, such technologies have produced only modest advances.
Still, the computational power required to simulate all 100 billion neurons is, in principle, hindered only by cost.
In 2014, Stanford bioengineer Kwabena Boahen estimated that the cost of his group’s neuromorphic Neurogrid boards (each built from sixteen Neurocore chips) could be driven down to an impressive $400 per board, with each board capable of simulating 1,048,576 neurons. That works out to a minimum price tag of $38,146,973 for a 100-billion-neuron supercomputer.
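The arithmetic behind that price tag is straightforward; here is a quick sanity check in Python (assuming, as above, that cost scales linearly with board count):

```python
neurons_needed = 100e9         # human-brain scale
neurons_per_board = 1_048_576  # 2**20 neurons per board
cost_per_board = 400           # estimated USD per board (Boahen, 2014)

boards = neurons_needed / neurons_per_board   # ~95,367.4 boards
print(f"${boards * cost_per_board:,.0f}")     # -> $38,146,973
```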