
Neuromorphic Computing Chipsets and Their Implementations with Reference to Intel’s Loihi

Shishir Mani Tripathi

Editorial Intern, The Indian Learning

Indian Society of Artificial Intelligence & Law


Introduction

Human brains are unique. They process information faster than even the most advanced technology of the 21st century, all while being extremely energy efficient. In pursuit of replicating the brain's processing capabilities, engineers developed computing systems built around processing units called chips. Most of the chips used in advanced computing systems today are based on the von Neumann architecture. This design, commendable on its own, is still not as efficient or as fast as the human brain. The CMOS (Complementary Metal Oxide Semiconductor) technology currently used to produce integrated circuits is also energy intensive. If data storage and communication continue to grow at the contemporary rate, the energy consumed by CMOS processing units running artificial neural networks (ANNs) is projected to surpass 10²⁷ joules by 2040, exceeding the total energy produced globally. Hence, low-energy substitutes based on non-von Neumann architectures are being explored.

Researchers believe that the answer lies in neuromorphic computing chips, or 'silicon brains'. This involves arranging artificial neurons into Spiking Neural Networks (SNNs) so that they function like the human brain. Essentially, unlike the von Neumann architecture, memory and processing can sit in the same place, which further improves energy efficiency.

The potential of neuromorphic computing is so vast that its market is expected to grow to USD 272.9 million by 2022, a CAGR of 86.0% over its 2016 size. Over the years, major technology giants including IBM Corporation, Intel and General Vision have invested in this arena. The most promising developments have come from Intel with its revolutionary 'Loihi' chipsets: 130,000 neurons optimized for SNNs, a 128-core chip design and the capability of becoming smarter with use are some of their notable features. This paper attempts to explain what neuromorphic computing is and how it can be used to solve the problems of present processing systems. Recent developments such as Intel's Loihi and its possible implementations are also discussed.


Understanding Silicon Brains – Neuromorphic (Brain-like) Computing


Before familiarizing ourselves with the working and capabilities of neuron-based computing chips, it is essential to understand the working and limitations of traditional processing units based on the von Neumann architecture.

John von Neumann is one of the founding fathers of modern computation. His most significant contribution to computing is the von Neumann architecture for chips and processing units. It consists of a central processing unit made up of arithmetic/logic units (ALUs) and processor registers, a control unit that decodes and executes instructions, a memory holding both data and instructions, and input and output mechanisms.

The above setup proved so successful that it is still present in most computers today, 75 years after its introduction. The key criticism of the von Neumann architecture is popularly known as the von Neumann bottleneck. It exists because program memory and data memory share the same bus (the connection used to transfer data between components). This single bus can access only one of the two memories at a time, so throughput (the amount of data passing through the system from input to output) is lower than the rate at which the CPU can work. As a result, these chips waste both time and energy, since computation depends on the transfer of information between the CPU and memory.
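To make the bottleneck concrete, the toy simulator below runs a tiny three-instruction program against a single shared memory, counting one bus transaction per instruction fetch and one per data access. The machine, instruction format and counts are invented for this sketch; the point is only that fetches and data traffic serialize on the one bus, so the memory path, not the ALU, sets the pace.

```python
# Toy von Neumann machine: instructions and data live in ONE memory behind
# ONE bus, so every instruction fetch competes with every data access.

memory = {
    0: ("LOAD", 100),    # acc = mem[100]
    1: ("ADD", 101),     # acc = acc + mem[101]
    2: ("STORE", 102),   # mem[102] = acc
    3: ("HALT", None),
    100: 7, 101: 35, 102: 0,
}

pc, acc, bus_transactions = 0, 0, 0

while True:
    op, addr = memory[pc]                 # instruction fetch: one bus transaction
    bus_transactions += 1
    pc += 1
    if op == "HALT":
        break
    if op == "LOAD":
        acc = memory[addr]                # data read: another bus transaction
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc                # data write: another bus transaction
    bus_transactions += 1

print(f"result={memory[102]}, bus transactions={bus_transactions}")
# Four of the seven transactions were instruction fetches; the CPU spent more
# time talking to memory than computing -- the von Neumann bottleneck.
```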

Solving the von Neumann bottleneck essentially requires that data be made local to the CPU, eliminating the need for a separate memory unit. This saves a considerable amount of time and energy by minimizing latency and maximizing throughput. Present chipsets have tried to mitigate the bottleneck through one or more caches (auxiliary memory enabling high-speed retrieval) placed between the CPU and the memory unit. This approach to minimizing latency worked well until the amount of data being processed became so large that the interconnects could not be optimized any further.

The advent of AI, and the memory-heavy datasets that machine learning uses in order to develop, has led to demand for more computing power that could not be met without fundamental changes to the hardware of the chips themselves. This led to the search for a new computing paradigm.

Researchers believe the solution lies in either quantum computing or neuromorphic chips, and it is the latter that will probably be commercialized sooner. Neuromorphic computing was created to mimic the working of the human brain. The brain does not have separate memory and processing units. Inside it, neurons communicate and share information using pulse patterns called 'spikes'. A neuron releases chemicals that travel across a small gap called a 'synapse' to reach other neurons. The connectivity is enormous; a single neuron may be connected to 10,000 others.

In this way an action potential is communicated quickly and efficiently, and the timing of these spikes matters greatly for fast, effective communication. Neuromorphic computing uses Spiking Neural Networks (SNNs) to emulate the working of the human brain. Classic computing chipsets are based on transistors that are either on or off, that is, they work in binary (a zero or a one). SNNs more closely replicate the natural neural networks found in the brain and can produce outputs other than zeroes and ones.
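As a minimal sketch of how an SNN unit differs from a binary gate, consider the leaky integrate-and-fire (LIF) model, the standard textbook spiking neuron: it accumulates incoming current, leaks charge over time, and emits a spike only when its membrane potential crosses a threshold. The parameter values below are arbitrary illustrations, not taken from any particular chip.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the standard model used in
# spiking neural networks. Parameters are illustrative, not tuned.

def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Return the spike train produced by a stream of input currents."""
    potential, spikes = 0.0, []
    for current in input_current:
        potential = potential * leak + current  # integrate input, leak charge
        if potential >= threshold:              # threshold crossed: spike
            spikes.append(1)
            potential = reset                   # reset after firing
        else:
            spikes.append(0)
    return spikes

# A stronger input spikes earlier and more often: spike *timing* carries
# the signal, unlike a transistor's static 0/1 output.
print(simulate_lif([0.3] * 10))  # weak drive:   [0,0,0,1,0,0,0,1,0,0]
print(simulate_lif([0.6] * 10))  # strong drive: [0,1,0,1,0,1,0,1,0,1]
```

Both whether and when the neuron fires encode information, which is the timing sensitivity the paragraph above describes.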

Neuromorphic systems can be digital as well as analog, using artificial synapses and memristors respectively. The memristor, a revolutionary semiconductor device capable of both storing and processing data, performs the most critical role in neuromorphic computing: unlike the traditional transistor, it unifies storage and processing, and thereby directly addresses the von Neumann bottleneck. Another requirement for a neuromorphic chip is control over the amount of current, and the types of ions, that flow across the artificial synapses (the gaps between the artificial neurons). Depending on the amount of current and the type of ion received, the receiving neuron is activated to a different degree, giving far more computational options than a basic yes or no.
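A rough sense of how a memristor behaves can be had from the linear-drift model popularized by HP Labs (Strukov et al., 2008), sketched below with purely illustrative parameter values. The key property is that the device's resistance depends on the charge that has already passed through it, so it remembers its history, which is what lets one element both store a synaptic weight and apply it to signals.

```python
# Linear-drift memristor model (after Strukov et al., 2008); values illustrative.
# Resistance depends on the charge that has flowed through the device, so the
# device stores its own history: memory and signal path in one element.

R_ON, R_OFF = 100.0, 16_000.0   # resistance when fully doped / undoped (ohms)
D = 10e-9                       # device thickness (m)
MU = 1e-14                      # dopant mobility (m^2 per volt-second)

w = 0.5 * D                     # state: width of the doped, low-resistance region
dt = 1e-3                       # timestep (s)

def step(current):
    """Pass `current` (A) through the device for one step; return resistance."""
    global w
    w += MU * (R_ON / D) * current * dt    # dopant boundary drifts with current
    w = min(max(w, 0.0), D)                # the state is bounded by the device
    x = w / D
    return R_ON * x + R_OFF * (1.0 - x)    # resistance blends the two regions

print(f"before:   {step(0.0):8.1f} ohms")  # zero current just reads the state
for _ in range(40):
    step(1e-3)                             # positive current lowers resistance...
print(f"after +I: {step(0.0):8.1f} ohms")  # ...and it stays low: that is memory
```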

This ability to relay graded signals from neuron to neuron, with all of them working in parallel, means that neuromorphic chips could soon be far more energy efficient than traditional computers, particularly for extremely complex tasks.

Varied Approaches to Realizing Neuromorphic Chipsets


Contemporary chip design is largely based on traditional silicon transistors and poses an issue of controllability. The physical properties of materials like silicon make it hard to control the amount of current flowing between artificial neurons, and harder still to control its direction, so current bleeds across the chip without organization. To realize the revolutionary potential of neuromorphic computing, new materials as well as new chip architectures are required.

It is for this reason that companies and educational institutions alike are researching and developing materials that could tackle the issue. Some of these developments are discussed below:

MIT’s Brain-on-a-Chip

To tackle these hardware and structural limitations, a new design from an MIT team uses materials other than traditional silicon-only transistors. The MIT scientists used single-crystalline silicon and silicon-germanium stacked on top of one another; applying an electric field to this device produced a well-controlled flow of ions. In the most recent development, the engineers successfully designed a "brain-on-a-chip" constituted of tens of thousands of memristors.


These memristors are made from mixtures of silver, copper and silicon, and are so efficient that the chip can not only read a visual dataset quickly but also remember it, reproducing the stored images in a sharper form. This could have a game-changing influence on AI, which requires processing millions of images, remembering them so as to identify similar objects, and reproducing them when required.



IBM’s Neuromorphic System – TrueNorth

In 2011, IBM produced two prototype chips named 'Golden Gate' and 'San Francisco', each having 256 neurons. It introduced them as "worm-scale" neurosynaptic cores. These chips were capable of simple cognitive exercises such as playing Pong and recognizing handwritten digits. By 2013, the researchers had shrunk the core to one-fifteenth the area and made it 100 times more power efficient.


These cores were then arranged on a 64-by-64 grid, 4,096 cores in total, forming the TrueNorth chip with 1 million neurons and 256 million synapses. The cores operate independently and in parallel, and each houses both memory and processing, with the information it needs stored locally, just like the neurons in the human brain. The artificial neurons run an SNN and are programmed to send a signal, called a spike, when they reach a threshold. TrueNorth truly mimics the human brain not just in architecture but also in processing. The chip, housing 5.4 billion transistors, consumes only 73 milliwatts of power, roughly a thousand times less than a typical CPU.
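A loose software analogy for this event-driven, memory-local organization is sketched below; the grid size, routing rule and weights are invented for illustration, and real TrueNorth cores have 256 neurons each plus dedicated routing hardware. The point is that each core owns its neurons' state and computes only when a spike actually arrives, rather than being swept by a global clock.

```python
from collections import defaultdict, deque

# Toy event-driven grid of cores, loosely inspired by TrueNorth's layout.
# Each core owns its neurons' state (memory stays next to the processing),
# and a core does work only when a spike arrives at it.

GRID = 4                                 # 4x4 grid here; TrueNorth uses 64x64
THRESHOLD = 1.0
potential = defaultdict(float)           # (core, neuron) -> membrane potential

def route(core, neuron):
    """Illustrative routing rule: forward spikes to the next core in the row."""
    x, y = core
    return ((x, (y + 1) % GRID), neuron)

# Seed the system with two input spikes aimed at neuron 0 of core (0, 0).
events = deque([(((0, 0), 0), 0.6), (((0, 0), 0), 0.6)])

delivered = 0
while events and delivered < 20:         # cap the demo so the cascade stops
    (core, neuron), weight = events.popleft()
    delivered += 1
    potential[(core, neuron)] += weight  # purely local update, no shared bus
    if potential[(core, neuron)] >= THRESHOLD:
        potential[(core, neuron)] = 0.0  # fire, reset, then fan out two spikes
        target = route(core, neuron)
        events.extend([(target, 0.6), (target, 0.6)])

print(f"spikes delivered: {delivered}, cores touched: {len(potential)}")
```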


In terms of applicability, the TrueNorth chip is currently being used in machine learning. The chips can be arranged in groups to form even more capable units: a group of 16 TrueNorth chips, called the NS16e, is being used for image recognition. TrueNorth is capable of monitoring dozens of TV feeds and classifies roughly 6,000 images per watt, far greater than the 160 images per watt of NVIDIA's Tesla P4 GPU (based on the traditional architecture), about 37 times the energy efficiency.


Neurogrid by Stanford

Neurogrid is a system built to simulate a million neurons in real time, imitating the human brain as it runs. It consists of one million neurons and six billion synapses and uses analog computation to emulate ion-channel activity. The inspiration behind Neurogrid was the fact that the human brain functions on just 20 watts of power despite containing around 100 billion neurons.


The engineers behind Neurogrid took the same transistors used in power-hungry supercomputers but, instead of using them as digital logic, operated them as analog circuits, designed so that voltage levels on the chip corresponded to voltage levels inside a cell. The end result was a board that runs on less than 3 watts of power. The board consists of 16 Neurocore chips laid out in a tree network, together providing one million neurons.


Neurogrid's applicability ranges over a wide spectrum. The software on the board is not restricted to simulation: the grid can be used by neuroscientists to understand or simulate the brain, by engineers interested in using spiking neurons for interesting applications, and it could have phenomenal uses in the healthcare industry.

It could prove clinically viable for people who have lost control of a limb, since the damaged area can be bypassed by taking signals directly from the brain to control a robotic limb. The low power draw of this neuromorphic chipset would even allow it to be implanted in the human body.

The Human Brain Project (HBP) and its Aftermath – SpiNNaker and BrainScaleS-2

The project started back in 2013 and employs more than 500 scientists and engineers to facilitate research on the human brain. It focuses on different aspects such as neuroscience, medicine, computing, and technologies inspired by the working and efficiency of the brain. Given these aims, the project could hardly avoid research and development on neuromorphic computing systems.


Under the same project, a team at the University of Manchester has developed SpiNNaker (Spiking Neural Network Architecture), a neuromorphic supercomputer with a million processor cores.

Work on early models of the system started as far back as 2006. While the other projects mentioned above looked at finding new materials and changing traditional ones, SpiNNaker uses conventional components, cores and routers, connected and communicating in innovative ways. Researchers have shown that SpiNNaker can simulate the behavior of the human cortex. Its primary purpose is to help neuroscientists analyze the workings of the human brain by running extremely large-scale, complex real-time simulations. It could also help determine the specifics of diseases such as Alzheimer's and Parkinson's, potentially leading to major breakthroughs.
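Researchers typically describe such simulations in the PyNN modelling language, which SpiNNaker's software stack (sPyNNaker) accepts as a front end. The sketch below shows roughly what a SpiNNaker-hosted model looks like, assuming the sPyNNaker package is installed; the population sizes and parameters are placeholders rather than a real cortical model.

```python
# Minimal PyNN script targeting SpiNNaker via the sPyNNaker backend.
# Sizes and parameters are placeholders for illustration, not a cortex model.
import pyNN.spiNNaker as sim

sim.setup(timestep=1.0)                       # 1 ms resolution

# A small population of leaky integrate-and-fire cells driven by Poisson noise.
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))
cells = sim.Population(100, sim.IF_curr_exp())

sim.Projection(noise, cells,
               sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

cells.record(["spikes"])
sim.run(1000.0)                               # simulate one second

spikes = cells.get_data("spikes")             # Neo structure with spike trains
sim.end()
```

The same script, with a different import line, runs on other PyNN backends, which is part of why neuroscientists can move models onto SpiNNaker with little friction.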


BrainScaleS has similar aims to SpiNNaker, but uses a mixture of traditional digital and analog circuitry, in combination with an operating software and programming language of its own, to achieve its computational power. Both of the above systems are currently funded by the Human Brain Project.


Towards Commercially Viable Neuromorphic Chipsets – Intel's Loihi and Its Implementations


The previous chapter surveyed recent developments in neuromorphic chipsets and showed how close many of them are to everyday application. But one chip has already come closer to commercialization and is being made available to researchers worldwide to explore its uses and its compatibility with real-life scenarios.

Intel's self-learning neuromorphic research chip, Loihi, was first unveiled in November 2017. Among contemporary chipsets based on artificial neural networks, it is the most advanced and ready to use. The primary aim behind the chip was to give researchers a platform for implementing Spiking Neural Networks. Loihi has a 128-core design built to suit SNN functionality and is fabricated on 14 nm process technology. The SNN mechanism helps the system become faster as well as smarter: unlike conventional AI and ML models, these chips do not need to be trained offline; they become smarter as they are used and exposed to stimuli.
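Loihi's on-chip learning works through programmable synaptic plasticity rules. A classic rule of this family is spike-timing-dependent plasticity (STDP): a synapse strengthens when the input spike precedes the output spike, and weakens otherwise. The plain-Python sketch below illustrates generic pair-based STDP; it is not Intel's implementation or API, and all parameter values are arbitrary.

```python
import math

# Generic pair-based STDP rule (illustrative parameters, not Loihi's API).
# If the presynaptic neuron fires shortly *before* the postsynaptic one, the
# synapse strengthens (it helped cause the spike); if it fires after, the
# synapse weakens. Learning thus happens locally, spike by spike.

A_PLUS, A_MINUS = 0.05, 0.055    # learning rates: potentiation / depression
TAU = 20.0                       # time constant (ms) of the learning window

def stdp_update(weight, t_pre, t_post):
    """Return the new weight given one pre/post spike-time pair (ms)."""
    dt = t_post - t_pre
    if dt > 0:                                   # pre before post: strengthen
        weight += A_PLUS * math.exp(-dt / TAU)
    else:                                        # post before pre: weaken
        weight -= A_MINUS * math.exp(dt / TAU)
    return max(0.0, min(1.0, weight))            # keep the weight bounded

w = 0.5
for t_pre, t_post in [(10, 12), (30, 31), (50, 48)]:  # two causal pairs, one not
    w = stdp_update(w, t_pre, t_post)
    print(f"pre={t_pre} post={t_post} -> w={w:.3f}")
```

Because every update depends only on the two neurons a synapse connects, this kind of rule fits naturally into hardware where memory sits next to the processing.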

A single chip houses 130,000 artificial neurons and more than 130 million synapses (an average fan-out of roughly a thousand synapses per neuron) across over 2 billion transistors, and each neuron is able to connect with thousands of others. The chips can also be used in combination for more complex calculations. Another notable aspect is their high power efficiency and low latency. Loihi systems, though limited in number as of now, are being provided through a cloud platform so researchers can test their limits and come up with game-changing applications. By making the chipset widely available to the research community, Intel has seen it prove useful in a variety of situations, making its huge potential evident. The chip's performance stands out in "voice command recognition, gesture recognition, image retrieval, search functionality and robotics; all interesting areas dealing with the next generation AI technology."

Intel Labs established the Intel Neuromorphic Research Community (INRC), comprising governmental as well as private organizations that have access to Intel's Loihi hardware and funding for projects in neuromorphic technology.

Thanks to this wide access for key research groups, engineers from Intel Labs and Cornell University created a setup through which Loihi was able to detect as well as remember the smell of hazardous chemicals in the presence of "significant noise and occlusion." The chip was far more accurate than traditional approaches, which require something like 3,000 times as many training samples; Loihi identified and memorized the distinct odors of 10 harmful chemicals after only a single exposure each.

The chip was trained on a dataset recording the activity of 72 chemical sensors in response to the odors, and learned to recognize each smell by associating and remembering the neural activity pattern it evoked.

Another notable application of the chip is the Pohoiki Springs neuromorphic system. The system is remarkable in that it comprises 768 Loihi chips arranged on twenty-four "Arria10 FPGA Nahuku" expansion boards, each carrying a set of 32 chips. The rack-mounted system connects 100 million neurons (768 × 130,000 ≈ 100 million), more neurons than a Nile crocodile has and roughly as many as a fruit bat or a mole rat. The system is to be made available to INRC members to tackle the most complex problems.

Earlier, Kapoho Bay, Intel's most compact neuromorphic system, made up of only two Loihi chips and housing 262,000 neurons, displayed exceptional performance in identifying Braille when paired with artificial skin and in recognizing direction and odor patterns, all while operating on about 10 milliwatts of power.

Accenture, a member of Intel's INRC, is experimenting with Loihi's voice-recognition capabilities for self-learning AI models; the processor is being exposed to open-source voice recordings of basic instructions like on/off configuration commands. The University of California, another INRC member, is testing the limits of gesture control and recognition. The research has shown that, because Loihi is based on neuromorphic computing, it does not require the storage of, and exposure to, a large dataset; it can instead learn along the way as it encounters real-world use.

This plethora of potential uses helps ensure a more natural communication between humans and machines, since Loihi does away with rigid commands and allows a more flexible ecosystem with cognitive abilities like those of a human brain. The technology is still under development, and it will be exciting to see what the coming years of advancement produce.


Conclusion


The von Neumann architecture for processors and computing units started the rapid technological growth that has driven innovation throughout the years. As tasks became more complex and time more limited, key changes were introduced to the arrangement of components on a von Neumann chip. The advent of AI and machine learning demanded more robust systems, capable not only of handling a huge volume of calculations but also of storing equally large datasets, all while remaining energy efficient and minimizing power consumption.

The pressing demands of this budding technology led researchers to explore alternatives to the traditional von Neumann architecture. The inspiration came from none other than the greatest computing machine of all, the human brain, ushering in the era of neuromorphic computing. In this paper, we tried to explain what exactly neuromorphic chips are and how they might carry Moore's law into the coming generation. We discussed, in brief, the products of years of research at distinct institutions and companies, and how they could lead to a dynamic future, offering a more fluid experience with the technology surrounding us.


References


  1. Intel. n.d. Neuromorphic Computing - Next Generation of AI. [online] Available at: <https://www.intel.in/content/www/in/en/research/neuromorphic-computing.html> [Accessed 20 April 2021].

  2. Best, J., 2020. What is neuromorphic computing? Everything you need to know about how it is changing the future of computing | ZDNet. [online] ZDNet. Available at: <https://www.zdnet.com/article/what-is-neuromorphic-computing-everything-you-need-to-know-about-how-it-will-change-the-future-of-computing/> [Accessed 20 April 2021].

  3. Winkle, W., 2020. Neuromorphic computing: The long path from roots to real life. [online] VentureBeat. Available at: <https://venturebeat.com/2020/12/15/neuromorphic-computing-the-long-path-from-roots-to-real-life/> [Accessed 20 April 2021].

  4. Osborne, C., 2020. Intel, partners make new strides in Loihi neuromorphic computing chip development | ZDNet. [online] ZDNet. Available at: <https://www.zdnet.com/article/intel-releases-new-benchmarks-progress-on-loihi-neuromorphic-computing-chip/> [Accessed 20 April 2021].

  5. Seeker. (2018). Neuromorphic Computing Is a Big Deal for A.I., But What Is It?. Elements. [Online Video]. Available at: <https://youtu.be/TetLY4gPDpo> [Accessed: 21 April 2021].

  6. Stanford. (2014). Stanford engineer creates circuit board that mimics the human brain. [Online Video]. Available at: <https://youtu.be/D3T1tiVcRDs> [Accessed: 21 April 2021].

  7. manchester.ac.uk. 2021. 'Human brain' supercomputer with 1 million processors switched on for first time. [online] Available at: <https://www.manchester.ac.uk/discover/news/human-brain-supercomputer-with-1million-processors-switched-on-for-first-time/> [Accessed 21 April 2021].

  8. Chu, J., 2020. Engineers put tens of thousands of artificial brain synapses on a single chip. [online] MIT News | Massachusetts Institute of Technology. Available at: <https://news.mit.edu/2020/thousands-artificial-brain-synapses-single-chip-0608> [Accessed 21 April 2021].

  9. Modha, D., n.d. IBM Research: Brain-inspired Chip. [online] Research.ibm.com. Available at: <https://www.research.ibm.com/articles/brain-chip.shtml> [Accessed 21 April 2021].

  10. HT Tech. 2020. Computers that can smell? Intel’s new chip can sniff hazardous chemicals. [online] Available at: <https://tech.hindustantimes.com/tech/news/computers-than-can-smell-intel-s-new-chip-can-sniff-hazardous-chemicals-story-MEjQ9GD6s3eP7xnrvyA5rK.html> [Accessed 21 April 2021].

  11. Wiggers, K., 2020. Intel debuts Pohoiki Springs, a powerful neuromorphic research system for AI workloads. [online] venturebeat.com. Available at: <https://venturebeat.com/2020/03/18/intel-debuts-pohoiki-springs-a-powerful-neuromorphic-research-system-for-ai-workloads/> [Accessed 22 April 2021].

The Indian Society of Artificial Intelligence and Law is a technology law think tank founded by Abhivardhan in 2018. Our mission as a non-profit industry body for the analytics & AI industry in India is to promote responsible development of artificial intelligence and its standardisation in India.

 

Since 2022, the research operations of the Society have been subsumed under VLiGTA® by Indic Pacific Legal Research.

ISAIL has supported two independent journals, namely - the Indic Journal of International Law and the Indian Journal of Artificial Intelligence and Law. It also supports an independent media and podcast initiative - The Bharat Pacific.
