
Author: Tronserve admin

Monday 2nd August 2021 10:44 PM

Intel’s Neuromorphic System Hits 8 Million Neurons, 100 Million Coming by 2020



At the DARPA Electronics Resurgence Initiative Summit today in Detroit, Intel plans to unveil an 8-million-neuron neuromorphic system comprising 64 Loihi research chips, codenamed Pohoiki Beach. Loihi chips are built with an architecture that more closely matches the way the brain works than do chips designed for deep learning or other forms of AI. For the set of problems that such “spiking neural networks” are particularly good at, Loihi is about 1,000 times as fast as a CPU and 10,000 times as energy efficient. The new 64-Loihi system represents the equivalent of 8 million neurons, but that’s just a step toward a 768-chip, 100-million-neuron system that the company plans for the end of 2019.
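Loihi’s silicon neurons communicate with discrete spikes rather than the continuous activations of deep-learning chips. As a rough illustration of the idea (not Intel’s actual neuron model, whose parameters are configurable on-chip), a leaky integrate-and-fire neuron can be sketched in a few lines; all parameter values here are made up for illustration:

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Simulate one leaky integrate-and-fire (LIF) neuron.

    The membrane potential leaks toward zero each step, accumulates
    input current, and emits a spike (then resets) when it crosses
    the threshold -- the basic behaviour a neuromorphic chip
    implements in silicon.
    """
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration of input
        if v >= threshold:        # threshold crossing -> spike
            spikes.append(1)
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

# A constant drive of 0.3 charges the membrane until it fires periodically.
print(simulate_lif([0.3] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Because information is carried only by these sparse spike events, the chip burns energy only when neurons actually fire, which is where the efficiency gains come from.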

 

Intel and its research partners are just beginning to test what large neural systems like Pohoiki Beach can do, but so far the evidence points to greater performance and efficiency, says Mike Davies, director of neuromorphic research at Intel.

 

“We’re swiftly gathering results and data that there are definite benefits… commonly in the domain of efficiency. Virtually every one that we benchmark…we find important gains in this architecture,” he says.

 

Going from a single Loihi to 64 of them is more of a software issue than a hardware one. “We made scalability into the Loihi chip from the beginning,” says Davies. “The chip has a hierarchical routing interface…which allows us to scale to up to 16,000 chips. So 64 is just the next step.”

 

Finding algorithms that run well on an 8-million-neuron system and optimizing those algorithms in software is a considerable effort, he says. Still, the payoff could be huge. Neural networks that are more brain-like, such as those Loihi runs, could be immune to some of artificial intelligence’s—for lack of a better word—dumbness.

 

For example, today’s neural networks suffer from something called catastrophic forgetting. If you tried to teach a trained neural network to recognize something new—a new road sign, say—by simply exposing the network to the new input, it would disrupt the network so badly that it would become terrible at recognizing anything. To avoid this, you have to completely retrain the network from the ground up. (DARPA’s Lifelong Learning Machines, or L2M, program is dedicated to solving this problem.)
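The failure mode is easy to reproduce with a conventional network. The toy below (an illustration with made-up tasks, nothing to do with Loihi itself) trains a tiny logistic classifier on one task, then on a conflicting task, and shows the first task being overwritten:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, lr=0.5, epochs=200):
    """Full-batch logistic-regression gradient descent; returns new weights."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w = w - lr * X.T @ (p - y) / len(y)    # gradient step
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == y))

# Task A labels points by x0 > 0; task B uses the opposite rule.
X = rng.normal(size=(200, 2))
y_a = X[:, 0] > 0
y_b = X[:, 0] < 0

w = np.zeros(2)
w = train(w, X, y_a)                 # learn task A
acc_a_before = accuracy(w, X, y_a)   # near 1.0

w = train(w, X, y_b)                 # then learn task B, with no rehearsal of A
acc_a_after = accuracy(w, X, y_a)    # collapses: task A has been overwritten
```

Because the second round of training reuses the same weights with no memory of the first task, the gradient updates that fit task B erase what was learned for task A.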

 

(Here’s my favorite analogy: Say you coached a basketball team, and you raised the net by 30 centimeters while nobody was looking. The players would miss a bunch at first, but they’d figure things out quickly. If those players were like today’s neural networks, you’d have to pull them off the court and teach them the entire game over again—dribbling, passing, everything.)

 

Loihi can run networks that might be immune to catastrophic forgetting, meaning it learns a bit more like a human. In fact, research from a collaboration with Thomas Cleland’s group at Cornell University shows that Loihi can achieve what’s called one-shot learning. That is, learning a new feature after being exposed to it only once. The Cornell group demonstrated this by abstracting a model of the olfactory system so that it would run on Loihi. When exposed to a new virtual scent, the system not only didn’t catastrophically forget everything else it had smelled, it learned to recognize the new scent from that single exposure.
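One-shot learning itself is easy to state in code. The sketch below is not the Cornell olfactory model, just a minimal nearest-prototype illustration with invented “scent” vectors: storing a single new exemplar adds a class without disturbing the ones already learned:

```python
import numpy as np

def one_shot_classifier(prototypes):
    """Classify by nearest stored exemplar, one exemplar per class.

    A minimal stand-in for one-shot learning: a new class is learned
    by storing a single example, and existing classes are untouched.
    """
    def classify(x):
        labels = list(prototypes)
        dists = [np.linalg.norm(x - prototypes[label]) for label in labels]
        return labels[int(np.argmin(dists))]
    return classify

# "Scents" as feature vectors; each class learned from one example.
prototypes = {"rose": np.array([1.0, 0.0]), "smoke": np.array([0.0, 1.0])}
classify = one_shot_classifier(prototypes)

# One-shot: a single exposure to a new scent adds a class.
prototypes["coffee"] = np.array([1.0, 1.0])

print(classify(np.array([0.9, 1.1])))   # -> coffee (the new class)
print(classify(np.array([1.1, 0.1])))   # -> rose (old classes intact)
```

The contrast with the catastrophic-forgetting example is the point: here, learning something new is an addition to memory rather than an overwrite of shared weights.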

 

Loihi might also be able to run feature-extraction algorithms that are immune to the kinds of adversarial attacks that befuddle today’s image recognition systems. Traditional neural networks don’t really understand the features they’re extracting from an image in the way our brains do. “They can be fooled with simplistic attacks like changing individual pixels or adding a screen of noise that wouldn’t fool a human in any way,” Davies explains. But the sparse-coding algorithms Loihi can run work more like the human visual system and so wouldn’t fall for such shenanigans. (Disturbingly, humans are not totally immune to such attacks.)
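As a rough, non-spiking illustration of the sparse-coding idea (Loihi runs spiking variants of this, not the loop below), the classic iterative soft-thresholding algorithm finds a code that explains a signal using as few dictionary features as possible:

```python
import numpy as np

def ista(D, x, lam=0.1, steps=500):
    """Iterative soft-thresholding (ISTA): find a sparse code a with x ≈ D @ a.

    Minimizes ||x - D a||^2 / 2 + lam * ||a||_1, so most entries of a
    are driven exactly to zero -- the signal is explained by few atoms.
    """
    L = np.linalg.norm(D, 2) ** 2              # step size from the spectral norm
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ a - x)               # gradient of the least-squares fit
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Dictionary: three axis-aligned atoms plus one diagonal atom.
D = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])
D = D / np.linalg.norm(D, axis=0)              # unit-norm columns
x = 2.0 * D[:, 3]                              # signal built from the diagonal atom alone

a = ista(D, x)
print(a)  # nearly all weight lands on atom 3; the code is sparse
```

A representation built from a handful of meaningful features, rather than a dense tangle of pixel-level weights, is much harder to nudge with a few perturbed pixels.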

 

Researchers have also been using Loihi to augment real-time control for robotic systems. For example, last week at the Telluride Neuromorphic Cognition Engineering Workshop—an event Davies called “summer camp for neuromorphics nerds”—researchers were hard at work using a Loihi-based system to control a foosball table. “It strikes people as crazy,” he says. “But it’s a nice illustration of neuromorphic technology. It’s fast, requires quick response, quick planning, and anticipation. These are what neuromorphic chips are good at.”



This article was originally posted on IEEESpectrum.com.

