
Author: Tronserve admin

Monday 2nd August 2021 01:15 AM

Nvidia Chip Takes Deep Learning to the Extremes



There is no doubt that GPU-powerhouse Nvidia would like to have a solution for all size scales of AI — from enormous data center jobs down to the always-on, low-power neural networks that listen for wakeup words in voice assistants.


At the moment, that would take several different technologies, because none of them scales up or down very well. It would clearly be preferable to deploy one technology instead of many. So, according to Nvidia chief scientist Bill Dally, the company has been seeking to answer the question: “Can you build something scalable... while still maintaining competitive performance-per-watt across the entire spectrum?”


It seems the answer is yes. Last month at the VLSI Symposia in Kyoto, Nvidia detailed a small test chip that can work on its own to handle low-end jobs or be linked tightly with up to 36 of its kin in a single module to do deep learning’s heavy lifting. And it does it all while achieving roughly the same top-class performance per watt.


The individual accelerator chip is designed to perform the inference side of deep learning rather than the training part. Engineers typically measure the performance of such “inferencing” chips in terms of how many operations they can do per joule of energy or per millimeter of chip area. A single one of Nvidia’s prototype chips peaks at 4.01 tera-operations per second (4.01 trillion operations per second) and 1.29 TOPS per square millimeter. Compared with previous prototypes from other groups using the same precision, the single chip was at least 16 times as area efficient and 1.7 times as energy efficient. Linked together into a 36-chip system, it reached 127.8 TOPS. That’s a roughly 32-fold performance boost.
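
As a quick sanity check on those figures, here is a minimal back-of-the-envelope calculation in Python. The raw numbers come from the article; comparing against perfect linear scaling is our own illustrative assumption, not a metric Nvidia published.

```python
# Back-of-the-envelope check of the scaling figures quoted above.
# Numbers are from the article; the "ideal linear scaling" baseline
# is our own illustrative assumption.

single_chip_tops = 4.01   # peak tera-operations per second, one chip
module_tops = 127.8       # peak TOPS for the 36-chip module
num_chips = 36

speedup = module_tops / single_chip_tops
efficiency = speedup / num_chips  # fraction of ideal linear scaling

print(f"Speedup over one chip: {speedup:.1f}x")   # ~31.9x, the "32-fold" boost
print(f"Scaling efficiency:    {efficiency:.0%}") # ~89% of perfect linear scaling
```

The interesting takeaway is the second number: the 36-chip module retains nearly nine-tenths of ideal linear scaling, which is what makes the “one technology at every scale” argument credible.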


Companies have mostly been tuning their technologies to suit their particular niches. For example, the Irvine, Calif.-based startup Syntiant uses analog processing in flash memory to boost performance for very-low-power, low-demand applications, while Google’s original tensor processing unit’s powers would be wasted on anything other than the data center’s high-performance, high-power environment.


With this research, Nvidia is attempting to demonstrate that one technology can operate well in all those situations. Or at least it can if the chips are linked together with Nvidia’s mesh network in a multichip module. These modules are basically small printed circuit boards or slivers of silicon that hold several chips in a way that lets them be treated as one large IC. They are becoming increasingly popular because they allow systems composed of a set of smaller chips—often called chiplets—instead of a single larger and more expensive chip.
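
To make the “treated as one large IC” idea concrete, here is a toy sketch in Python. The class names and the ideal linear aggregation are illustrative assumptions on our part and do not reflect Nvidia’s actual architecture.

```python
# Toy model of a multichip module that presents many chiplets as a
# single logical accelerator. Purely illustrative; the names and the
# ideal linear aggregation are assumptions, not Nvidia's design.

from dataclasses import dataclass

@dataclass
class Chiplet:
    peak_tops: float = 4.01  # single-chip peak from the article

class MultiChipModule:
    """A mesh of chiplets exposed to software as one big accelerator."""

    def __init__(self, num_chiplets: int):
        self.chiplets = [Chiplet() for _ in range(num_chiplets)]

    @property
    def peak_tops(self) -> float:
        # Ideal aggregation; the measured module reached 127.8 TOPS
        # against this ~144.4 TOPS ceiling.
        return sum(c.peak_tops for c in self.chiplets)

module = MultiChipModule(36)
print(f"Ideal aggregate peak: {module.peak_tops:.1f} TOPS")  # ~144.4
```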


“The multichip module option has a lot of advantages not just for future scalable [deep learning] accelerators but for building versions of our products that have accelerators for different functions,” explains Dally.


Key to the Nvidia multichip module’s ability to bind together the new deep learning chips is an interchip network that uses a technology called ground-referenced signaling (GRS). As its name suggests, GRS uses the difference between a voltage signal on a wire and a common ground to transfer data, while avoiding many of the known pitfalls of that approach. It can transmit 25 gigabits per second over a single wire, whereas most technologies would need a pair of wires to reach that speed. Using single wires boosts how much data can stream off each millimeter of the edge of the chip to a whopping terabit per second. What’s more, GRS’s power consumption is a mere picojoule per bit.
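
Those figures imply some simple arithmetic worth spelling out. The wires-per-millimeter count below is our own inference from the quoted per-wire rate and edge bandwidth, not a figure Nvidia published.

```python
# Arithmetic implied by the GRS figures in the article. The wire count
# per millimeter of edge is inferred by us, not stated by Nvidia.

per_wire_gbps = 25.0       # Gb/s over a single GRS wire
edge_tbps = 1.0            # Tb/s per mm of chip edge (article figure)
energy_pj_per_bit = 1.0    # pJ/bit (article figure)

wires_per_mm = edge_tbps * 1000 / per_wire_gbps
print(f"Implied wires per mm of edge: {wires_per_mm:.0f}")  # ~40

# At 1 pJ/bit, streaming a full terabit per second costs:
# 1e12 bit/s * 1e-12 J/bit = 1 W per mm of edge at peak.
watts_per_mm = edge_tbps * 1e12 * energy_pj_per_bit * 1e-12
print(f"Link power at full rate: {watts_per_mm:.1f} W per mm of edge")
```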


“It’s a technology that we developed to basically give the option of building multichip modules on an organic substrate, as opposed to on a silicon interposer, which is much more expensive technology,” says Dally.


The accelerator chip presented at VLSI is hardly Nvidia’s last word on AI. Dally says his team has already completed a version that doubles this chip’s TOPS/W. “We believe we can do better than that,” he says. His team aims to find inference-accelerating techniques that blow past the VLSI prototype’s 9.09 TOPS/W and reach 200 TOPS/W while still being scalable.
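
For scale, closing the gap between the prototype and that goal would take roughly a 22-fold efficiency improvement. The figures are from the article; the ratio is our own arithmetic.

```python
# Efficiency gap between the VLSI prototype and the stated goal.

prototype_tops_per_w = 9.09  # VLSI prototype (article figure)
target_tops_per_w = 200.0    # the team's stated goal (article figure)

print(f"Required improvement: {target_tops_per_w / prototype_tops_per_w:.0f}x")  # ~22x
```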


Source: IEEE Spectrum



Tags:
nvidia chip