Nvidia’s GPU-powered AI is creating chips with ‘better-than-human design’

Nvidia was quick to jump on the artificial intelligence bandwagon, with many of its consumer-facing technologies, such as Deep Learning Super Sampling (DLSS) and AI-accelerated denoising, exemplifying this. However, it has also found plenty of uses for AI in its silicon development process and, as Nvidia chief scientist Bill Dally explained at a GTC conference, even in designing new hardware.

Dally describes some of the use cases for AI in his own process of developing the latest and greatest graphics cards (among other things), as noted by HPCwire.

“It’s natural as an AI expert to want to take that AI and use it to design better chips,” says Dally.

“We do this in two different ways. The first and most obvious is to use the computer-aided design tools that we have. For example, we have one that maps where power is used in our GPUs, and predicts how much the grid voltage drops – what’s called an IR drop, for current times resistance. Running this on a conventional CAD tool takes three hours.”

“…what we’d like to do instead is train an AI model on the same data; we do that across multiple projects, and then we can basically feed in the power map. The inference time is just three seconds. Sure, it’s 18 minutes if you include the time for feature extraction.

“…we were able to get very accurate power estimates much faster than with conventional tools and in a small fraction of the time,” continues Dally.
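
To make that concrete, the task Dally describes is learning a mapping from a per-tile power map of the die to a per-tile voltage-drop (IR drop, V = I × R) map. Nvidia hasn't published its model, so the sketch below is purely illustrative: the tile grid size, layer widths, and the `power_map` input are assumptions for the example, not details of the internal tool.

```python
# Illustrative sketch only: a tiny convolutional regressor that maps a
# per-tile power map of a die to a predicted per-tile IR drop (V = I * R).
# Grid size, layer widths and input shapes are assumptions for this example,
# not details of Nvidia's actual tool.
import torch
import torch.nn as nn

class IRDropPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # local power features
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),  # neighbourhood context
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),              # per-tile voltage drop
        )

    def forward(self, power_map):
        # power_map: (batch, 1, H, W) watts per tile -> predicted IR drop in volts
        return self.net(power_map)

model = IRDropPredictor()
power_map = torch.rand(1, 1, 64, 64)   # hypothetical 64x64 tile power map
predicted_drop = model(power_map)      # inference runs in seconds, not hours
print(predicted_drop.shape)            # torch.Size([1, 1, 64, 64])
```

Once a model like this is trained on results from the slow CAD flow, designers can get a near-instant estimate of where the power grid will sag, which is exactly the speed-up Dally is pointing at.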

Dally mentions other ways AI can be useful for developing next-generation chips. One is predicting parasitics, which are essentially unwanted electrical elements in components or designs that hurt efficiency or simply cause something not to work as intended. Instead of spending hours of human labor scoping them out, you can cut the number of steps required in circuit design by having an AI do it. Sort of like a digital parasitics sniffer dog.
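
In practice this kind of prediction is usually framed as straightforward regression: given rough geometric features of a net, estimate its parasitic capacitance before a full extraction run. The feature names, units, numbers, and model choice below are illustrative assumptions, not Dally's actual setup.

```python
# Illustrative sketch: estimating a net's parasitic capacitance from rough
# layout features, so designers get early feedback without a full extraction.
# Features, units and training values here are made up for the example.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training data: [wire_length_um, via_count, min_spacing_nm]
X_train = np.array([
    [120.0, 4, 48.0],
    [300.0, 9, 36.0],
    [ 45.0, 1, 64.0],
    [210.0, 6, 40.0],
])
y_train = np.array([3.1, 8.7, 1.2, 5.9])   # extracted capacitance in fF

model = GradientBoostingRegressor(n_estimators=100)
model.fit(X_train, y_train)

new_net = np.array([[180.0, 5, 44.0]])
print(model.predict(new_net))  # quick parasitic estimate, no full extraction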

Furthermore, Dally explains that crucial choices in Nvidia’s chip layout design can also be aided by AI. Think of this job as avoiding traffic jams, only with transistors, and you wouldn’t be far off. AI may have a future here in simply pre-warning designers of where these traffic jams are likely to occur, which can save a lot of time in the long run.
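
The “traffic jam” framing maps onto what layout tools call routing congestion: for each tile of the chip, compare the wiring demand passing through it with the capacity it can supply. The minimal, non-AI check below just shows what a pre-warning looks like; a learned model would predict the demand grid ahead of time, and all the numbers here are invented for illustration.

```python
# Minimal illustration of routing congestion: demand vs. capacity per tile.
# A learned model would predict the demand grid in advance; these numbers
# are simply made up to show what "pre-warning a designer" means.
import numpy as np

capacity = np.full((4, 4), 10)            # routing tracks available per tile
demand = np.array([
    [ 3,  5, 12,  4],
    [ 6, 14,  9,  2],
    [ 4,  7,  8,  3],
    [ 2,  3,  5,  1],
])

congestion = demand / capacity            # >1.0 means a likely "traffic jam"
hotspots = np.argwhere(congestion > 1.0)  # tiles a designer should rework
print(hotspots)                           # e.g. [[0 2] [1 1]]
```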

Nvidia’s latest Hopper H100 GPU. (Image credit: Nvidia)

Perhaps the most interesting of all the use cases Dally describes is automating the migration of standard cells. OK, that may not sound all that interesting, but it actually is. Essentially, it’s a way to automate the process of moving a cell – a fundamental building block of a computer chip – over to a newer process node.

So this is like an Atari video game, but it’s a video game for correcting design rule errors in a standard cell.

Bill Dally, Nvidia

“So every time we have a new technology, let’s say we’re going from a seven nanometer technology to a five nanometer technology, we have a library of cells. A cell is something like an AND gate, an OR gate, a full adder. In fact, we have many thousands of these cells that need to be redesigned in the new technology with a very complex set of design rules,” says Dally.

“We basically do this by using reinforcement learning to place the transistors. But more importantly, once they are placed there are usually a bunch of design rule errors, and fixing them is almost like a video game. That’s something reinforcement learning is good at – one of the great examples is the use of reinforcement learning for Atari video games. So this is like an Atari video game, but it’s a video game for correcting design rule errors in a standard cell. By going through and correcting those design rule errors with reinforcement learning, we can basically complete the design of our standard cells.”
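
Dally’s Atari analogy maps neatly onto a reinforcement-learning loop: the “screen” is the cell layout, the “moves” nudge transistors around, and the “score” goes up whenever a design rule violation disappears. The toy environment below only sketches that shape; the state, actions, reward, and stand-in violation check are illustrative assumptions, not NVCell’s real formulation.

```python
# Toy sketch of the "video game" framing: an RL-style environment where each
# step nudges a transistor and the reward is the drop in design rule
# violations. The violation counter is a stand-in, not a real DRC checker.
import random

class StandardCellEnv:
    def __init__(self, num_transistors=8):
        # State: an x-position per transistor on a small placement grid.
        self.positions = [random.randint(0, 15) for _ in range(num_transistors)]

    def count_violations(self):
        # Stand-in rule: two transistors closer than 2 tracks "violate" spacing.
        xs = sorted(self.positions)
        return sum(1 for a, b in zip(xs, xs[1:]) if b - a < 2)

    def step(self, transistor, delta):
        before = self.count_violations()
        self.positions[transistor] = max(0, min(15, self.positions[transistor] + delta))
        after = self.count_violations()
        reward = before - after           # positive when a violation is fixed
        done = after == 0                 # "level cleared": the cell is rule-clean
        return self.positions, reward, done

env = StandardCellEnv()
state, reward, done = env.step(transistor=3, delta=+2)
print(reward, done)
```

A trained agent would call `step` repeatedly, learning which nudges clear violations fastest, which is the “playing the game” part of Dally’s description.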

The tool Nvidia uses for this automated cell migration is called NVCell, and reportedly 92% of the cell library can be migrated using it without errors. What’s more, 12% of those cells turned out smaller than the human-designed versions.

“This does two things for us. One is a huge labor saving. It’s a group of about 10 people that will take almost a year to port a new technology library. Now we can do that with a couple of GPUs running for a few days. Then the humans can work on the 8% of cells that didn’t get done automatically. And in many cases, we also end up with a better design. So it’s labor-saving and better-than-human design.”

So Nvidia is using AI accelerated by its own GPUs to speed up GPU development. Nice. And of course most of these developments will be useful in any form of chip manufacturing, not just GPUs.

It’s a smart use of time for Nvidia: developing these AI tools for its own work not only speeds up its own processes, but also lets it better sell the benefits of AI to its customers, the very people it provides GPUs to for accelerating AI. So I imagine Nvidia sees this as a win-win scenario.

You can check out the full talk from Dally on the Nvidia website, although you will need to sign up for the Nvidia Developer Program to do so.
