Nvidia to reveal Grace and Hopper architectural details at Hot Chips


Nvidia engineers are delivering four technical presentations at next week's Hot Chips virtual conference, covering the Grace central processing unit (CPU), the Hopper graphics processing unit (GPU), the Orin system on chip (SoC) and the NVLink network switch.

Together, they represent the company's plan to build high-end data center infrastructure with a full stack of chips, hardware, and software.

The presentations will share new details about Nvidia's platforms for AI, edge computing, and high-performance computing, said Dave Salvator, director of product marketing for AI Inference, Benchmarking, and Cloud at Nvidia, in an interview with VentureBeat.

If there is a common thread in the talks, it is how widely accelerated computing has been adopted in recent years in the design of modern data centers and systems at the network edge, Salvator said. CPUs are no longer expected to do all the heavy lifting themselves.


As for Hot Chips, Salvator said, "Historically it's a show where architects get together with architects to have a collegial environment, even though they're competitors. In years past, the show has tended to be a bit CPU-centric with the occasional GPU, but I think the interesting trend, especially looking at the advance program that's already been posted on the Hot Chips website, is that you see a lot more accelerators. That's certainly from us, but also from others. And I think that's just an acknowledgment that these accelerators are absolute game changers for the data center. That's a macro trend that I think we've seen."

He added: "I would say I think we've probably made the most significant progress in that regard. It's a combination of things, right? It's not just GPUs that are good at something. It's a huge amount of concerted work that we've done, really over a decade, to get to where we are today."

At the virtual Hot Chips event (normally held on college campuses in Silicon Valley), Nvidia will address the annual gathering of processor and system architects. Presenters will disclose performance figures and other technical details for Nvidia's first server CPU, the Hopper GPU, the latest version of the NVSwitch interconnect chip, and the Nvidia Jetson Orin system on module (SoM).


The presentations provide new insights into how the Nvidia platform will achieve new levels of performance, efficiency, scalability, and security.

Specifically, the discussions demonstrate a design philosophy of innovating across the full stack of chips, systems, and software where GPUs, CPUs, and DPUs act as peer processors, Salvator said. Together, they are creating a platform that is already running artificial intelligence, data analytics, and high-performance computing at cloud service providers, supercomputing centers, enterprise data centers, and stand-alone systems.

Inside the server processor
