IBM announces on-chip artificial intelligence hardware


Recent years have seen a growing demand for artificial intelligence (AI) acceleration hardware. IBM has taken note.

In the early days of AI, off-the-shelf CPU and GPU technologies were sufficient to handle the data sizes and computational demands of the models of the time. But with the emergence of larger datasets and deep learning models, there is now a clear need for purpose-built AI acceleration hardware.

IBM is now throwing its hat into the hardware-acceleration ring with this week's announcement of the IBM Artificial Intelligence Unit (AIU). The AIU is a complete system-on-chip board that can connect to servers through a standard PCIe interface.

IBM's AI Unit is a complete system-on-chip AI accelerator card that plugs into industry-standard PCIe slots. Credit: IBM Research
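As a rough illustration of what "plugs into a standard PCIe slot" means on the host side, the sketch below enumerates PCIe devices on a Linux server by reading sysfs. It is generic and not specific to the AIU; the vendor-ID filter uses IBM's registered PCI vendor ID (0x1014) purely as an assumed example, since the article does not publish the card's actual device identifiers.

    # Illustrative sketch only: how a Linux host enumerates PCIe devices via sysfs.
    # Generic, not specific to the AIU; the vendor-ID filter is an assumption for
    # illustration, since the card's device identifiers are not given in the article.
    from pathlib import Path

    PCI_SYSFS = Path("/sys/bus/pci/devices")
    IBM_VENDOR_ID = "0x1014"  # IBM's registered PCI vendor ID (used here as an example)

    def list_pci_devices():
        """Yield (address, vendor, device, pci_class) for every PCIe function."""
        for dev in sorted(PCI_SYSFS.iterdir()):
            vendor = (dev / "vendor").read_text().strip()
            device = (dev / "device").read_text().strip()
            pci_class = (dev / "class").read_text().strip()
            yield dev.name, vendor, device, pci_class

    if __name__ == "__main__":
        for addr, vendor, device, pci_class in list_pci_devices():
            tag = "  <-- IBM vendor ID" if vendor == IBM_VENDOR_ID else ""
            print(f"{addr}  vendor={vendor}  device={device}  class={pci_class}{tag}")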

The AIU is based on the same AI core that's built into IBM's Telum chip, which powers IBM z16-series mainframes, including the LinuxONE Emperor 4. Each AIU has 32 cores, built with a 5nm (nanometer) process, while the Telum processor's AI cores are 7nm.


“At IBM Research, we have a very strong microarchitecture and circuit design team that has focused on high performance designs primarily for HPC [high performance computing] and servers for many decades,” Jeff Burns, director of the IBM Research AI Hardware Center, told VentureBeat. “So the same people started thinking about accelerating deep learning.”

Accelerating AI with IBM's Artificial Intelligence Unit (AIU)

The basic ideas behind IBM's AI accelerator were first developed in 2017 and have been refined in the years since.

The work of accelerating AI was taken up by the IBM Systems group, which integrated the technology into the processors that run its mainframes. Burns said his team also wanted to design a complete system-on-chip and PCIe card to create a pluggable AI acceleration device that could be integrated into IBM's x86-based cloud, IBM Power enterprise servers, or servers built by IBM's partners.

The AIU is not a CPU or GPU, but rather an application-specific integrated circuit (ASIC). Burns explained that instead of taking GPU technology and optimizing it for AI, IBM designed the chip from the ground up for AI. As such, the...
