Nvidia adds features to manage AI at the edge

We're excited to bring Transform 2022 back in person on July 19 and virtually from July 20-28. Join leaders in AI and data for in-depth discussions and exciting networking opportunities. Sign up today!

Nvidia already enjoys a global reputation, and a #1 market-share position, for manufacturing industry-leading graphics processing units (GPUs) that render images, video, and 2D or 3D animations. Lately, the company has leveraged that success to venture into new computing territory, but without manufacturing hardware.

A year after launching Nvidia Fleet Command, a cloud-based service for deploying, managing, and scaling AI applications at the edge, the company has introduced new features that help close the distance to these servers by improving the management of edge AI deployments around the world.

Edge computing is a distributed computing system with its own set of resources that allows data to be processed closer to its origin instead of having to move it to a centralized cloud or data center. Edge computing speeds up analysis by reducing the latency involved in moving data back and forth. Fleet Command is designed to allow control of such deployments through its cloud interface.
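The latency argument is easiest to see with rough numbers. The sketch below is a back-of-the-envelope illustration only; the round-trip times and per-frame inference cost are hypothetical placeholders, not Nvidia figures, but they show why processing data near its origin beats shipping every sample to a distant cloud:

```python
import time  # stdlib only; no external dependencies

# Hypothetical round-trip latencies in seconds; real values vary by site.
CLOUD_RTT = 0.120   # data sent to a distant cloud region and back
EDGE_RTT = 0.005    # data handled by a server at the site itself

def total_latency(frames: int, rtt: float, inference: float = 0.010) -> float:
    """Time to analyze `frames` camera frames, one round trip each."""
    return frames * (rtt + inference)

cloud = total_latency(100, CLOUD_RTT)  # 100 * 0.130 = 13.0 s
edge = total_latency(100, EDGE_RTT)    # 100 * 0.015 = 1.5 s
print(f"cloud: {cloud:.1f}s, edge: {edge:.1f}s")
```

Even with generous assumptions for the cloud path, the edge deployment finishes the same workload several times faster, which is the gap Fleet Command's remote management is meant to let organizations exploit without losing central control.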

"In the world of AI, distance is not the friend of many IT managers," wrote Troy Estes, product marketing manager at Nvidia, in a blog post. "Unlike data centers, where resources and personnel are consolidated, companies deploying AI applications at the edge need to think about how to handle the extreme nature of edge environments."


The network links between data centers or clouds and a remote AI deployment are often difficult to make fast enough for production use. Given the vast amounts of data that AI applications require, it takes high-performance networking and data management to make these deployments perform well enough to meet service-level agreements (SLAs).

"You can run AI in the cloud," Amanda Saunders, senior AI video manager at Nvidia, told VentureBeat. "But generally there's latency in sending data back and forth, and a lot of these locations don't have strong network connections; they may appear to be connected, but they are not always. Fleet Command allows you to deploy these apps to the edge while maintaining control over them, so you can remotely access not just the system but the application itself and see everything that's going on."

At the scale of some edge AI deployments, organizations can have up to thousands of independent locations that IT must manage, sometimes in extremely remote settings such as oil rigs, weather gauges, distributed retail stores, or industrial facilities. Managing these network connections is not for the faint of heart.

Nvidia Fleet Command offers a managed platform for orchestrating containers using a Kubernetes distribution, which makes provisioning and deployment relatively easy...
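Because that orchestration layer is Kubernetes-based, an edge AI workload is described declaratively rather than configured by hand at each site. As an illustration only, and not Fleet Command's actual interface, a generic Kubernetes Deployment for a containerized inference app requesting a GPU might look like the following (the names and container image are hypothetical; the `nvidia.com/gpu` resource is exposed by NVIDIA's Kubernetes device plugin):

```yaml
# Generic Kubernetes Deployment sketch; names and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-inference
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-inference
  template:
    metadata:
      labels:
        app: edge-inference
    spec:
      containers:
      - name: inference
        image: example.com/edge-inference:latest   # hypothetical image
        resources:
          limits:
            nvidia.com/gpu: 1   # one GPU via the NVIDIA device plugin
```

A manifest like this is what a managed control plane can push to thousands of sites identically, which is the advantage over logging in to each remote box.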

