NVIDIA's new AI model quickly generates objects and characters for virtual worlds

NVIDIA is aiming to simplify the creation of virtual 3D worlds with a new artificial intelligence model. The company says GET3D can generate characters, buildings, vehicles and other kinds of 3D objects, and it can do so quickly: around 20 shapes per second on a single GPU.

Researchers trained the model on synthetic 2D images of 3D shapes captured from multiple angles. NVIDIA claims it took only around two days to train GET3D on roughly 1 million images using A100 Tensor Core GPUs.

The model can create objects with "high-fidelity textures and intricate geometric detail," NVIDIA's Isha Salian wrote in a blog post. The shapes made by GET3D "are in the form of a triangular mesh, like a papier-mâché model, covered with a textured material," Salian added.

Because GET3D outputs objects in formats compatible with game engines, 3D modeling software and film renderers, users should be able to import them quickly for further editing. That could make it much easier for developers to populate dense virtual worlds for games and the metaverse. NVIDIA cited robotics and architecture as other use cases.
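To give a concrete sense of what that hand-off could look like, here is a minimal sketch that loads a generated textured triangle mesh with the open-source trimesh library and re-exports it for a game engine. The filename and the assumption that the object was saved as an OBJ file are hypothetical; NVIDIA's post only says GET3D writes objects in compatible formats.

```python
# Minimal sketch: inspect a generated textured triangle mesh and convert it
# to glTF, a format most game engines and DCC tools can import.
# Assumes an OBJ export ("generated_car.obj" is a hypothetical filename);
# the article only says GET3D outputs objects in compatible formats.
import trimesh

mesh = trimesh.load("generated_car.obj", force="mesh")

# GET3D outputs are triangle meshes, so basic geometry stats apply directly.
print(f"vertices:   {len(mesh.vertices)}")
print(f"triangles:  {len(mesh.faces)}")
print(f"watertight: {mesh.is_watertight}")

# Re-export as binary glTF for engines such as Unity or Unreal.
mesh.export("generated_car.glb")
```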

The company said that after training on a dataset of car images, GET3D was able to generate sedans, trucks, race cars, and vans; trained on animal pictures, it can produce foxes, rhinos, horses, and bears. As you'd expect, NVIDIA notes that the larger and more diverse the training set fed to GET3D, "the more varied and detailed the output."


With the help of another NVIDIA AI tool, StyleGAN-NADA, users can apply different styles to an object using text prompts. You can give a car a burnt look, turn a house model into a haunted house or, as a video demonstrating the technology shows, add tiger stripes to any animal.
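For readers curious how a text prompt can restyle a generator at all, the core of StyleGAN-NADA is a directional CLIP loss: the change between images produced by a frozen copy of the generator and the copy being fine-tuned is pushed to align with the change between a source and a target text description. Below is a rough, simplified sketch of that loss, assuming PyTorch and OpenAI's clip package; it illustrates the idea and is not NVIDIA's implementation.

```python
# Rough sketch of the directional CLIP loss behind text-guided restyling
# in StyleGAN-NADA (simplified; prompts and inputs are illustrative only).
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

def directional_clip_loss(imgs_frozen, imgs_trained,
                          text_src="a photo of a car",
                          text_tgt="a photo of a burnt car"):
    """imgs_frozen: CLIP-preprocessed renders from the frozen generator;
    imgs_trained: renders from the copy being fine-tuned."""
    with torch.no_grad():
        tokens = clip.tokenize([text_src, text_tgt]).to(device)
        text_feats = clip_model.encode_text(tokens)
    text_dir = text_feats[1] - text_feats[0]            # how the text changes
    img_dir = (clip_model.encode_image(imgs_trained)
               - clip_model.encode_image(imgs_frozen))  # how the images change
    # Push the image change to point in the same direction as the text change.
    return 1 - F.cosine_similarity(img_dir, text_dir.unsqueeze(0), dim=-1).mean()
```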

The NVIDIA research team behind GET3D believes future versions could be trained on real-world images rather than synthetic data. It may also become possible to train the model on several categories of 3D shapes at once, rather than one category of object at a time.


