Why advances in neural 3D rendering aren't reaching the market

Over the past 10 years, neural networks have taken a giant leap from recognizing simple visual objects to generating coherent text and photorealistic 3D renderings. As computer graphics grow more sophisticated, neural networks are helping to automate a significant part of the workflow, and the market demands new, efficient ways to create 3D images to fill the hyperrealistic spaces of the metaverse.

But what technologies are we going to use to build this space, and will artificial intelligence help us?

Neural networks are emerging

Neural networks first made their mark on the computer vision industry in September 2012, when the AlexNet convolutional neural network won the ImageNet Large Scale Visual Recognition Challenge. AlexNet proved that a deep network could recognize, analyze and classify images, and that revolutionary capability set off the wave of hype that AI art is still riding.

Then, in 2017, a research paper titled "Attention Is All You Need" described the design and architecture of the Transformer, a neural network built for natural language processing (NLP). OpenAI proved the effectiveness of this architecture by creating GPT-3 in 2020, and many tech giants rushed off on a quest for similar results and quality, building their own Transformer-based networks.
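
To make that architecture concrete, here is a minimal sketch of the scaled dot-product attention mechanism at the Transformer's core, assuming PyTorch. The single-head form, the attention function and the toy tensor shapes are illustrative simplifications, not the paper's full multi-head design.

```python
# Minimal sketch of scaled dot-product attention (single head), assuming
# PyTorch. Illustrative only; the Transformer paper adds multi-head attention,
# positional encodings, residual connections and feed-forward layers.
import torch

def attention(q, k, v):
    """q, k, v: (sequence_length, d_model) tensors."""
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # pairwise token similarity
    weights = torch.softmax(scores, dim=-1)      # each row sums to 1
    return weights @ v                           # weighted mix of the values

# Toy usage: a "sentence" of 4 tokens with 8-dimensional embeddings.
x = torch.randn(4, 8)
out = attention(x, x, x)  # self-attention: queries, keys, values all from x
print(out.shape)          # torch.Size([4, 8])
```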

The ability to recognize images and objects and to generate coherent text led to the next logical step in the evolution of neural networks: turning text input into images. This set off extensive research into text-to-image models and, in January 2021, produced the first version of DALL-E, a breakthrough achievement in deep learning for 2D image generation.

From 2D to 3D

Shortly before DALL-E, another breakthrough allowed neural networks to begin creating 3D images with almost the same quality and speed as they could in 2D. This became possible thanks to the Neural Radiance Fields (NeRF) method, which uses a neural network to reconstruct realistic 3D scenes from a collection of 2D images.
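
To illustrate the idea, below is a minimal sketch of a NeRF-style model, assuming PyTorch. The TinyNeRF network, the render_ray helper, the layer sizes and the uniform sampling scheme are illustrative simplifications, not the original paper's implementation: the network maps a 3D point and a viewing direction to a color and density, and a ray's final color is composited from samples along it.

```python
# Sketch of the NeRF idea, assuming PyTorch. Illustrative only; the original
# method adds positional encoding, hierarchical sampling and larger networks.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Maps a 3D point and a viewing direction to (RGB color, density)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),  # (x, y, z) + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                 # RGB + raw density
        )

    def forward(self, points, view_dirs):
        out = self.mlp(torch.cat([points, view_dirs], dim=-1))
        rgb = torch.sigmoid(out[..., :3])         # colors in [0, 1]
        sigma = torch.relu(out[..., 3])           # non-negative density
        return rgb, sigma

def render_ray(model, origin, direction, near=0.0, far=1.0, n_samples=64):
    """Composite a color along one ray via volume rendering."""
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction      # sample points along the ray
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(points, dirs)
    delta = t[1] - t[0]                           # uniform step size
    alpha = 1.0 - torch.exp(-sigma * delta)       # opacity of each sample
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )                                             # accumulated transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)    # expected color of the ray
```

In the full method, the network's weights are optimized so that rays rendered this way reproduce the pixels of the posed input photographs; once trained, the same network can render the scene from entirely new viewpoints.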

Classic CGI has long needed a more economical and flexible way to build 3D scenes. For context, every scene in a video game is made up of millions of triangles, and rendering them takes a great deal of time, energy and processing power. The game development and computer vision industries are therefore always trying to strike a balance between triangle count (the fewer the triangles, the faster the render) and the quality of the output.
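
For a rough feel of that balance, the back-of-the-envelope sketch below relates frame rate to a per-frame triangle budget. The throughput constant is a hypothetical assumption chosen for illustration, not a measured figure for any real GPU.

```python
# Back-of-the-envelope triangle budget. The throughput number is a
# hypothetical assumption for illustration, not a hardware benchmark.
GPU_TRIANGLES_PER_SECOND = 2_000_000_000  # assumed raster throughput
for fps in (30, 60, 120):
    budget = GPU_TRIANGLES_PER_SECOND // fps
    print(f"{fps} fps -> {budget:,} triangles per frame")
```

Doubling the target frame rate halves the per-frame budget, which is why engines must trade mesh detail against output quality.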

Unlike conventional polygonal modeling, neural rendering reproduces a 3D scene based solely on optics and...
