VFX Artists Show Hollywood Can Use AI to Create, Not Exploit

Hollywood may be embroiled in ongoing labor disputes involving AI, but the technology infiltrated film and television a long, long time ago. At SIGGRAPH in Los Angeles, algorithmic and generative tools have been featured in countless talks and announcements. We may not yet know where GPT-4 and Stable Diffusion fit in, but the creative side of production is ready to embrace them - if it can be done in a way that augments rather than replaces artists.

SIGGRAPH is not a conference on film and television production but on computer graphics and visual effects (for 50 years now!), and the two fields have naturally come to overlap more and more in recent years.

This year, the elephant in the room was the strike, and though few presentations or discussions were dedicated to it, at afterparties and networking events it was more or less the first thing people talked about. Still, SIGGRAPH is primarily a conference aimed at bringing technical and creative minds together, and the vibe I felt was "it sucks, but in the meantime, we can keep improving our craft".


The fears around AI in production are, if not illusory, certainly a little misleading. Generative AI, such as image and text models, has improved dramatically, raising fears that it will replace writers and artists. And certainly studio executives have floated nefarious (and unrealistic) hopes of partly replacing writers and actors with AI tools. But AI has been around in film and TV for quite some time, performing important, artist-driven tasks.

I've seen this in many panels, technical paper presentations and interviews. A full history of AI in visual effects would make for interesting reading, but for now, here are a few ways that AI in its various guises was showcased at the forefront of effects and production work.

Pixar artists leverage ML and simulations

An early example came from two Pixar presentations on the animation techniques used in its latest film, Elemental. The film's characters are more abstract than usual, and creating a person made of fire, water or air is no easy prospect. Imagine capturing the fractal complexity of those substances in a body that can act and express itself clearly while still looking "real".

As the animators and effects coordinators explained in turn, procedural generation was at the heart of the process, simulating and setting up the flames, waves or vapors that made up dozens of characters. Sculpting and animating every little tongue of flame or wisp of cloud that escapes a character by hand was never an option: it would have been extremely tedious, laborious and more technical than creative work.

But as the presentations made clear, even though the team relied heavily on simulations and sophisticated material shaders to create the desired effects, the artistic process was deeply intertwined with the engineering. (They also collaborated on this with researchers from ETH Zurich.)

An example is the overall look of one of the main characters, Ember, who is made of flames. It was not enough to simulate flames, tweak colors or adjust the many dials affecting the result. Ultimately, the flames needed to reflect the look the artist wanted, not just how flames appear in real life. To this end, the team used "volumetric neural style transfer", or NST. Style transfer is a machine learning technique most people will have encountered, for example in apps that redraw a selfie in the style of Edvard Munch.

In this case, the team took the raw voxels from the "pyro simulation" of generated flames and ran them through a style transfer network trained on an artist's rendering of what she wanted the character's flames to look like: more stylized, less simulated. The resulting voxels have the natural, unpredictable look of a simulation, but also the distinctive cast chosen by the artist.
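For readers curious about the underlying idea, here is a minimal, illustrative sketch of generic neural style transfer in PyTorch. It is not Pixar's pipeline (their volumetric version operates on simulation voxels, and its networks and losses aren't public); it only shows the core mechanic of optimizing content so its deep-feature statistics match a style reference, here with ordinary 2D images and a pretrained VGG network.

```python
# Toy 2D neural style transfer (PyTorch + torchvision). Illustrative only --
# NOT Pixar's volumetric NST. The idea: keep the content's structure while
# matching the feature statistics ("style") of a reference image.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

MEAN = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)
LAYERS = {0, 5, 10, 19, 28}  # conv1_1 .. conv5_1 in torchvision's VGG-19

def features(x):
    """Collect activations at the chosen VGG layers."""
    x = (x - MEAN) / STD  # VGG expects ImageNet-normalized input
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            feats.append(x)
    return feats

def gram(f):
    """Channel-correlation (Gram) matrix: the 'style' statistics of a feature map."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylize(content, style, steps=300, style_weight=1e5):
    """content, style: (1, 3, H, W) tensors in [0, 1] on `device`."""
    target = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    with torch.no_grad():
        content_feats = features(content)
        style_grams = [gram(f) for f in features(style)]
    for _ in range(steps):
        opt.zero_grad()
        feats = features(target)
        c_loss = F.mse_loss(feats[-1], content_feats[-1])   # preserve structure
        s_loss = sum(F.mse_loss(gram(f), g)                 # match style statistics
                     for f, g in zip(feats, style_grams))
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return target.detach().clamp(0, 1)
```

Conceptually, Pixar's approach applies the same optimization idea in 3D: the "content" is the simulated flame volume and the "style" comes from the artist's reference, so the result keeps the simulation's natural motion while taking on the look the artist chose.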

Simplified example of NST in action adding flair to Ember's flames. Image credits: Pixar

Of course, the animators are...
