Apple halves AI image synthesis times with new Stable Diffusion optimizations

Two examples of illustrations generated by Stable Diffusion. Credit: Apple

On Wednesday, Apple released optimizations that allow the Stable Diffusion AI image generator to run on Apple Silicon using Core ML, Apple's proprietary framework for machine learning models. The optimizations will allow application developers to use Apple Neural Engine hardware to run Stable Diffusion approximately twice as fast as previous Mac methods.

Stable Diffusion (SD), launched in August, is an open-source AI image synthesis model that generates new images using text input. For example, typing "astronaut on a dragon" in SD will usually create an image of exactly that.
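To make that text-to-image workflow concrete, here is a minimal sketch using the open-source Hugging Face diffusers library, one common way to run Stable Diffusion in Python (separate from the Core ML path Apple just released); the model ID and the "mps" Apple Silicon backend are assumptions about a typical setup:

    # Minimal text-to-image sketch with Hugging Face diffusers (assumed setup).
    from diffusers import StableDiffusionPipeline

    # Download the Stable Diffusion weights on first run (several gigabytes).
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe = pipe.to("mps")  # Apple Silicon GPU backend; use "cuda" on Nvidia cards

    # 50 denoising steps at the default 512x512 resolution, matching the
    # benchmarks discussed below.
    image = pipe("astronaut on a dragon", num_inference_steps=50).images[0]
    image.save("astronaut_on_a_dragon.png")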

By releasing the new SD optimizations, available as conversion scripts on GitHub, Apple wants to unlock the full potential of image synthesis on its devices, as it notes on its Apple Research announcement page: "With the growing number of applications of Stable Diffusion, it's important to ensure that developers can effectively leverage this technology to create apps that creatives everywhere can use."

Apple also cites privacy and the avoidance of cloud computing costs as benefits of running an AI image synthesis model locally on a Mac or other Apple device.

"End-user privacy is protected because all data provided by the user as input to the model remains on the user's device," says Apple. "Secondly, after the initial download, users do not need an internet connection to use the model. Finally, local deployment of this model allows developers to reduce or eliminate their server costs."< /p>

Currently, Stable Diffusion generates images faster on high-end Nvidia GPUs when run locally on a Windows or Linux PC. For example, generating a 512×512 image at 50 steps on an RTX 3060 takes about 8.7 seconds on our machine.

In comparison, the conventional method of running Stable Diffusion on an Apple Silicon Mac is much slower, taking around 69.8 seconds to generate a 512×512 image at 50 steps using Diffusion Bee in our tests on an M1 Mac Mini.

According to Apple's benchmarks on GitHub, Apple's new Core ML SD optimizations can generate a 512×512 image at 50 steps on an M1 chip in 35 seconds. An M2 does the job in 23 seconds, and Apple's most powerful silicon chip, the M1 Ultra, can achieve the same result in just nine seconds. That's a dramatic improvement, cutting generation time roughly in half in the case of the M1.

Apple's GitHub release is a Python package that converts Stable Diffusion models from PyTorch to Core ML and includes a Swift package for model deployment. The optimizations work for Stable Diffusion 1.4, 1.5 and the new version 2.0.
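As a rough illustration of that conversion step, the commands below follow the usage documented in Apple's ml-stable-diffusion repository; the output directory is a placeholder, and exact flags may vary between releases:

    # Clone Apple's repo and install its Python package.
    git clone https://github.com/apple/ml-stable-diffusion
    cd ml-stable-diffusion && pip install -e .

    # Convert the PyTorch weights to Core ML .mlpackage files.
    python -m python_coreml_stable_diffusion.torch2coreml \
        --convert-unet --convert-text-encoder --convert-vae-decoder \
        --convert-safety-checker -o <output-directory>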

At the moment, setting up Stable Diffusion with Core ML locally on a Mac is aimed at developers and requires some basic command-line skills, but Hugging Face has published a detailed guide to setting up Apple's Core ML optimizations for those who want to experiment.
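Once the models are converted, generating an image is a single command against the package's Python pipeline; again, this is a sketch based on the repository's documented usage, with placeholder paths:

    # Run inference with the converted Core ML models (paths are placeholders).
    python -m python_coreml_stable_diffusion.pipeline \
        --prompt "astronaut on a dragon" \
        -i <converted-models-directory> -o <output-directory> \
        --compute-unit ALL --seed 93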

For those less technically inclined, the aforementioned Diffusion Bee app makes it easier to run Stable Diffusion on Apple Silicon, but it doesn't yet incorporate Apple's new optimizations. A...

