Apple cuts AI image generation times in half with a Stable Diffusion fix

Two examples of artwork created by Stable Diffusion, provided by Apple.


On Wednesday, Apple released improvements that allow the Stable Diffusion AI image generator to run on Apple Silicon using Core ML, Apple’s framework for machine learning models. The improvements let app developers use the Apple Neural Engine hardware to run Stable Diffusion up to twice as fast as previous Mac-based methods.

Stable Diffusion (SD), which launched in August, is an open-source AI image synthesis model that generates new images from text input. For example, typing “astronaut on a dragon” into SD will create an image of exactly that.
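To make that text-to-image flow concrete, here is a minimal sketch of the conventional open-source route using Hugging Face’s diffusers library in Python — not Apple’s new Core ML path. The checkpoint name and the use of PyTorch’s “mps” backend on Apple Silicon are assumptions for illustration.

```python
# Minimal text-to-image sketch with Hugging Face diffusers (not Apple's Core ML path).
# Assumes `pip install diffusers transformers torch` and access to the model weights.
import torch
from diffusers import StableDiffusionPipeline

# "runwayml/stable-diffusion-v1-5" is one commonly used Stable Diffusion 1.5 checkpoint.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# On Apple Silicon, PyTorch's Metal backend ("mps") is the usual device; use "cuda" on an Nvidia GPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"
pipe = pipe.to(device)
pipe.enable_attention_slicing()  # eases memory pressure on Macs

# A text prompt in, a 512 x 512 image out (50 denoising steps, matching the benchmarks below).
image = pipe("astronaut on a dragon", num_inference_steps=50).images[0]
image.save("astronaut_on_a_dragon.png")
```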

By releasing the new SD optimizations — available as conversion scripts on GitHub — Apple wants to unlock the full potential of image synthesis on its devices, as it notes on its Apple Research announcement page: “With the number of Stable Diffusion applications growing, ensuring that developers can effectively leverage this technology is critical to creating applications that creators everywhere can use.”

Apple also mentions privacy and the avoidance of cloud computing costs as benefits of running an AI generation model locally on a Mac or other Apple device.

“The privacy of the end user is protected because any data the user provides as input to the model remains on the user’s device,” says Apple. “Second, after the initial download, users do not need an Internet connection to use the model. Finally, deploying this model locally allows developers to reduce or eliminate server-related costs.”

Currently, Stable Diffusion generates images fastest on high-end Nvidia GPUs when run natively on a Windows or Linux PC. For example, generating a 512 x 512 image at 50 steps on an RTX 3060 takes about 8.7 seconds on our machine.

By comparison, the traditional method of running Stable Diffusion on an Apple Silicon Mac is far slower, taking about 69.8 seconds to generate a 512 x 512 image at 50 steps using Diffusion Bee in our tests on an M1 Mac Mini.

According to Apple’s benchmarks on GitHub, Apple’s new Core ML SD enhancements can generate a 512 x 512 50-step image on an M1 chip in 35 seconds. The M2 does the job in 23 seconds, and Apple’s most powerful silicon chip, the M1 Ultra, can achieve the same result in just nine seconds. This is a huge improvement, cutting generation time almost in half in the case of the M1.

Apple’s GitHub release is a Python package that converts Stable Diffusion models from PyTorch to Core ML and includes a Swift package for model deployment. The optimizations work with Stable Diffusion 1.4, 1.5, and the newly released 2.0.
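As a rough sketch of what that conversion step looks like, the snippet below drives the torch2coreml script from Apple’s ml-stable-diffusion package via a subprocess call. The module and flag names follow the repo’s README at the time of writing and may change, so treat them as assumptions and check the GitHub page.

```python
# Sketch of the PyTorch -> Core ML conversion using Apple's ml-stable-diffusion package.
# Assumes the package is installed (e.g. `pip install -e .` inside a clone of the repo);
# module and flag names are taken from the repo's README and may change.
import subprocess

subprocess.run(
    [
        "python", "-m", "python_coreml_stable_diffusion.torch2coreml",
        "--convert-unet",          # the UNet denoiser
        "--convert-text-encoder",  # the CLIP text encoder
        "--convert-vae-decoder",   # the VAE decoder that turns latents into pixels
        "-o", "coreml_models",     # output directory for the .mlpackage files
    ],
    check=True,
)
```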

For now, setting up Stable Diffusion with Core ML natively on a Mac is aimed at developers and requires some basic command-line skills, but Hugging Face has published an in-depth guide to setting up Apple’s Core ML optimizations for those who want to experiment.
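Once the models are converted, generating an image from the command line looks roughly like the sketch below, which wraps the pipeline module described in Apple’s and Hugging Face’s documentation. The flag names here are assumptions based on those docs and should be verified against the repo.

```python
# Sketch of generating an image from the converted Core ML models.
# python_coreml_stable_diffusion.pipeline and its flags follow Apple's README; verify against the repo.
import subprocess

subprocess.run(
    [
        "python", "-m", "python_coreml_stable_diffusion.pipeline",
        "--prompt", "astronaut on a dragon",
        "-i", "coreml_models",    # directory produced by the conversion step above
        "-o", "output_images",    # where the generated image is written
        "--compute-unit", "ALL",  # let Core ML use the CPU, GPU, and Neural Engine
        "--seed", "93",           # fixed seed for a reproducible result
    ],
    check=True,
)
```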

For those less technically inclined, the previously mentioned Diffusion Bee app makes it easy to run Stable Diffusion on Apple Silicon, but it doesn’t integrate Apple’s new improvements yet. You can also run Stable Diffusion on an iPhone or iPad using the Draw Things app.
