FYI: Python code that runs locally on Apple Silicon Macs (tested successfully on M1 Max with 64 GB RAM)

#17
by MaxJob - opened

I've just produced my very first AI image with ControlNet Canny and SDXL 1.0 on my MacBook Pro, starting from the sample code on the model card, and I'm already very impressed. Incredible creations ahead, woohoo!

Here are the few tweaks you'll need to make to the suggested Python code to run the model locally on any Apple Silicon Mac with a sufficient amount of RAM. FYI: this works great for me on an Apple M1 Max with 64 GB of RAM, but SDXL 1.0 is a no-go (for now) on an Apple M1 Pro with only 16 GB.
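
Quick sanity check (my addition, not from the model card): before anything else, you can confirm that your PyTorch build actually sees the Apple Silicon GPU:

import torch
print(torch.backends.mps.is_built())      # True if this PyTorch build includes MPS support
print(torch.backends.mps.is_available())  # True if the MPS device is usable right now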

Step 1: Delete the few occurrences of torch_dtype=torch.float16 in the code; make sure to also delete any comma left dangling by the removal
Step 2: Declare the device to use (the Apple Silicon GPU, exposed through PyTorch's Metal Performance Shaders backend) near the top of the code, by adding a line with: DEVICE='mps'
Step 3: Delete (or comment out) the following line, since it applies only to CUDA-compatible GPUs (i.e. not Apple Silicon): pipe.enable_model_cpu_offload()
Step 4: Specify that you want to run the code on the DEVICE you declared in step 2 by appending .to(DEVICE) after the closing parenthesis of the pipe = definition (see the before/after sketch right below)
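
For reference, here's a minimal before/after sketch of those four changes. The "before" lines are the CUDA-oriented ones from the model card as I recall them, so treat them as an approximation; the full modified listing follows further below.

# Before (model card, CUDA-oriented), approximate:
#   controlnet = ControlNetModel.from_pretrained(
#       "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
#   )
#   pipe.enable_model_cpu_offload()

# After (Apple Silicon):
DEVICE='mps'  # step 2: target the Metal Performance Shaders backend
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0"  # step 1: torch_dtype removed
)
# step 3: pipe.enable_model_cpu_offload() is gone entirely
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    vae=vae,
).to(DEVICE)  # step 4: move the whole pipeline to the MPS device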

The code below includes all of the aforementioned modifications. I hope this helps those of you who are eager to give this ControlNet (or another one) a try locally on your Apple Silicon Macs. Enjoy!

Make sure to install the required libraries first: pip install accelerate transformers safetensors opencv-python diffusers


from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers.utils import load_image
from PIL import Image
import torch
import numpy as np
import cv2

DEVICE='mps'  # run on the Apple Silicon GPU via the Metal Performance Shaders backend

prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
negative_prompt = "low quality, bad quality, sketches"

image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png")

controlnet_conditioning_scale = 0.5  # recommended for good generalization

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0"
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix")
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    vae=vae,
).to(DEVICE)

# Turn the input image into a 3-channel Canny edge map for ControlNet
image = np.array(image)
image = cv2.Canny(image, 100, 200)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
image = Image.fromarray(image)

images = pipe(
    prompt,
    negative_prompt=negative_prompt,
    image=image,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
).images

images[0].save("hug_lab.png")
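
One extra idea for lower-RAM Macs (like the 16 GB M1 Pro where SDXL wouldn't run for me): diffusers pipelines offer enable_attention_slicing(), which trades some speed for a smaller memory footprint. I haven't verified whether it's enough to make SDXL fit in 16 GB, so consider it an experiment:

# Optional, untested on 16 GB: compute attention in slices to lower peak memory
pipe.enable_attention_slicing()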


This is super useful - thanks a lot!


Very very nice!

🧨Diffusers org

@MaxJob would you like to open a pull request to this model card, improving the documentation around how to run on Apple Silicon?

I think the community will benefit from it a lot!

It works fine on my M1 Max 32 GB as well, but the generated images look bad... Is the ControlNet for SDXL just not good enough yet?
