# Distilled Stable Diffusion inference

[[open-in-colab]]

Stable Diffusion inference can be a computationally intensive process because it must iteratively denoise the latents to generate an image. To reduce the computational burden, you can use a *distilled* version of the Stable Diffusion model from [Nota AI](https://huggingface.co/nota-ai). The distilled version of their Stable Diffusion model eliminates some of the residual and attention blocks from the UNet, reducing the model size by 51% and improving latency on CPU/GPU by 43%.

Read this [blog post](https://huggingface.co/blog/sd_distillation) to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model.

Let's load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model:

```py
from diffusers import StableDiffusionPipeline
import torch

distilled = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")

original = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
```

Given a prompt, get the inference time for the original model:

```py
import time

seed = 2023
generator = torch.manual_seed(seed)

NUM_ITERS_TO_RUN = 3
NUM_INFERENCE_STEPS = 25
NUM_IMAGES_PER_PROMPT = 4

prompt = "a golden vase with different flowers"

start = time.time_ns()
for _ in range(NUM_ITERS_TO_RUN):
    images = original(
        prompt,
        num_inference_steps=NUM_INFERENCE_STEPS,
        generator=generator,
        num_images_per_prompt=NUM_IMAGES_PER_PROMPT
    ).images
end = time.time_ns()

original_sd = f"{(end - start) / 1e6:.1f}"
print(f"Execution time -- {original_sd} ms\n")
"Execution time -- 45781.5 ms"
```

Time the distilled model inference:

```py
start = time.time_ns()
for _ in range(NUM_ITERS_TO_RUN):
    images = distilled(
        prompt,
        num_inference_steps=NUM_INFERENCE_STEPS,
        generator=generator,
        num_images_per_prompt=NUM_IMAGES_PER_PROMPT
    ).images
end = time.time_ns()

distilled_sd = f"{(end - start) / 1e6:.1f}"
print(f"Execution time -- {distilled_sd} ms\n")
"Execution time -- 29884.2 ms"
```
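To look at the images behind these timings, you can tile the last generated batch into a single grid. The snippet below is a minimal sketch that reuses the `images` list from the distilled run above together with the `make_image_grid` helper from `diffusers.utils`; the output filename is an arbitrary example.

```py
from diffusers.utils import make_image_grid

# `images` holds the last batch produced by the distilled pipeline above
image_grid = make_image_grid(images, rows=1, cols=NUM_IMAGES_PER_PROMPT)
image_grid.save("distilled_sd_flowers.png")  # example filename
```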
*Figure: sample images generated by the original Stable Diffusion model (45781.5 ms) and the distilled Stable Diffusion model (29884.2 ms).*
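The latency gap comes from the slimmed-down UNet. If you want to verify the size reduction mentioned earlier, a quick parameter count over both loaded pipelines is enough; this is a minimal sketch using plain PyTorch, and the exact numbers depend on the checkpoints you loaded.

```py
# Sketch: compare UNet sizes of the two loaded pipelines with a plain parameter count
original_unet_params = sum(p.numel() for p in original.unet.parameters())
distilled_unet_params = sum(p.numel() for p in distilled.unet.parameters())

print(f"original UNet:  {original_unet_params / 1e6:.0f}M parameters")
print(f"distilled UNet: {distilled_unet_params / 1e6:.0f}M parameters")
```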
## Tiny AutoEncoder

To speed inference up even more, use a tiny distilled version of the [Stable Diffusion VAE](https://huggingface.co/sayakpaul/taesdxl-diffusers) to decode the latents into images. Replace the VAE in the distilled Stable Diffusion model with the tiny VAE:

```py
from diffusers import AutoencoderTiny

distilled.vae = AutoencoderTiny.from_pretrained(
    "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
```

Time the distilled model and distilled VAE inference:

```py
start = time.time_ns()
for _ in range(NUM_ITERS_TO_RUN):
    images = distilled(
        prompt,
        num_inference_steps=NUM_INFERENCE_STEPS,
        generator=generator,
        num_images_per_prompt=NUM_IMAGES_PER_PROMPT
    ).images
end = time.time_ns()

distilled_tiny_sd = f"{(end - start) / 1e6:.1f}"
print(f"Execution time -- {distilled_tiny_sd} ms\n")
"Execution time -- 27165.7 ms"
```
*Figure: sample images generated by the distilled Stable Diffusion model with the Tiny AutoEncoder (27165.7 ms).*
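Finally, you can express the three measurements as relative speedups. The snippet below is a small sketch that reuses the `original_sd`, `distilled_sd`, and `distilled_tiny_sd` strings recorded above.

```py
# Sketch: turn the recorded timing strings into speedups relative to the original pipeline
baseline = float(original_sd)

for name, timing in [("distilled", distilled_sd), ("distilled + tiny VAE", distilled_tiny_sd)]:
    print(f"{name}: {baseline / float(timing):.2f}x faster than the original")
```

With the timings shown above, this works out to roughly a 1.53x speedup for the distilled model and 1.69x with the Tiny AutoEncoder added on top.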