<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Text-to-Image Generation with ControlNet Conditioning

## Overview

ControlNet was introduced in [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) by Lvmin Zhang and Maneesh Agrawala.

Using a pretrained ControlNet, we can provide a control image (for example, a depth map) so that Stable Diffusion text-to-image generation follows the structure of that image and fills in the details.

The abstract of the paper is the following:

*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.*

This model was contributed by the amazing community contributor [takuma104](https://huggingface.co/takuma104) ❤️.

Resources:

* [Paper](https://arxiv.org/abs/2302.05543)
* [Original Code](https://github.com/lllyasviel/ControlNet)
## Available Pipelines:

| Pipeline | Tasks | Demo |
|---|---|:---:|
| [StableDiffusionControlNetPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py) | *Text-to-Image Generation with ControlNet Conditioning* | [Colab Example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/controlnet.ipynb) |
## Usage example

In the following, we give a simple example of how to use a *ControlNet* checkpoint with Diffusers for inference.
The inference process is the same for all ControlNet checkpoints:

1. Take an image and run it through a pre-conditioning processor.
2. Run the pre-processed image through the [`StableDiffusionControlNetPipeline`].

Let's have a look at a simple example using the [Canny Edge ControlNet](https://huggingface.co/lllyasviel/sd-controlnet-canny).
```python
from diffusers import StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Let's load the popular Vermeer image
image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
```
![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png)

Next, we process the image to get the canny image. This is step *1.* - running the pre-conditioning processor. The pre-conditioning processor is different for every ControlNet. Please see the model cards of the [official checkpoints](#controlnet-with-stable-diffusion-1.5) for more information about other models.

First, we need to install opencv:

```
pip install opencv-contrib-python
```

Next, let's also install all required Hugging Face libraries:

```
pip install diffusers transformers git+https://github.com/huggingface/accelerate.git
```
Then we can retrieve the canny edges of the image.

```python
import cv2
from PIL import Image
import numpy as np

image = np.array(image)

low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
```
Let's take a look at the processed image.

![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vermeer_canny_edged.png)

Now, we load the official [Stable Diffusion 1.5 model](https://huggingface.co/runwayml/stable-diffusion-v1-5) as well as the ControlNet for canny edges.
```py
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
import torch

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
```
To speed things up and reduce memory usage, let's enable model offloading and use the fast [`UniPCMultistepScheduler`].

```py
from diffusers import UniPCMultistepScheduler

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# this command loads the individual model components on the GPU on demand.
pipe.enable_model_cpu_offload()
```
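If memory is still a constraint, attention slicing (part of the pipeline API listed at the end of this page) can further lower the peak memory footprint at a small speed cost. This is an optional extra, not part of the original example:

```py
# Optional: trade a bit of speed for a lower peak memory footprint.
pipe.enable_attention_slicing()
```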
Finally, we can run the pipeline:

```py
generator = torch.manual_seed(0)

out_image = pipe(
    "disco dancer with colorful lights", num_inference_steps=20, generator=generator, image=canny_image
).images[0]
```

This should take only around 3-4 seconds on a GPU (depending on the hardware). The output image then looks as follows:

![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vermeer_disco_dancing.png)

**Note**: To see how to run all other ControlNet checkpoints, please have a look at [ControlNet with Stable Diffusion 1.5](#controlnet-with-stable-diffusion-1.5).

<!-- TODO: add space -->
## Combining multiple conditionings

Multiple ControlNet conditionings can be combined for a single image generation. Pass a list of ControlNets to the pipeline's constructor and a corresponding list of conditioning images to `__call__`.

When combining conditionings, it is helpful to mask them so that they do not overlap. In the example below, we mask the middle of the canny map, where the pose conditioning is located.

It can also be helpful to vary the `controlnet_conditioning_scale` values to emphasize one conditioning over the other.
### Canny conditioning

The original image:

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"/>

Prepare the conditioning:
```python
from diffusers.utils import load_image
from PIL import Image
import cv2
import numpy as np

canny_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"
)
canny_image = np.array(canny_image)

low_threshold = 100
high_threshold = 200

canny_image = cv2.Canny(canny_image, low_threshold, high_threshold)

# zero out the middle columns of the image where the pose will be overlaid
zero_start = canny_image.shape[1] // 4
zero_end = zero_start + canny_image.shape[1] // 2
canny_image[:, zero_start:zero_end] = 0

canny_image = canny_image[:, :, None]
canny_image = np.concatenate([canny_image, canny_image, canny_image], axis=2)
canny_image = Image.fromarray(canny_image)
```
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/landscape_canny_masked.png"/> | |
### Openpose conditioning | |
The original image: | |
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" width=600/> | |
Prepare the conditioning:
```python
from controlnet_aux import OpenposeDetector
from diffusers.utils import load_image

openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")

openpose_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png"
)
openpose_image = openpose(openpose_image)
```

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/person_pose.png" width=600/>
### Running ControlNet with multiple conditionings

```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch

controlnet = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
]

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()

prompt = "a giant standing in a fantasy landscape, best quality"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality"

generator = torch.Generator(device="cpu").manual_seed(1)

images = [openpose_image, canny_image]

image = pipe(
    prompt,
    images,
    num_inference_steps=20,
    generator=generator,
    negative_prompt=negative_prompt,
    controlnet_conditioning_scale=[1.0, 0.8],
).images[0]

image.save("./multi_controlnet_output.png")
```

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/multi_controlnet_output.png" width=600/>
## Available checkpoints

ControlNet requires a *control image* in addition to the text-to-image *prompt*.

Each pretrained model is trained using a different conditioning method that requires different images for conditioning the generated outputs. For example, Canny edge conditioning requires the control image to be the output of a Canny filter, while depth conditioning requires the control image to be a depth map. See the overview and image examples below to learn more.
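For instance, to use the depth checkpoint from the table below, the control image has to be a depth map. The sketch below shows one way to prepare such a control image with the `transformers` depth-estimation pipeline and then reuse the same workflow as above; the choice of depth estimator and the prompt are illustrative assumptions, not something prescribed by this document:

```python
import numpy as np
import torch
from PIL import Image
from transformers import pipeline
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Estimate a depth map for the input image (uses transformers' default depth-estimation model).
depth_estimator = pipeline("depth-estimation")
image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
depth = depth_estimator(image)["depth"]

# Convert the single-channel depth map into a 3-channel control image.
depth = np.array(depth)[:, :, None]
depth = np.concatenate([depth, depth, depth], axis=2)
depth_image = Image.fromarray(depth)

# Same pipeline as before, just with the depth ControlNet swapped in.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Example prompt; any prompt works here.
out_image = pipe(
    "a stylish dancer, best quality", num_inference_steps=20, image=depth_image
).images[0]
```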
All checkpoints can be found under the authors' namespace [lllyasviel](https://huggingface.co/lllyasviel).
### ControlNet with Stable Diffusion 1.5

| Model Name | Control Image Overview | Control Image Example | Generated Image Example |
|---|---|---|---|
|[lllyasviel/sd-controlnet-canny](https://huggingface.co/lllyasviel/sd-controlnet-canny)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_canny.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"/></a>|
|[lllyasviel/sd-controlnet-depth](https://huggingface.co/lllyasviel/sd-controlnet-depth)<br/> *Trained with Midas depth estimation* |A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_depth.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_depth.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"/></a>|
|[lllyasviel/sd-controlnet-hed](https://huggingface.co/lllyasviel/sd-controlnet-hed)<br/> *Trained with HED edge detection (soft edge)* |A monochrome image with white soft edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_hed.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_hed.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"/></a>|
|[lllyasviel/sd-controlnet-mlsd](https://huggingface.co/lllyasviel/sd-controlnet-mlsd)<br/> *Trained with M-LSD line detection* |A monochrome image composed only of white straight lines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_mlsd.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_mlsd.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"/></a>|
|[lllyasviel/sd-controlnet-normal](https://huggingface.co/lllyasviel/sd-controlnet-normal)<br/> *Trained with normal map* |A [normal-mapped](https://en.wikipedia.org/wiki/Normal_mapping) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_normal.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_normal.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"/></a>|
|[lllyasviel/sd-controlnet-openpose](https://huggingface.co/lllyasviel/sd-controlnet-openpose)<br/> *Trained with OpenPose bone image* |An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_openpose.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"/></a>|
|[lllyasviel/sd-controlnet-scribble](https://huggingface.co/lllyasviel/sd-controlnet-scribble)<br/> *Trained with human scribbles* |A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_scribble.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_scribble.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"/></a>|
|[lllyasviel/sd-controlnet-seg](https://huggingface.co/lllyasviel/sd-controlnet-seg)<br/>*Trained with semantic segmentation* |An image that follows the [ADE20K](https://groups.csail.mit.edu/vision/datasets/ADE20K/) segmentation protocol.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_seg.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_seg.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"/></a>|
## StableDiffusionControlNetPipeline

[[autodoc]] StableDiffusionControlNetPipeline
	- all
	- __call__
	- enable_attention_slicing
	- disable_attention_slicing
	- enable_vae_slicing
	- disable_vae_slicing
	- enable_xformers_memory_efficient_attention
	- disable_xformers_memory_efficient_attention

## FlaxStableDiffusionControlNetPipeline

[[autodoc]] FlaxStableDiffusionControlNetPipeline
	- all
	- __call__