Upload 9 files

- .gitattributes +1 -0
- README.md +146 -0
- config.json +41 -0
- diffusion_pytorch_model.bin +3 -0
- diffusion_pytorch_model.safetensors +3 -0
- images/bird.png +3 -0
- images/bird_canny.png +0 -0
- images/bird_canny_out.png +0 -0
- sd.png +0 -0
.gitattributes
CHANGED
@@ -32,3 +32,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+images/bird.png filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,146 @@
---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
tags:
- art
- controlnet
- stable-diffusion
- image-to-image
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/canny-edge.jpg
  prompt: Girl with Pearl Earring
---

# ControlNet - *Canny Version*

ControlNet is a neural network structure to control diffusion models by adding extra conditions.
This checkpoint corresponds to the ControlNet conditioned on **Canny edges**.

It can be used in combination with [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/text2img).

![img](./sd.png)

## Model Details
- **Developed by:** Lvmin Zhang, Maneesh Agrawala
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying out in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543).
- **Cite as:**

      @misc{zhang2023adding,
        title={Adding Conditional Control to Text-to-Image Diffusion Models},
        author={Lvmin Zhang and Maneesh Agrawala},
        year={2023},
        eprint={2302.05543},
        archivePrefix={arXiv},
        primaryClass={cs.CV}
      }

## Introduction

ControlNet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by
Lvmin Zhang and Maneesh Agrawala.

The abstract reads as follows:

*We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions.
The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k).
Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device.
Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data.
We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc.
This may enrich the methods to control large diffusion models and further facilitate related applications.*

+
## Released Checkpoints
|
55 |
+
|
56 |
+
The authors released 8 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
|
57 |
+
on a different type of conditioning:
|
58 |
+
|
59 |
+
| Model Name | Control Image Overview| Control Image Example | Generated Image Example |
|
60 |
+
|---|---|---|---|
|
61 |
+
|[lllyasviel/sd-controlnet-canny](https://huggingface.co/lllyasviel/sd-controlnet-canny)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_canny.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"/></a>|
|
62 |
+
|[lllyasviel/sd-controlnet-depth](https://huggingface.co/lllyasviel/sd-controlnet-depth)<br/> *Trained with Midas depth estimation* |A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_depth.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_depth.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"/></a>|
|
63 |
+
|[lllyasviel/sd-controlnet-hed](https://huggingface.co/lllyasviel/sd-controlnet-hed)<br/> *Trained with HED edge detection (soft edge)* |A monochrome image with white soft edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_hed.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_hed.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"/></a> |
|
64 |
+
|[lllyasviel/sd-controlnet-mlsd](https://huggingface.co/lllyasviel/sd-controlnet-mlsd)<br/> *Trained with M-LSD line detection* |A monochrome image composed only of white straight lines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_mlsd.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_mlsd.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"/></a>|
|
65 |
+
|[lllyasviel/sd-controlnet-normal](https://huggingface.co/lllyasviel/sd-controlnet-normal)<br/> *Trained with normal map* |A [normal mapped](https://en.wikipedia.org/wiki/Normal_mapping) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_normal.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_normal.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"/></a>|
|
66 |
+
|[lllyasviel/sd-controlnet_openpose](https://huggingface.co/lllyasviel/sd-controlnet-openpose)<br/> *Trained with OpenPose bone image* |A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_openpose.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"/></a>|
|
67 |
+
|[lllyasviel/sd-controlnet_scribble](https://huggingface.co/lllyasviel/sd-controlnet-scribble)<br/> *Trained with human scribbles* |A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_scribble.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_scribble.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"/></a> |
|
68 |
+
|[lllyasviel/sd-controlnet_seg](https://huggingface.co/lllyasviel/sd-controlnet-seg)<br/>*Trained with semantic segmentation* |An [ADE20K](https://groups.csail.mit.edu/vision/datasets/ADE20K/)'s segmentation protocol image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_seg.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_seg.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"/></a> |
|
69 |
+
|
70 |
+
|
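All of the checkpoints above share the same `ControlNetModel` interface, so switching the conditioning type amounts to swapping the repository id. A minimal sketch, using the depth checkpoint as an arbitrary example:

```python
# Minimal sketch: each released checkpoint loads through the same ControlNetModel API;
# only the repository id changes (depth is used here as an arbitrary example).
import torch
from diffusers import ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
```
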
## Example

It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5), as the checkpoint
has been trained on it.
Experimentally, the checkpoint can also be used with other diffusion models, such as DreamBoothed Stable Diffusion (see the sketch below).

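A hedged sketch of that substitution, where `some-user/dreamboothed-sd15` is a hypothetical placeholder id rather than a real repository:

```python
# Sketch: pairing this ControlNet with a DreamBoothed base model.
# "some-user/dreamboothed-sd15" is a hypothetical placeholder id.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "some-user/dreamboothed-sd15",  # hypothetical DreamBooth checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
```
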
**Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required, as shown below.

1. Install opencv:

```sh
$ pip install opencv-contrib-python
```

2. Let's install `diffusers` and related packages:

```sh
$ pip install diffusers transformers accelerate
```

3. Run the code:

```python
import cv2
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
import torch
import numpy as np
from diffusers.utils import load_image

image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-canny/resolve/main/images/bird.png")
image = np.array(image)

# Canny thresholds: gradients below low_threshold are discarded,
# gradients above high_threshold are always kept as edges.
low_threshold = 100
high_threshold = 200

image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)  # 1-channel edge map -> 3-channel image
image = Image.fromarray(image)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, safety_checker=None, torch_dtype=torch.float16
)

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# Remove the following line if you do not have xformers installed;
# see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers
# for installation instructions.
pipe.enable_xformers_memory_efficient_attention()

pipe.enable_model_cpu_offload()

image = pipe("bird", image, num_inference_steps=20).images[0]

image.save('images/bird_canny_out.png')
```
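The call above is not seeded, so outputs will vary from run to run. If reproducibility matters, one option (not part of the original example) is to pass a seeded `torch.Generator`, continuing the script above:

```python
# Optional: seed the sampler so repeated runs produce the same image
# (`image` must still hold the Canny control image, not the generated output).
generator = torch.Generator(device="cpu").manual_seed(0)
out = pipe("bird", image, num_inference_steps=20, generator=generator).images[0]
```
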

![bird](./images/bird.png)

![bird_canny](./images/bird_canny.png)

![bird_canny_out](./images/bird_canny_out.png)

### Training

The canny edge model was trained on 3M edge-image/caption pairs. The model was trained for 600 GPU-hours on Nvidia A100 80GB GPUs, using Stable Diffusion 1.5 as a base model.

### Blog post

For more information, please also have a look at the [official ControlNet Blog Post](https://huggingface.co/blog/controlnet).
config.json
ADDED
@@ -0,0 +1,41 @@
{
  "_class_name": "ControlNetModel",
  "_diffusers_version": "0.14.0.dev0",
  "act_fn": "silu",
  "attention_head_dim": 8,
  "block_out_channels": [
    320,
    640,
    1280,
    1280
  ],
  "class_embed_type": null,
  "conditioning_embedding_out_channels": [
    16,
    32,
    96,
    256
  ],
  "controlnet_conditioning_channel_order": "rgb",
  "cross_attention_dim": 768,
  "down_block_types": [
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "CrossAttnDownBlock2D",
    "DownBlock2D"
  ],
  "downsample_padding": 1,
  "flip_sin_to_cos": true,
  "freq_shift": 0,
  "in_channels": 4,
  "layers_per_block": 2,
  "mid_block_scale_factor": 1,
  "norm_eps": 1e-05,
  "norm_num_groups": 32,
  "num_class_embeds": null,
  "only_cross_attention": false,
  "projection_class_embeddings_input_dim": null,
  "resnet_time_scale_shift": "default",
  "upcast_attention": false,
  "use_linear_projection": false
}
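The config above can also be fetched and inspected without downloading the weights; a small sketch, assuming the `load_config` classmethod that `diffusers` models inherit from `ConfigMixin`:

```python
# Sketch: fetch and inspect the model config without downloading the weights.
from diffusers import ControlNetModel

config = ControlNetModel.load_config("lllyasviel/sd-controlnet-canny")
print(config["cross_attention_dim"])  # 768, matching SD 1.x text-encoder embeddings
print(config["block_out_channels"])   # [320, 640, 1280, 1280]
```
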
diffusion_pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e5220c53e942eb3bbe08a92e58e7ea789be44590172103d757dba29d732772b2
size 1445254969
diffusion_pytorch_model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e19821a00e6d1817b37286a21d5c4f8915076949b0e81846c4f92c96ffb46db7
size 1445157124
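Both weight files are Git LFS pointers: the repository stores only the `oid sha256` and `size`, while the actual payload lives in LFS storage. A minimal sketch for checking a downloaded file against its pointer:

```python
# Sketch: verify a downloaded weight file against its Git LFS pointer metadata.
import hashlib
import os

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

path = "diffusion_pytorch_model.safetensors"
assert os.path.getsize(path) == 1445157124
assert sha256_of(path) == "e19821a00e6d1817b37286a21d5c4f8915076949b0e81846c4f92c96ffb46db7"
```
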
images/bird.png
ADDED
images/bird_canny.png
ADDED
images/bird_canny_out.png
ADDED
sd.png
ADDED