wukong1688 committed on
Commit
34c4563
1 Parent(s): e78a8c4

Update README.md

Files changed (1)
  1. README.md +45 -44
README.md CHANGED
@@ -1,88 +1,89 @@
  ---
- license: openrail
  ---
 
- These are the pretrained weights and some other detector weights of ControlNet.
 
- See also: https://github.com/lllyasviel/ControlNet
 
- # Description of Files
 
- ControlNet/models/control_sd15_canny.pth
- - The ControlNet+SD1.5 model to control SD using Canny edge detection.
 
- ControlNet/models/control_sd15_depth.pth
- - The ControlNet+SD1.5 model to control SD using Midas depth estimation.
 
- ControlNet/models/control_sd15_hed.pth
- - The ControlNet+SD1.5 model to control SD using HED edge detection (soft edge).
 
- ControlNet/models/control_sd15_mlsd.pth
- - The ControlNet+SD1.5 model to control SD using M-LSD line detection (it will also work with the traditional Hough transform).
 
- ControlNet/models/control_sd15_normal.pth
- - The ControlNet+SD1.5 model to control SD using a normal map. It is best to use the normal map generated by that Gradio app. Other normal maps may also work as long as the direction is correct (left looks red, right looks blue, up looks green, down looks purple).
 
- ControlNet/models/control_sd15_openpose.pth
- - The ControlNet+SD1.5 model to control SD using OpenPose pose detection. Directly manipulating the pose skeleton should also work.
 
- ControlNet/models/control_sd15_scribble.pth
- - The ControlNet+SD1.5 model to control SD using human scribbles. The model is trained on boundary edges with very strong data augmentation to simulate boundary lines similar to those drawn by humans.
 
- ControlNet/models/control_sd15_seg.pth
- - The ControlNet+SD1.5 model to control SD using semantic segmentation. The protocol is ADE20k.
 
- ControlNet/annotator/ckpts/body_pose_model.pth
- - Third-party model: OpenPose's pose detection model.
 
- ControlNet/annotator/ckpts/hand_pose_model.pth
- - Third-party model: OpenPose's hand detection model.
 
- ControlNet/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt
- - Third-party model: Midas depth estimation model.
 
- ControlNet/annotator/ckpts/mlsd_large_512_fp32.pth
- - Third-party model: M-LSD detection model.
 
- ControlNet/annotator/ckpts/mlsd_tiny_512_fp32.pth
- - Third-party model: M-LSD's other, smaller detection model (we do not use this one).
 
- ControlNet/annotator/ckpts/network-bsds500.pth
- - Third-party model: HED boundary detection.
 
- ControlNet/annotator/ckpts/upernet_global_small.pth
- - Third-party model: Uniformer semantic segmentation.
 
- ControlNet/training/fill50k.zip
- - The data for our training tutorial.
 
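Each control model above consumes a preprocessed control image; for `control_sd15_canny.pth` that is a black-and-white edge map produced by a Canny detector (the ControlNet repo's annotator uses OpenCV for this). As a dependency-free illustration of the kind of 0/255 control image involved, here is a toy gradient-threshold sketch; `toy_edge_map` is a hypothetical helper for this README, not code from the repo, and a real Canny detector additionally smooths, thins, and hysteresis-thresholds the edges:

```python
# Toy edge map, for illustration only. The actual canny annotator in the
# ControlNet repo uses a proper Canny detector (cv2.Canny); this sketch just
# thresholds horizontal/vertical intensity differences to produce the kind
# of binary (0/255) control image that control_sd15_canny.pth expects.
def toy_edge_map(gray, threshold=32):
    """gray: 2D list of ints in [0, 255]; returns a same-sized 0/255 map."""
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = gray[y][min(x + 1, w - 1)] - gray[y][x]  # horizontal difference
            gy = gray[min(y + 1, h - 1)][x] - gray[y][x]  # vertical difference
            if abs(gx) + abs(gy) > threshold:
                edges[y][x] = 255
    return edges

# A tiny image with a vertical step from dark (0) to bright (200):
img = [[0, 0, 200, 200] for _ in range(4)]
edges = toy_edge_map(img)
print(edges[0])  # edge fires at the step boundary: [0, 255, 0, 0]
```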
- # Related Resources
 
- Special thanks to the great project [Mikubill's A1111 WebUI plugin](https://github.com/Mikubill/sd-webui-controlnet)!
 
- We also thank Hysts for making a [Gradio](https://github.com/gradio-app/gradio) demo in a [Hugging Face Space](https://huggingface.co/spaces/hysts/ControlNet), as well as the more than 65 models in that amazing [Colab list](https://github.com/camenduru/controlnet-colab)!
 
- We thank haofanwang for making [ControlNet-for-Diffusers](https://github.com/haofanwang/ControlNet-for-Diffusers)!
 
- We also thank all the authors who made ControlNet demos, including but not limited to [fffiloni](https://huggingface.co/spaces/fffiloni/ControlNet-Video), [other-model](https://huggingface.co/spaces/hysts/ControlNet-with-other-models), [ThereforeGames](https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/7784), [RamAnanth1](https://huggingface.co/spaces/RamAnanth1/ControlNet), and others!
 
- # Misuse, Malicious Use, and Out-of-Scope Use
-
- The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive, or content that propagates historical or current stereotypes.
 
 
  ---
+ license: openrail
+ base_model:
+ - black-forest-labs/FLUX.1-dev
  ---
 
+ These are the pretrained weights and some other detector weights of ControlNet.
 
+ See also: https://github.com/lllyasviel/ControlNet
 
+ # Description of Files
 
+ ControlNet/models/control_sd15_canny.pth
+ - The ControlNet+SD1.5 model to control SD using Canny edge detection.
 
+ ControlNet/models/control_sd15_depth.pth
+ - The ControlNet+SD1.5 model to control SD using Midas depth estimation.
 
+ ControlNet/models/control_sd15_hed.pth
+ - The ControlNet+SD1.5 model to control SD using HED edge detection (soft edge).
 
+ ControlNet/models/control_sd15_mlsd.pth
+ - The ControlNet+SD1.5 model to control SD using M-LSD line detection (it will also work with the traditional Hough transform).
 
+ ControlNet/models/control_sd15_normal.pth
+ - The ControlNet+SD1.5 model to control SD using a normal map. It is best to use the normal map generated by that Gradio app. Other normal maps may also work as long as the direction is correct (left looks red, right looks blue, up looks green, down looks purple).
 
+ ControlNet/models/control_sd15_openpose.pth
+ - The ControlNet+SD1.5 model to control SD using OpenPose pose detection. Directly manipulating the pose skeleton should also work.
 
+ ControlNet/models/control_sd15_scribble.pth
+ - The ControlNet+SD1.5 model to control SD using human scribbles. The model is trained on boundary edges with very strong data augmentation to simulate boundary lines similar to those drawn by humans.
 
+ ControlNet/models/control_sd15_seg.pth
+ - The ControlNet+SD1.5 model to control SD using semantic segmentation. The protocol is ADE20k.
 
+ ControlNet/annotator/ckpts/body_pose_model.pth
+ - Third-party model: OpenPose's pose detection model.
 
+ ControlNet/annotator/ckpts/hand_pose_model.pth
+ - Third-party model: OpenPose's hand detection model.
 
+ ControlNet/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt
+ - Third-party model: Midas depth estimation model.
 
+ ControlNet/annotator/ckpts/mlsd_large_512_fp32.pth
+ - Third-party model: M-LSD detection model.
 
+ ControlNet/annotator/ckpts/mlsd_tiny_512_fp32.pth
+ - Third-party model: M-LSD's other, smaller detection model (we do not use this one).
 
+ ControlNet/annotator/ckpts/network-bsds500.pth
+ - Third-party model: HED boundary detection.
 
+ ControlNet/annotator/ckpts/upernet_global_small.pth
+ - Third-party model: Uniformer semantic segmentation.
 
+ ControlNet/training/fill50k.zip
+ - The data for our training tutorial.
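The normal-map color convention stated above (left looks red, right looks blue, up looks green, down looks purple) can be sanity-checked with a tiny encoder. The mapping below is a plausible encoding chosen only because it reproduces those four colors; it is an assumption for illustration, not the exact formula used by the repo's annotator:

```python
# Hypothetical helper (not code from the ControlNet repo): encodes a
# screen-space normal direction (nx, ny) into RGB in a way that matches
# the stated convention: left -> red, right -> blue, up -> green, down -> purple.
def normal_to_rgb(nx, ny):
    """nx, ny in [-1, 1]; +nx points right, +ny points up."""
    r = (1.0 - nx) / 2.0   # leftward normals push red up
    g = (1.0 + ny) / 2.0   # upward normals push green up
    b = (1.0 + nx) / 2.0   # rightward normals push blue up
    return (r, g, b)

def dominant_hue(rgb):
    """Rough hue label, just to sanity-check the convention."""
    r, g, b = rgb
    if g >= r and g >= b and g > 0.5:
        return "green"
    if r > 0.5 and b <= 0.5:
        return "red"
    if b > 0.5 and r <= 0.5:
        return "blue"
    if r >= 0.5 and b >= 0.5 and g < 0.5:
        return "purple"
    return "other"

# The four cardinal directions from the description above:
print(dominant_hue(normal_to_rgb(-1.0, 0.0)))  # left  -> red
print(dominant_hue(normal_to_rgb( 1.0, 0.0)))  # right -> blue
print(dominant_hue(normal_to_rgb( 0.0, 1.0)))  # up    -> green
print(dominant_hue(normal_to_rgb( 0.0,-1.0)))  # down  -> purple
```

A quick check like this is useful when feeding `control_sd15_normal.pth` a normal map from another tool: if a left-facing surface does not render reddish, the map's axes are likely flipped.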
 
+ # Related Resources
 
+ Special thanks to the great project [Mikubill's A1111 WebUI plugin](https://github.com/Mikubill/sd-webui-controlnet)!
 
+ We also thank Hysts for making a [Gradio](https://github.com/gradio-app/gradio) demo in a [Hugging Face Space](https://huggingface.co/spaces/hysts/ControlNet), as well as the more than 65 models in that amazing [Colab list](https://github.com/camenduru/controlnet-colab)!
 
+ We thank haofanwang for making [ControlNet-for-Diffusers](https://github.com/haofanwang/ControlNet-for-Diffusers)!
 
+ We also thank all the authors who made ControlNet demos, including but not limited to [fffiloni](https://huggingface.co/spaces/fffiloni/ControlNet-Video), [other-model](https://huggingface.co/spaces/hysts/ControlNet-with-other-models), [ThereforeGames](https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/7784), [RamAnanth1](https://huggingface.co/spaces/RamAnanth1/ControlNet), and others!
 
+ # Misuse, Malicious Use, and Out-of-Scope Use
 
+ The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive, or content that propagates historical or current stereotypes.