---
license: cc-by-nc-sa-4.0
---
Pre-trained models and output samples of ControlNet-LLLite from bdsqlsz.
Inference with ComfyUI: https://github.com/kohya-ss/ControlNet-LLLite-ComfyUI
For 1111's Web UI, [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) extension supports ControlNet-LLLite.
Training: https://github.com/kohya-ss/sd-scripts/blob/sdxl/docs/train_lllite_README.md
The recommended preprocessor for the AnimeFaceSegment model is [Anime-Face-Segmentation](https://github.com/siyeong0/Anime-Face-Segmentation).
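For the other control types in this card (MLSD, Normal, lineart_anime_denoise), the sample control images can be reproduced with standard ControlNet preprocessors. Below is a minimal sketch using the `controlnet_aux` package, as one assumed option; the built-in preprocessors of the web UIs linked above work just as well:

```python
# pip install controlnet_aux pillow
from controlnet_aux import LineartAnimeDetector, MLSDdetector, NormalBaeDetector
from PIL import Image

src = Image.open("source.png")  # placeholder input image

# Each detector downloads its annotator weights from the lllyasviel/Annotators repo.
mlsd = MLSDdetector.from_pretrained("lllyasviel/Annotators")
normal_bae = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")
lineart_anime = LineartAnimeDetector.from_pretrained("lllyasviel/Annotators")

mlsd(src).save("mlsd.png")                    # straight-line (MLSD) map
normal_bae(src).save("normal_bae.png")        # Bae normal map
lineart_anime(src).save("lineart_anime.png")  # anime lineart map
```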
# Models
## Trained on anime model
AnimeFaceSegment, Normal, T2i-Color/Shuffle, lineart_anime_denoise, recolor_luminance
Base model: [Kohaku-XL](https://civitai.com/models/136389?modelVersionId=150441)
MLSD
Base model: [ProtoVision XL - High Fidelity 3D](https://civitai.com/models/125703?modelVersionId=144229)
# Samples
## AnimeFaceSegmentV1
![source 1](./sample/00000-1254802172.png) ![sample 1-1](./sample/00153-1415397694.png)
![sample 1-2](./sample/00155-541628598.png) ![sample 1-3](./sample/00156-3563138011.png)
![source 2](./sample/00013-1254802185.png) ![sample 2-1](./sample/00157-172216875.png)
![sample 2-2](./sample/00161-125697048.png) ![sample 2-3](./sample/00163-3802019239.png)
## AnimeFaceSegmentV2
![source 1](./sample/00015-882327104.png)
![sample 1](./sample/grid-0000-656896882.png)
![source 2](./sample/00081-882327170.png)
![sample 2](./sample/grid-0000-2857388239.png)
## MLSDV2
![source 1](./sample/0-73.png)
![preprocess 1](./sample/mlsd-0000.png)
![sample 1](./sample/grid-0001-496872924.png)
![source 2](./sample/0-151.png)
![preprocess 2](./sample/mlsd-0001.png)
![sample 2](./sample/grid-0002-906633402.png)
## Normal
![source 1](./sample/test.png)
![preprocess 1](./sample/normal_bae-0004.png)
![sample 1](./sample/grid-0007-2668683255.png)
![source 2](./sample/zelda_rgba.png)
![preprocess 2](./sample/normal_bae-0005.png)
![sample 2](./sample/grid-0008-2191923130.png)
## T2i-Color/Shuffle
![source 1](./sample/sample_0_525_c9a3a20fa609fe4bbf04.png)
![preprocess 1](./sample/color-0008.png)
![sample 1](./sample/grid-0017-751452001.jpg)
![source 2](./sample/F8LQ75WXoAETQg3.jpg)
![preprocess 2](./sample/color-0009.png)
![sample 2](./sample/grid-0018-2976518185.jpg)
## Lineart_Anime_Denoise
![source 1](./sample/20230826131545.png)
![preprocess 1](./sample/lineart_anime_denoise-1308.png)
![sample 1](./sample/grid-0028-1461058306.png)
![source 2](./sample/Snipaste_2023-08-10_23-33-53.png)
![preprocess 2](./sample/lineart_anime_denoise-1309.png)
![sample 2](./sample/grid-0030-1612754720.png)
## Recolor_Luminance
![source 1](./sample/F8LQ75WXoAETQg3.jpg)
![preprocess 1](./sample/recolor_luminance-0014.png)
![sample 1](./sample/grid-0060-2359545755.png)
![source 2](./sample/Snipaste_2023-08-15_02-38-05.png)
![preprocess 2](./sample/recolor_luminance-0016.png)
![sample 2](./sample/grid-0061-448628292.png)
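As an aside, the recolor_luminance control image is essentially a luminance map of the source. A minimal OpenCV sketch that produces a comparable map (an assumption about how the preprocessor works, not its exact implementation; file names are placeholders):

```python
import cv2

# Extract the L (lightness) channel in LAB space as a stand-in for the
# recolor_luminance control image; the actual preprocessor may normalize
# or scale the result differently.
img = cv2.imread("source.png")               # placeholder input
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)   # BGR -> LAB
cv2.imwrite("recolor_luminance.png", lab[:, :, 0])
```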
## Canny
![source 1](./sample/Snipaste_2023-08-10_23-33-53.png)
![preprocess 1](./sample/canny-0034.png)
![sample 1](./sample/grid-0100-2599077425.png)
![source 2](./sample/00021-210474367.jpeg)
![preprocess 2](./sample/canny-0021.png)
![sample 2](./sample/grid-0084-938772089.png)
## DW_OpenPose
![preprocess 1](./sample/dw_openpose_full-0015.png)
![sample 1](./sample/grid-0015-4163265662.png)
![preprocess 2](./sample/dw_openpose_full-0030.png)
![sample 2](./sample/grid-0030-2839828192.png)
## Tile_Anime
![source 1](./sample/03476-424776255.png)
![sample 1](./sample/grid-0008-3461355229.png)
![sample 2](./sample/grid-0015-4163265662.png)
![sample 3](./sample/00094-188618111.png)
Unlike the other models, the tile model needs a brief explanation of its usage.
In general, the tile model has three uses (a minimal API sketch follows this section):
1. Without any prompt, it can directly reproduce the approximate look of the reference image while slightly reworking local details; this makes it useful for V2V (Figure 2).
2. At a weight of 0.55~0.75, it keeps the original composition and pose while accepting changes from prompts and LoRAs (Figure 3).
3. Combined with upscaling, it adds detail to each tile while keeping the result consistent across tiles (Figure 4).
Because the training dataset was generated with an anime model, repainting realistic photographic styles does not work well yet; this will have to wait for the final version.
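As a concrete illustration of uses 1 and 2, here is a minimal sketch that runs a tile pass through the A1111 web UI API with the sd-webui-controlnet extension (start the UI with `--api`). The payload fields follow that extension's documented JSON format, but verify them against your installed version; the model file name is a placeholder:

```python
import base64
import requests

with open("reference.png", "rb") as f:  # placeholder reference image
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "",   # use 1: empty prompt just restores the reference
    "steps": 20,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": image_b64,
                "model": "bdsqlsz_controlllite_xl_tile_anime",  # placeholder name
                "weight": 0.65,  # use 2: 0.55~0.75 lets prompts/LoRAs take effect
            }]
        }
    },
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
print("generated", len(r.json()["images"]), "image(s)")
```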