Kubanychbek Emil committed on
Commit
0021031
1 Parent(s): ef939f3

Update README.md

Files changed (1)
  1. README.md +23 -13
README.md CHANGED
@@ -14,26 +14,36 @@ inference: true
  Basically, it is just a converted version of [Lykon/AnyLoRA](https://huggingface.co/Lykon/AnyLoRA/tree/main).
  This model was created by **[Lykon](https://civitai.com/user/Lykon)** from Civitai. All credit goes to him. Thanks for creating this wonderful model.
 
- ### Description from original author:
- I made this model to ensure my future LoRA training is compatible with newer models, plus to get a model with a style neutral enough to get accurate styles with any style LoRA. Training on this model is much more effective compared to NAI, so in the end you might want to adjust the weight or offset (I suspect that's because NAI is now much diluted in newer models). I usually find good results at 0.65 weight, which I later offset to 1.
-
- This is **good for inference** (again, especially with styles) even if I made it mainly for training. It ended up being **super good for generating pics and it's now my go-to anime model**. It also uses very little VRAM.
-
- The first version I'm uploading is fp16-pruned with no baked VAE, which is less than 2 GB, meaning you can get up to 6 epochs in the same batch on a Colab.
-
- Just make sure you use CLIP skip 2 and booru-style tags when training. Remember to use a good VAE when generating, or images will look desaturated. I suggest the WD VAE or FT-MSE. Or you can use the baked-VAE version.
-
  Examples | Examples | Examples
  ---- | ---- | ----
  ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/5c162e30-f848-41da-b746-c51ccbf0e700/width=400/337388) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1d7cb65e-b723-4792-a71b-baa445ac3400/width=400/337386) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/cee6944f-fc61-462f-32d3-5480e197c600/width=400/337385)
  ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ccce9da5-9077-4f75-8b5c-22fd9bddef00/width=400/337383) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/41dc9f97-b60d-47b3-b31e-bc32fc3a0e00/width=400/337382) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/de78e3e2-ab32-4e2d-3539-a85aa1b2d200/width=400/337381)
- ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/330513d8-1759-4715-391a-e2a94aa2f700/width=400/337379) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/44b0d532-3103-45af-ae26-2b896cd37000/width=400/337378) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/8af7cb06-5bba-41e7-8f6d-aab930d02c00/width=400/337376)
-
- -------
+ -------
 
+ ### Description from original author:
+ I made this model to ensure my future LoRA training is compatible with newer models, plus to get a model with a style neutral enough to get accurate styles with any style LoRA. Training on this model is much more effective compared to NAI, so in the end you might want to adjust the weight or offset (I suspect that's because NAI is now much diluted in newer models). I usually find good results at 0.65 weight, which I later offset to 1.
+
+ This is **good for inference** (again, especially with styles) even if I made it mainly for training. It ended up being **super good for generating pics and it's now my go-to anime model**. It also uses very little VRAM.
+
+ The first version I'm uploading is fp16-pruned with no baked VAE, which is less than 2 GB, meaning you can get up to 6 epochs in the same batch on a Colab.
+
+ Just make sure you use CLIP skip 2 and booru-style tags when training. Remember to use a good VAE when generating, or images will look desaturated. I suggest the WD VAE or FT-MSE. Or you can use the baked-VAE version.
+
+ ### My Personal Opinion:
+ It is **the best anime diffusion model I have seen so far**. You need to try it; it produces ultra-realistic images and is highly compatible with LoRAs. Thanks a lot, Lykon; your model is great!
+ Just compare these images, and you can instantly see the difference in quality:
+ **AnyLORA Model** | AnythingV4 Model
+ ---- | ----
+ ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/b9387484-6f0c-4bb3-b2db-35969bc02900/width=600/358259) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/257ab024-a77b-41e6-2f0d-036bc0de4d00/width=600/358258)
+ ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/fc460959-b641-4e07-5da6-e07080c9ac00/width=600/358256) | ![](https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/f478f0fc-488c-45ad-05d4-59f9c6641000/width=600/358257)
+
+ Prompt: makima \(chainsaw man\), best quality, ultra detailed, 1girl, solo, victory hand sign, standing, red hair, long braided hair, bright eyes, bangs, medium breasts, white shirt, necktie, stare, smile, (evil:1.2), looking at viewer, (interview:1.3), (dark background, chains:1.3)
+ Negative Prompt: (worst quality, low quality:1.4), border, frame, (large breasts:1.4), watermark, signature
+ Guidance Scale = 7, 9
+ Width, Height = 600, 800
+ Steps = 25
+ Seed = 77777
+ LoRA weight: 0.6
 
  ## 🧨 Diffusers
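
The sample settings added in this commit map directly onto the 🧨 Diffusers text-to-image API. Below is a minimal sketch of that mapping; the repo id `Lykon/AnyLoRA` is an assumption (the commit does not state the id of this converted checkpoint), so substitute the actual model id when running it.

```python
# Hypothetical sketch: running the README's sample settings through
# 🧨 Diffusers. The model id is an assumption, not taken from this commit.

# Sample settings copied from the README above.
settings = {
    "prompt": (
        r"makima \(chainsaw man\), best quality, ultra detailed, 1girl, solo, "
        r"victory hand sign, standing, red hair, long braided hair, bright eyes, "
        r"bangs, medium breasts, white shirt, necktie, stare, smile, (evil:1.2), "
        r"looking at viewer, (interview:1.3), (dark background, chains:1.3)"
    ),
    "negative_prompt": (
        "(worst quality, low quality:1.4), border, frame, "
        "(large breasts:1.4), watermark, signature"
    ),
    "guidance_scale": 7,  # the README lists both 7 and 9
    "width": 600,
    "height": 800,
    "steps": 25,
    "seed": 77777,
}


def generate(settings, model_id="Lykon/AnyLoRA"):
    """Run one generation with the settings above (needs a GPU and diffusers)."""
    # Imported lazily so the settings dict can be inspected without torch installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    generator = torch.Generator("cuda").manual_seed(settings["seed"])
    result = pipe(
        settings["prompt"],
        negative_prompt=settings["negative_prompt"],
        guidance_scale=settings["guidance_scale"],
        width=settings["width"],
        height=settings["height"],
        num_inference_steps=settings["steps"],
        generator=generator,
        clip_skip=2,  # README recommends CLIP skip 2; needs diffusers >= 0.21
    )
    return result.images[0]
```

Note that Stable Diffusion requires width and height to be multiples of 8, which the 600 × 800 settings satisfy.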