Linaqruf committed
Commit d9dfb8f · 1 parent: c881db0

Update README.md

Files changed (1): README.md (+7 -6)
README.md CHANGED
@@ -1,5 +1,6 @@
 ---
 license: creativeml-openrail-m
+thumbnail: "https://huggingface.co/Linaqruf/anything-v3-1/resolve/main/example-images/thumbnail.png"
 language:
 - en
 pipeline_tag: text-to-image
@@ -24,11 +25,11 @@ library_name: diffusers
 
 # Anything V3.1
 
-![Anime Girl](https://huggingface.co/Linaqruf/anything-v3-1/resolve/main/example_images/thumbnail.png)
+![Anime Girl](https://huggingface.co/Linaqruf/anything-v3-1/resolve/main/example-images/thumbnail.png)
 
-Anything V3.1, A third party continuation of a latent diffusion model, Anything V3.0. This model is claimed to be a better version of Anything V3.0 with fixed VAE model and fixed CLIP position id key, CLIP reference taken from Stable Diffusion V1.5. VAE Swapped using Kohya's `merge-vae` script and CLIP fixed using Arena's `stable-diffusion-model-toolkit` webui extensions.
+Anything V3.1 is a third-party continuation of a latent diffusion model, Anything V3.0. This model is claimed to be a better version of Anything V3.0 with a fixed VAE model and a fixed CLIP position id key. The CLIP reference was taken from Stable Diffusion V1.5. The VAE was swapped using Kohya's merge-vae script and the CLIP was fixed using Arena's stable-diffusion-model-toolkit webui extensions.
 
-Anything V3.2, supposed to be a resume training of Anything V3.1. The current model is fine-tuned with a learning rate of `2.0e-6`, 50 epochs and 4 batch sizes on the datasets collected from many sources, with 1/4 of them are synthetic dataset. Dataset has been preprocessed using [Aspect Ratio Bucketing Tool](https://github.com/NovelAI/novelai-aspect-ratio-bucketing) so that it can be converted to latents and trained at non-square resolutions. This model supposed to be a test model to see how clip fix affect training. Like other anime-style Stable Diffusion models, it also supports Danbooru tags to generate images.
+Anything V3.2 is supposed to be a resume training of Anything V3.1. The current model has been fine-tuned with a learning rate of 2.0e-6, 50 epochs, and 4 batch sizes on datasets collected from many sources, with 1/4 of them being synthetic datasets. The dataset has been preprocessed using the Aspect Ratio Bucketing Tool so that it can be converted to latents and trained at non-square resolutions. This model is supposed to be a test model to see how the clip fix affects training. Like other anime-style Stable Diffusion models, it also supports Danbooru tags to generate images.
 
 e.g. **_1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden_**
 
@@ -98,9 +99,9 @@ image.save("anime_girl.png")
 
 Here is some cherrypicked samples and comparison between available models
 
-![Anime Girl](https://huggingface.co/Linaqruf/anything-v3-1/resolve/main/example_images/1girl.png)
-![Anime Boy](https://huggingface.co/Linaqruf/anything-v3-1/resolve/main/example_images/1boy.png)
-![Aesthetic](https://huggingface.co/Linaqruf/anything-v3-1/resolve/main/example_images/aesthetic.png)
+![Anime Girl](https://huggingface.co/Linaqruf/anything-v3-1/resolve/main/example-images/1girl.png)
+![Anime Boy](https://huggingface.co/Linaqruf/anything-v3-1/resolve/main/example-images/1boy.png)
+![Aesthetic](https://huggingface.co/Linaqruf/anything-v3-1/resolve/main/example-images/aesthetic.png)
 
 ## License
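For context: the README this commit edits carries `library_name: diffusers` and a hunk header ending in `image.save("anime_girl.png")`, and the card's example prompt is a comma-separated list of Danbooru tags. A minimal sketch of how that prompt might be assembled and used — the model id and the pipeline call follow the standard diffusers API but are assumptions, not taken from this diff:

```python
# Assemble the Danbooru-tag prompt shown in the model card.
tags = [
    "1girl", "white hair", "golden eyes", "beautiful eyes", "detail",
    "flower meadow", "cumulonimbus clouds", "lighting", "detailed sky", "garden",
]
prompt = ", ".join(tags)  # Danbooru-style models read tags joined by commas

# Inference would then look roughly like this (not executed here;
# "Linaqruf/anything-v3-1" as the repo id is an assumption):
#   from diffusers import StableDiffusionPipeline
#   pipe = StableDiffusionPipeline.from_pretrained("Linaqruf/anything-v3-1")
#   image = pipe(prompt).images[0]
#   image.save("anime_girl.png")
print(prompt)
```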