bean980310 committed
Update README.md

README.md CHANGED
```diff
@@ -46,7 +46,7 @@ from diffusers import StableDiffusionPipeline
 import torch
 
 pipe = StableDiffusionPipeline.from_pretrained(
-    "
+    "runwayml/stable-diffusion-v1-5",
     torch_dtype=torch.float16
 )
 pipe = pipe.to("cuda")
```
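For reference, here is a minimal, self-contained sketch of the snippet this hunk edits, assuming the `runwayml/stable-diffusion-v1-5` weights and a CUDA device are available; the prompt and output filename are illustrative and not part of the diff.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the v1-5 checkpoint in half precision, as in the updated README snippet.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Illustrative prompt and output path (not taken from the diff).
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```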
```diff
@@ -170,8 +170,8 @@ Currently six Stable Diffusion checkpoints are provided, which were trained as f
 filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
 - [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
 - [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
-- [`stable-diffusion-v1-5`](https://huggingface.co/
-- [`stable-diffusion-v1-5-inpainting`](https://huggingface.co/
+- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
+- [`stable-diffusion-v1-5-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and, in 25 % of cases, mask everything.
 
 - **Hardware:** 32 x 8 x A100 GPUs
 - **Optimizer:** AdamW
```
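The inpainting checkpoint added in the bullet above is loaded with diffusers' `StableDiffusionInpaintPipeline` rather than the text-to-image pipeline; a minimal sketch follows. The image paths and prompt are placeholders, and on older diffusers releases the `image` argument may instead be named `init_image`.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load the inpainting checkpoint; its extra UNet input channels take the
# encoded masked image and the mask itself.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Placeholder inputs: mask_image is white where content should be repainted
# and black where the original image should be kept.
init_image = Image.open("input.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
image.save("inpainted.png")
```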