---
language:
- en
thumbnail: "https://staticassetbucket.s3.us-west-1.amazonaws.com/avatar_grid.png"
tags:
- dreambooth
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
---

# Avatar diffusion

__Stable Diffusion fine-tuned on Avatar by [Lambda Labs](https://lambdalabs.com/).__

## About

Put in a text prompt and generate your own Avatar-style image!

If you want to find out how to train your own Dreambooth style model, see this example (link lambda blog)

![avatar_grid.png](https://staticassetbucket.s3.us-west-1.amazonaws.com/avatar_grid.png)

## Usage

To run the model locally:

```bash
pip install diffusers accelerate torchvision "transformers>=4.21.0" ftfy tensorboard modelcards
```

```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("lambdalabs/dreambooth-avatar", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Yoda, avatarart style person"
scale = 7.5
n_samples = 4

# Sometimes the NSFW checker is confused by the Avatar images; you can disable
# it at your own risk here
disable_safety = False

if disable_safety:
    def null_safety(images, **kwargs):
        return images, False
    pipe.safety_checker = null_safety

with autocast("cuda"):
    images = pipe(n_samples * [prompt], guidance_scale=scale).images

for idx, im in enumerate(images):
    im.save(f"{idx:06}.png")
```
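
The saved samples can be tiled into a contact sheet like the thumbnail above using Pillow. A minimal sketch; the `image_grid` helper is our own, not part of `diffusers`:

```python
from PIL import Image

def image_grid(imgs, rows, cols):
    # Paste equally sized images into a rows x cols contact sheet.
    w, h = imgs[0].size
    grid = Image.new("RGB", (cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, ((i % cols) * w, (i // cols) * h))
    return grid

# e.g. tile the 4 samples generated above into a 2x2 grid:
# grid = image_grid(images, rows=2, cols=2)
# grid.save("avatar_grid.png")
```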

## Model description

Trained on 512x512 Avatar character images using 2xA6000 GPUs on [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud) for around 30,000 steps (about 1 hour, at a cost of about $2).
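
A run of this kind can be launched with the `train_dreambooth.py` example script from the `diffusers` repository. The sketch below is illustrative only: the dataset path, instance prompt, base model, and hyperparameters are assumptions, not the exact settings used for this model.

```bash
# Hypothetical invocation; paths and hyperparameters are illustrative
# assumptions, not the settings actually used to train this model.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --instance_data_dir="./avatar_images" \
  --instance_prompt="avatarart style person" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --max_train_steps=30000 \
  --output_dir="./dreambooth-avatar"
```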

## Links

- [Lambda Diffusers](https://github.com/LambdaLabsML/lambda-diffusers)
- [Model weights in Diffusers format](https://huggingface.co/lambdalabs/dreambooth-avatar)

Trained by Eole Cervenka