---
language:
- en
thumbnail: "https://staticassetbucket.s3.us-west-1.amazonaws.com/avatar_grid.png"
tags:
- dreambooth
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
---
# Dreambooth style: Avatar
__Dreambooth finetuning of Stable Diffusion (v1.5.1) on the Avatar art style by [Lambda Labs](https://lambdalabs.com/).__
## About
Put in a text prompt and generate your own Avatar style image!
If you want to find out how to train your own Dreambooth style, see this example (link lambda blog).
![avatar_grid.png](https://staticassetbucket.s3.us-west-1.amazonaws.com/avatar_grid.png)
## Usage
To run the model locally, first install the dependencies:
```bash
pip install diffusers accelerate torchvision "transformers>=4.21.0" ftfy tensorboard modelcards
```
```python
import torch
from diffusers import StableDiffusionPipeline
from torch import autocast
pipe = StableDiffusionPipeline.from_pretrained("lambdalabs/dreambooth-avatar", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "Yoda, avatarart style person"
scale = 7.5
n_samples = 4
# Sometimes the NSFW checker is confused by the Avatar images; you can disable
# it at your own risk here
disable_safety = False

if disable_safety:
    def null_safety(images, **kwargs):
        return images, False
    pipe.safety_checker = null_safety

with autocast("cuda"):
    images = pipe(n_samples * [prompt], guidance_scale=scale).images

for idx, im in enumerate(images):
    im.save(f"{idx:06}.png")
```
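To preview all of the samples at once, similar to the grid image at the top of this card, you can tile them with a small helper. The `image_grid` function below is just an illustrative utility (not part of the model or the diffusers API) and assumes the `images` list produced above.
```python
from PIL import Image

def image_grid(imgs, rows, cols):
    # Tile a list of equally sized PIL images into a single rows x cols grid.
    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid

# e.g. lay the 4 samples from the snippet above out in a single row
grid = image_grid(images, rows=1, cols=4)
grid.save("avatar_grid.png")
```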
## Model description
Trained on 512x512 Avatar character images using 2xA6000 GPUs on [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud) for around 30,000 steps (about 1 hour, at a cost of about $2).
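The exact training command is not part of this card. As a rough sketch, a comparable run with the `train_dreambooth.py` example script from the [diffusers](https://github.com/huggingface/diffusers) repository might look like the following; the base checkpoint, data directory, and hyperparameters are placeholder assumptions, and only the 512x512 resolution, the `avatarart style` instance token, and the ~30,000 steps come from this card.
```bash
export MODEL_NAME="runwayml/stable-diffusion-v1-5"  # placeholder base checkpoint
export INSTANCE_DIR="./avatar-images"               # folder of 512x512 Avatar character images
export OUTPUT_DIR="./dreambooth-avatar"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="avatarart style person" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=30000
```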
## Links
- [Lambda Diffusers](https://github.com/LambdaLabsML/lambda-diffusers)
- [Model weights in Diffusers format](https://huggingface.co/lambdalabs/dreambooth-avatar)
- [Naruto diffusers repo](https://github.com/eolecvk/naruto-sd)
Trained by Eole Cervenka