---
license: mit
---
# Diffusers API of Transparent Image Layer Diffusion using Latent Transparency
Create transparent images with Diffusers!
![corgi](result_sdxl.png)
Please check the GitHub repo [here](https://github.com/rootonchair/diffuser_layerdiffuse).
This is a port of the original [SD Webui's Layer Diffusion](https://github.com/layerdiffusion/sd-forge-layerdiffuse) extension to Diffusers, so you can generate transparent images with your favorite API.
Paper: [Transparent Image Layer Diffusion using Latent Transparency](https://arxiv.org/abs/2402.17113)
## Quickstart
Generate transparent images with SD1.5 models. In this example, we will use [digiplay/Juggernaut_final](https://huggingface.co/digiplay/Juggernaut_final) as the base model.
```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
import torch

from diffusers import StableDiffusionPipeline

from models import TransparentVAEDecoder
from loaders import load_lora_to_unet

if __name__ == "__main__":
    # Download the transparent VAE decoder weights
    model_path = hf_hub_download(
        'LayerDiffusion/layerdiffusion-v1',
        'layer_sd15_vae_transparent_decoder.safetensors',
    )

    # Load the base model's VAE and attach the transparent decoder weights
    vae_transparent_decoder = TransparentVAEDecoder.from_pretrained("digiplay/Juggernaut_final", subfolder="vae", torch_dtype=torch.float16).to("cuda")
    vae_transparent_decoder.set_transparent_decoder(load_file(model_path))

    # Build the SD1.5 pipeline with the transparent VAE decoder
    pipeline = StableDiffusionPipeline.from_pretrained("digiplay/Juggernaut_final", vae=vae_transparent_decoder, torch_dtype=torch.float16, safety_checker=None).to("cuda")

    # Download and inject the transparent attention weights into the UNet
    model_path = hf_hub_download(
        'LayerDiffusion/layerdiffusion-v1',
        'layer_sd15_transparent_attn.safetensors'
    )
    load_lora_to_unet(pipeline.unet, model_path, frames=1)

    # Generate a transparent image
    image = pipeline(prompt="a dog sitting in room, high quality",
                     width=512, height=512,
                     num_images_per_prompt=1, return_dict=False)[0]
```
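With `return_dict=False`, the pipeline returns a tuple whose first element is a list of PIL images. A minimal sketch of saving the result, assuming the transparent VAE decoder produces output with an alpha channel as in the repo's examples (the filename is just an illustration):

```python
# `image` is a list of PIL images; save as PNG to preserve the alpha channel.
# Assumes the transparent VAE decoder returned RGBA output; formats like JPEG
# would discard the transparency.
image[0].save("transparent_dog.png")
```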