clementchadebec committed: Update README.md

README.md (changed)
<p align="center">
  <img style="width:400px;" src="images/raccoon.png">
</p>

# Combining Flash Diffusion with Existing LoRAs 🎨

FlashSDXL can also be combined with existing LoRAs to unlock few-step generation in a **training-free** manner. It can be plugged straight into Hugging Face Diffusers pipelines. See the example below.

```python
from diffusers import DiffusionPipeline, LCMScheduler
import torch

flash_lora_id = "jasperai/flash-sdxl"

# Load Pipeline
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    variant="fp16"
)

# Set scheduler
pipe.scheduler = LCMScheduler.from_config(
    pipe.scheduler.config
)

# Load LoRAs
pipe.load_lora_weights(flash_lora_id, adapter_name="flash")
pipe.load_lora_weights("TheLastBen/Papercut_SDXL", adapter_name="paper")

pipe.set_adapters(["flash", "paper"], adapter_weights=[1.0, 1.0])
pipe.to(device="cuda", dtype=torch.float16)

prompt = "papercut, a cute corgi"

image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
```
<p align="center">
  <img style="width:400px;" src="images/corgi.jpg">
</p>

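The balance between the acceleration LoRA and the style LoRA can be tuned through the adapter weights. The short sketch below is not part of the original example: it assumes `pipe` has already been built as above with the "flash" and "paper" adapters loaded, and the 0.8 style weight and the fixed seed are arbitrary illustrative choices.

```python
import torch

# Re-balance the adapters: keep the "flash" adapter at 1.0 so few-step
# sampling still works, and lower the style adapter (0.8 is an example value).
pipe.set_adapters(["flash", "paper"], adapter_weights=[1.0, 0.8])

# Fix the seed so the effect of the new weights can be compared run to run.
generator = torch.Generator(device="cuda").manual_seed(0)

image = pipe(
    "papercut, a cute corgi",
    num_inference_steps=4,
    guidance_scale=0,
    generator=generator,
).images[0]
image.save("corgi_papercut.png")
```

The same pattern extends to other SDXL-compatible LoRAs: load each one with `load_lora_weights` under its own `adapter_name` and list it in `set_adapters` alongside "flash".
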
# Training Details

The model was trained for 20k iterations on 4 H100 GPUs (approximately 176 GPU hours of training in total). Please refer to the [paper](http://arxiv.org/abs/2406.02347) for further parameter details.