clementchadebec committed
Commit e69a6ee · verified · 1 Parent(s): 96bb784

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -12,11 +12,11 @@ license: cc-by-nc-nd-4.0
 
 
 Flash Diffusion is a diffusion distillation method proposed in [ADD ARXIV]() *by Clément Chadebec, Onur Tasar, Eyal Benaroche, and Benjamin Aubin.*
-This model is a **26.4M** LoRA distilled version of [SDXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) model that is able to generate images in **4 steps**. The main purpose of this model is to reproduce the main results of the paper.
+This model is a **108M** LoRA distilled version of [SDXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) model that is able to generate images in **4 steps**. The main purpose of this model is to reproduce the main results of the paper.
 
 
 <p align="center">
-  <img style="width:700px;" src="images/hf_grid.png">
+  <img style="width:700px;" src="images/flash_sdxl.jpg">
 </p>
 
 # How to use?
@@ -53,7 +53,7 @@ image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
 </p>
 
 # Training Details
-The model was trained for 20k iterations on 4 H100 GPUs (representing approximately 176 hours of training). Please refer to the [paper]() for further parameters details.
+The model was trained for 20k iterations on 4 H100 GPUs (representing approximately a total of 176 GPU hours of training). Please refer to the [paper]() for further parameters details.
 
 **Metrics on COCO 2014 validation (Table 3)**
 - FID-10k: 21.62 (4 NFE)
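The second hunk header above quotes the README's 4-step sampling call. For context, the sketch below shows one way such a call is typically set up with 🤗 Diffusers: load the SDXL base model, swap in a few-step scheduler, attach the distilled LoRA, and sample with 4 steps and guidance disabled. The LoRA repository id (`jasperai/flash-sdxl`) and the choice of `LCMScheduler` are assumptions for illustration and are not stated in the diff itself.

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Load the SDXL base model referenced in the README.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)

# Few-step scheduler (assumed choice; the diff does not specify one).
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Attach the distilled LoRA; the repository id below is hypothetical.
pipe.load_lora_weights("jasperai/flash-sdxl")
pipe.fuse_lora()
pipe.to("cuda")

prompt = "A raccoon reading a book in a lush forest."

# 4 inference steps with guidance disabled, matching the call quoted in the
# hunk header: num_inference_steps=4, guidance_scale=0.
image = pipe(prompt, num_inference_steps=4, guidance_scale=0).images[0]
```

Setting `guidance_scale=0` turns off classifier-free guidance, which few-step distilled models typically do not require at inference time.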