same license as sdxl-turbo
README.md CHANGED
````diff
@@ -1,5 +1,8 @@
 ---
-
+pipeline_tag: text-to-image
+license: other
+license_name: sai-nc-community
+license_link: https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE.TXT
 base_model: stabilityai/sdxl-turbo
 language:
 - en
@@ -11,7 +14,6 @@ tags:
 - text-to-image
 ---
 
-
 # Stable Diffusion XL Turbo for ONNX Runtime
 
 ## Introduction
@@ -24,7 +26,7 @@ See the [usage instructions](#usage-example) for how to run the SDXL pipeline wi
 
 - **Developed by:** Stability AI
 - **Model type:** Diffusion-based text-to-image generative model
-- **License:** [
+- **License:** [STABILITY AI NON-COMMERCIAL RESEARCH COMMUNITY LICENSE](https://huggingface.co/stabilityai/sd-turbo/blob/main/LICENSE)
 - **Model Description:** This is a conversion of the [SDXL-Turbo](https://huggingface.co/stabilityai/sdxl-turbo) model for [ONNX Runtime](https://github.com/microsoft/onnxruntime) inference with CUDA execution provider.
 
 The VAE decoder is converted from [sdxl-vae-fp16-fix](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix). There are slight discrepancies between its output and that of the original VAE, but the decoded images should be [close enough for most purposes](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/discussions/7#64c5c0f8e2e5c94bd04eaa80).
@@ -72,17 +74,17 @@ git clone https://github.com/microsoft/onnxruntime
 cd onnxruntime
 ```
 
-If you want to try canny control net,
-```shell
-git checkout canny_control_net
-```
-
 2. Download the SDXL ONNX files from this repo
 ```shell
 git lfs install
 git clone https://huggingface.co/tlwu/sdxl-turbo-onnxruntime
 ```
 
+If you want to try the canny control net, get the model from a branch:
+```shell
+git checkout canny_control_net
+```
+
 3. Launch the docker
 ```shell
 docker run --rm -it --gpus all -v $PWD:/workspace nvcr.io/nvidia/pytorch:23.10-py3 /bin/bash
````