---
pipeline_tag: text-to-image
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
language:
- en
tags:
- stable-diffusion
- onnxruntime
- onnx
- text-to-image
---

# Stable Diffusion 1.5 for ONNX Runtime CUDA provider

## Introduction

This repository hosts optimized ONNX models of **Stable Diffusion 1.5** to accelerate inference with the ONNX Runtime CUDA execution provider on Nvidia GPUs. The models cannot run with other execution providers such as CPU or DirectML.

The models are generated by [Olive](https://github.com/microsoft/Olive/tree/main/examples/stable_diffusion) with a command like the following:

```
python stable_diffusion.py --provider cuda --optimize
```

## Model Description

- **Developed by:** Stability AI
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
- **Model Description:** This is a conversion of the [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model for [ONNX Runtime](https://github.com/microsoft/onnxruntime) inference with the CUDA execution provider.
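
## Usage

For a quick test of the exported pipeline, one option is Hugging Face Optimum's `ORTStableDiffusionPipeline` with the CUDA execution provider. The sketch below is illustrative, not the officially supported path: it assumes `optimum[onnxruntime-gpu]` is installed, that the repository follows the standard diffusers ONNX layout expected by Optimum, and `<this-repository-id>` is a placeholder to replace with the actual repository id.

```
# Minimal sketch (assumptions noted above): load the ONNX pipeline with the
# CUDA execution provider and generate one image.
# Requires: pip install optimum[onnxruntime-gpu], an Nvidia GPU, and CUDA/cuDNN
# versions compatible with the installed onnxruntime-gpu build.
from optimum.onnxruntime import ORTStableDiffusionPipeline

pipeline = ORTStableDiffusionPipeline.from_pretrained(
    "<this-repository-id>",  # placeholder: replace with this repository's id
    provider="CUDAExecutionProvider",
)

image = pipeline("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```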