DreamShaper XL Turbo for ONNX Runtime CUDA provider

Introduction

This repository hosts optimized versions of DreamShaper XL Turbo for accelerated inference with the ONNX Runtime CUDA execution provider.
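Before loading the model, you can confirm that the CUDA execution provider is available in your onnxruntime-gpu installation:

import onnxruntime as ort

# 'CUDAExecutionProvider' should appear in this list on a working GPU setup.
print(ort.get_available_providers())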

The models are generated by Olive with a command like the following:

python stable_diffusion_xl.py --provider cuda --optimize --model_id Lykon/dreamshaper-xl-turbo
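The optimized model can then be loaded for GPU inference. Below is a minimal sketch using Hugging Face Optimum; it assumes the exported folder layout is compatible with ORTStableDiffusionXLPipeline, that optimum and onnxruntime-gpu are installed, and the step/guidance values are illustrative:

from optimum.onnxruntime import ORTStableDiffusionXLPipeline

# Load the ONNX pipeline on the CUDA execution provider (assumed layout).
pipeline = ORTStableDiffusionXLPipeline.from_pretrained(
    "tlwu/dreamshaper-xl-turbo-onnxruntime",
    provider="CUDAExecutionProvider",
)

# Turbo variants are distilled for few denoising steps and low guidance;
# the exact values here are illustrative, not prescribed by this repository.
image = pipeline(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=6,
    guidance_scale=2.0,
).images[0]
image.save("astronaut.png")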