GGUF quants of https://huggingface.co/black-forest-labs/FLUX.1-Depth-dev/, made following the instructions from https://github.com/city96/ComfyUI-GGUF/

Quantized using a fork (https://github.com/mhnakif/ComfyUI-GGUF/) to produce Q4_0, Q5_0, Q8_0, and F16 GGUF quants that are compatible with both ComfyUI and stable-diffusion-webui-forge.

Note that, as of 2024-11-21, Forge does not yet support Flux inpainting or ControlNet.
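As a quick placement sketch (the paths and the example filename are assumptions based on typical ComfyUI and Forge installs, not taken from this card), the downloaded `.gguf` files go into each UI's model directory:

```shell
# Hypothetical install paths and filename; adjust to your setup.
# ComfyUI (with the ComfyUI-GGUF custom node) loads GGUF unet files from models/unet:
mkdir -p ComfyUI/models/unet
# cp flux1-depth-dev-Q8_0.gguf ComfyUI/models/unet/

# stable-diffusion-webui-forge loads checkpoints from models/Stable-diffusion:
mkdir -p stable-diffusion-webui-forge/models/Stable-diffusion
# cp flux1-depth-dev-Q8_0.gguf stable-diffusion-webui-forge/models/Stable-diffusion/
```

After restarting the UI, the quant should appear in its model selector.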

Model size: 11.9B params
Architecture: flux
Available quants: 4-bit (Q4_0), 5-bit (Q5_0), 8-bit (Q8_0), 16-bit (F16)
