GGUF Q8_0 quant

#8
by SporkySporkness - opened

I have been trying to quantize the model to Q8_0, because the base Flux.1 dev Q8_0 works very well, giving nearly identical results to fp16.
However, I have never quantized before, and I did not succeed with AWPortrait-FL. Could you upload a quantized version?
Thank you so much

Shakker Labs org

Let's take a look.

Shakker Labs org

Here is a tutorial on how to convert the model to GGUF:

https://github.com/city96/ComfyUI-GGUF/tree/main/tools
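For reference, the workflow in the linked tutorial is roughly a two-step process: first convert the safetensors checkpoint to an FP16 GGUF file, then quantize it with a patched `llama-quantize` build from llama.cpp. A rough sketch of the commands is below; exact script names, flags, and output filenames may differ depending on the repo version, so treat this as an outline rather than an exact recipe (the file paths here are placeholders):

```shell
# Step 1 (assumed): convert the safetensors model to an FP16 GGUF
# using the convert script from ComfyUI-GGUF/tools.
python convert.py --src AWPortrait-FL.safetensors

# Step 2 (assumed): quantize the FP16 GGUF to Q8_0 with a
# llama-quantize binary built from the patched llama.cpp the
# tutorial describes (a stock build may reject image-model tensors).
./llama-quantize AWPortrait-FL-F16.gguf AWPortrait-FL-Q8_0.gguf Q8_0
```

Q8_0 is a good first target since it is the least lossy of the common quant types, which matches the near-fp16 results reported for the base Flux.1 dev Q8_0.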

I've tried ComfyUI-GGUF and stable-diffusion.cpp, but I still can't make it all the way through :(

Update: I've finally managed to get it working!
GGUF quants available at https://huggingface.co/SporkySporkness/AWPortrait-FL-GGUF/

Shakker Labs org

Thank you!
