IQ3_XS and IQ3_M missing in ollama deployment

#2
by deleted - opened

Thank you for your great work.
For some reason, in the "Use this model" > "Ollama" menu the IQ3_XS and IQ3_M quant .gguf files seem to be missing, although they are present in the files of the repo.

Is there any way to add them? They're great quants for a 16 GB VRAM GPU.

Thank you.
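
In the meantime, a possible workaround is to download the quant directly and register it with Ollama yourself. Below is a minimal, untested sketch; the repo id, filename, and local model name are placeholders, not the actual ones from this repo:

```python
# Sketch: fetch a specific quant from the Hub and register it with Ollama via a Modelfile.
# REPO_ID and FILENAME are placeholders -- substitute the real repo and .gguf file name.
import subprocess
from pathlib import Path

from huggingface_hub import hf_hub_download

REPO_ID = "someuser/some-model-GGUF"   # placeholder repo id
FILENAME = "some-model-IQ3_M.gguf"     # placeholder quant filename

# Download the GGUF file into the local Hugging Face cache.
gguf_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)

# Minimal Modelfile pointing Ollama at the downloaded file.
Path("Modelfile").write_text(f"FROM {gguf_path}\n")

# Register it locally; afterwards `ollama run some-model-iq3m` works as usual.
subprocess.run(["ollama", "create", "some-model-iq3m", "-f", "Modelfile"], check=True)
```

Ollama can also pull GGUF repos from the Hub directly with `ollama run hf.co/<user>/<repo>:<quant>`, which may work for the missing quants even though they don't appear in the dropdown.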

It's the best one that fits in 24 GB as well; weird that it's the only one missing.
