
Some GGUF quants of ParasiticRogue/Merged-Vicuna-RP-Stew-34B-GGUF

See the original model card for details. I generally don't go below 4-bit myself, but if you really need a smaller quant and can't get one any other way, let me know.

| File | Size |
|---|---|
| Merged-Vicuna-RP-Stew-34B.IQ4_NL.gguf | 19G |
| Merged-Vicuna-RP-Stew-34B.Q4_K_M.gguf | 20G |
| Merged-Vicuna-RP-Stew-34B.Q5_K_M.gguf | 23G |
| Merged-Vicuna-RP-Stew-34B.Q6_K.gguf | 27G |
| Merged-Vicuna-RP-Stew-34B.Q8_0.gguf | 35G |
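As a rough sanity check on the table above, you can convert each file size into an approximate average bits-per-weight figure. This is only a sketch: it assumes the "G" sizes are GiB (as `ls -lh` reports them) and uses the 34.4B parameter count, ignoring the small amount of non-weight metadata a GGUF file also carries.

```python
# Approximate bits per weight for each quant, assuming sizes are GiB
# and the model has 34.4B parameters (both taken from this card).
PARAMS = 34.4e9

def bits_per_weight(size_gib: float, params: float = PARAMS) -> float:
    """Average bits per parameter implied by a GGUF file's size."""
    return size_gib * 2**30 * 8 / params

quants = {"IQ4_NL": 19, "Q4_K_M": 20, "Q5_K_M": 23, "Q6_K": 27, "Q8_0": 35}
for name, gib in quants.items():
    print(f"{name}: ~{bits_per_weight(gib):.2f} bits/weight")
```

The numbers land close to the nominal bit widths of each quant type (e.g. Q4_K_M comes out near 5 bits/weight because the K-quant formats mix block scales and higher-precision tensors in with the 4-bit weights).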

Format: GGUF · Model size: 34.4B params · Architecture: llama

