
GGUF importance matrix (imatrix) quants for https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B

Layers: 62
Context: 16384
Template:
<s>system
{instructions}
<s>human
{prompt}
<s>bot
{response}<|end▁of▁sentence|>
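
Below is a minimal sketch of loading one of these quants with llama-cpp-python and applying the template above. The file name `codefuse-deepseek-33b.Q4_K_M.gguf`, the system message, and the sampling settings are assumptions for illustration, not part of this repository.

```python
from llama_cpp import Llama

# Assumed file name -- substitute whichever quant you downloaded from this repo.
llm = Llama(
    model_path="codefuse-deepseek-33b.Q4_K_M.gguf",
    n_ctx=16384,       # matches the context length listed above
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows; use 0 for CPU-only
)

# Build the prompt from the template above, leaving {response} for the model to fill.
prompt = (
    "<s>system\n"
    "You are a helpful coding assistant.\n"
    "<s>human\n"
    "Write a Python function that reverses a string.\n"
    "<s>bot\n"
)

out = llm(prompt, max_tokens=256, stop=["<|end▁of▁sentence|>"])
print(out["choices"][0]["text"])
```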