
Gemma 2 2B quantized for wllama (under 2 GB).

q4_0_4_8 is much faster when using llama.cpp directly; with wllama, it performs about the same as q4_k.

Format: GGUF
Model size: 2.61B params
Architecture: gemma2

Available quantizations: 2-bit, 4-bit, 5-bit


Model tree for Fishfishfishfishfish/Gemma-2-2B_wllama_gguf

Base model: google/gemma-2-2b (this model is a quantized version of it)