GGUF quantized versions of HuggingFaceTB/SmolLM2-1.7B-Instruct
- `Q2_K`: SmolLM2-1.7B-Instruct-Q2_K.gguf
- `Q3_K_S`: SmolLM2-1.7B-Instruct-Q3_K_S.gguf
- `Q3_K_M`: SmolLM2-1.7B-Instruct-Q3_K_M.gguf
- `Q3_K_L`: SmolLM2-1.7B-Instruct-Q3_K_L.gguf
- `Q4_0`: SmolLM2-1.7B-Instruct-Q4_0.gguf
- `Q4_K_S`: SmolLM2-1.7B-Instruct-Q4_K_S.gguf
- `Q4_K_M`: SmolLM2-1.7B-Instruct-Q4_K_M.gguf
- `Q5_0`: SmolLM2-1.7B-Instruct-Q5_0.gguf
- `Q5_K_S`: SmolLM2-1.7B-Instruct-Q5_K_S.gguf
- `Q5_K_M`: SmolLM2-1.7B-Instruct-Q5_K_M.gguf
- `Q6_K`: SmolLM2-1.7B-Instruct-Q6_K.gguf
- `Q8_0`: SmolLM2-1.7B-Instruct-Q8_0.gguf
- `IQ3_M_IMAT`: SmolLM2-1.7B-Instruct-IQ3_M_imat.gguf
- `IQ3_XXS_IMAT`: SmolLM2-1.7B-Instruct-IQ3_XXS_imat.gguf
- `Q4_K_M_IMAT`: SmolLM2-1.7B-Instruct-Q4_K_M_imat.gguf
- `Q4_K_S_IMAT`: SmolLM2-1.7B-Instruct-Q4_K_S_imat.gguf
- `IQ4_NL_IMAT`: SmolLM2-1.7B-Instruct-IQ4_NL_imat.gguf
- `IQ4_XS_IMAT`: SmolLM2-1.7B-Instruct-IQ4_XS_imat.gguf
- `Q5_K_M_IMAT`: SmolLM2-1.7B-Instruct-Q5_K_M_imat.gguf
- `Q5_K_S_IMAT`: SmolLM2-1.7B-Instruct-Q5_K_S_imat.gguf

Run with llama.cpp, replacing `MODEL_FILE` with one of the filenames above:

```bash
# CLI:
llama-cli --hf-repo medmekk/SmolLM2-1.7B-Instruct.GGUF --hf-file MODEL_FILE -p "Your prompt"
# Server:
llama-server --hf-repo medmekk/SmolLM2-1.7B-Instruct.GGUF --hf-file MODEL_FILE -c 2048
```
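The filenames in the list above follow one pattern: model name, quantization label, and a `.gguf` extension, with the importance-matrix (`*_IMAT`) variants using a lowercase `_imat` suffix in the filename. A small sketch of deriving the `--hf-file` argument from a quant label (the `gguf_filename` helper is hypothetical, not part of this repo):

```python
# Build the --hf-file argument for a given quantization label,
# matching the file list above.
MODEL = "SmolLM2-1.7B-Instruct"

def gguf_filename(quant: str, model: str = MODEL) -> str:
    """Return the .gguf filename for a quant label, e.g. 'Q4_K_M' or 'IQ4_NL_IMAT'."""
    # Importance-matrix quants are labeled *_IMAT but named *_imat on disk.
    if quant.endswith("_IMAT"):
        quant = quant[: -len("_IMAT")] + "_imat"
    return f"{model}-{quant}.gguf"

print(gguf_filename("Q4_K_M"))       # SmolLM2-1.7B-Instruct-Q4_K_M.gguf
print(gguf_filename("IQ4_NL_IMAT"))  # SmolLM2-1.7B-Instruct-IQ4_NL_imat.gguf
```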