DO NOT use yet. The model does not currently load in llama.cpp; loading fails with the error `llama_model_load: error loading model: create_tensor: tensor 'output.weight' not found`.
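For reference, a minimal sketch of how the failure can be reproduced with a local llama.cpp build; the binary name `llama-cli` and the GGUF filename below are assumptions, not taken from this repository:

```bash
# Attempt to load the GGUF file with llama.cpp's CLI.
# "model-q4_0.gguf" is a placeholder filename; substitute the file from this repo.
./llama-cli -m ./model-q4_0.gguf -p "Hello" -n 16
# Currently aborts during model load with:
#   llama_model_load: error loading model: create_tensor: tensor 'output.weight' not found
```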

Model details:
- Format: GGUF
- Model size: 2.51B params
- Architecture: llama
- Quantization: 4-bit
