omost-llama-3-8b-4bits is Omost's llama-3 model with 8k context length, quantized to 4-bit NF4.

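Since the checkpoint is stored pre-quantized in NF4 (the bitsandbytes 4-bit NormalFloat format), it can be loaded directly with `transformers`. Below is a minimal sketch, assuming the Hub repo id `lllyasviel/omost-llama-3-8b-4bits` and a CUDA GPU; the `BitsAndBytesConfig` shown mirrors a typical NF4 setup rather than Omost's exact loading code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Typical NF4 configuration for a pre-quantized 4-bit checkpoint
# (an assumed setup, not necessarily Omost's exact config).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

repo = "lllyasviel/omost-llama-3-8b-4bits"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=bnb_config,
    device_map="auto",
)

# Quick smoke test: generate a short continuation.
inputs = tokenizer("A cat sitting on a windowsill", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
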
Model size: 4.65B params (Safetensors)
Tensor types: BF16, F32, U8
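
The mixed tensor types are what you would expect from a bitsandbytes 4-bit export: the bulk of the weights are packed into uint8 storage, with bf16/f32 tensors left for norms and quantization state. A small sketch of how one might verify this locally with the `safetensors` library; the shard filename is hypothetical.

```python
from collections import Counter
from safetensors.torch import load_file

# Hypothetical path to one downloaded shard; point this at the actual
# .safetensors file(s) from the checkpoint.
state_dict = load_file("model-00001-of-00002.safetensors")

# Tally tensors by dtype; the uint8 entries are the packed 4-bit weights.
print(Counter(str(t.dtype) for t in state_dict.values()))
```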