GreenBitAI/Llama-3.2-3B-Instruct-layer-mix-bpw-4.0-mlx

This quantized low-bit model, GreenBitAI/Llama-3.2-3B-Instruct-layer-mix-bpw-4.0-mlx, was converted to MLX format from GreenBitAI/Llama-3.2-3B-Instruct-layer-mix-bpw-4.0 using gbx-lm version 0.3.5. Refer to the original model card for more details on the model.

Use with mlx

pip install gbx-lm

from gbx_lm import load, generate

# Load the 4-bit layer-mix quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("GreenBitAI/Llama-3.2-3B-Instruct-layer-mix-bpw-4.0-mlx")

# Generate a completion for a simple prompt; verbose=True streams the output as it is produced.
response = generate(model, tokenizer, prompt="hello", verbose=True)
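Since this is an Instruct model, prompts are typically formatted with the Llama 3.2 chat template before generation. The following is a minimal sketch assuming gbx-lm mirrors mlx-lm, so that the tokenizer returned by load exposes the underlying Hugging Face apply_chat_template method; verify against your installed gbx-lm version.

from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Llama-3.2-3B-Instruct-layer-mix-bpw-4.0-mlx")

# Build a chat-formatted prompt (assumes apply_chat_template is available on the tokenizer).
messages = [{"role": "user", "content": "Explain 4-bit quantization in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)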
