---
license: apache-2.0
tags:
- mlx
---

# GreenBitAI/Qwen-1.5-14B-Chat-layer-mix-bpw-2.5-mlx

This quantized low-bit model was converted to MLX format from [`GreenBitAI/Qwen-1.5-14B-Chat-layer-mix-bpw-2.5`](https://huggingface.co/GreenBitAI/Qwen-1.5-14B-Chat-layer-mix-bpw-2.5).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-14B-Chat-layer-mix-bpw-2.5) for more details on the model.

## Use with mlx

```bash
pip install gbx-lm
```

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Qwen-1.5-14B-Chat-layer-mix-bpw-2.5-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
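Since this is a chat-tuned model, prompts generally work best when wrapped in the Qwen chat template rather than passed as raw text. Below is a minimal sketch of that flow; it assumes the `gbx_lm` API mirrors mlx-lm, i.e. that the tokenizer wrapper exposes the standard `apply_chat_template` method and that `generate` accepts a `max_tokens` argument. Check the gbx-lm documentation if either differs.

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Qwen-1.5-14B-Chat-layer-mix-bpw-2.5-mlx")

# Wrap the user message in the model's chat template.
# apply_chat_template is assumed to be available on the tokenizer wrapper,
# as it is for mlx-lm-style tokenizers.
messages = [
    {"role": "user", "content": "Explain low-bit quantization in one sentence."}
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# max_tokens caps the response length; verbose streams tokens to stdout.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```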