---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- qlora
- dto
- mlx
base_model:
- mistralai/Mistral-7B-v0.1
---
# mlx-community/garten2-7b-4bit-mlx
This model was converted to MLX format from [`senseable/garten2-7b`](https://huggingface.co/senseable/garten2-7b).
Refer to the [original model card](https://huggingface.co/senseable/garten2-7b) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the 4-bit quantized weights and tokenizer from the Hub
model, tokenizer = load("mlx-community/garten2-7b-4bit-mlx")

# Generate a completion; verbose=True prints the output as it is produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
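
If you prefer the command line, the `mlx-lm` package also installs a generation script. A minimal sketch, assuming the `mlx_lm.generate` entry point and its `--model`/`--prompt`/`--max-tokens` flags behave as in current `mlx-lm` releases:

```bash
# Generate text from the 4-bit model via the mlx-lm CLI (flags assumed from mlx-lm)
python -m mlx_lm.generate \
  --model mlx-community/garten2-7b-4bit-mlx \
  --prompt "hello" \
  --max-tokens 100
```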