---
library_name: transformers
license: apache-2.0
datasets:
- HuggingFaceM4/the_cauldron
- HuggingFaceM4/Docmatix
pipeline_tag: image-text-to-text
language:
- en
base_model:
- HuggingFaceTB/SmolVLM-256M-Instruct
base_model_relation: quantized
tags:
- mlx
---

# moot20/SmolVLM-256M-Instruct-MLX-8bits

This model was converted to MLX format from [`HuggingFaceTB/SmolVLM-256M-Instruct`](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct) using mlx-vlm version **0.1.12**.

Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct) for more details on the model.

## Use with mlx

```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model moot20/SmolVLM-256M-Instruct-MLX-8bits --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
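
If you prefer calling the model from Python rather than the CLI, the sketch below uses the mlx-vlm Python API (`load`, `generate`, and the chat-template helpers) as shown in the mlx-vlm README. The image path and prompt are placeholders, and exact function signatures may vary slightly between mlx-vlm versions.

```python
# Minimal sketch: programmatic use via the mlx-vlm Python API.
# Assumes mlx-vlm is installed as shown above; signatures follow its README
# and may differ slightly across versions.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "moot20/SmolVLM-256M-Instruct-MLX-8bits"

# Download (if needed) and load the quantized weights plus the processor.
model, processor = load(model_path)
config = load_config(model_path)

# One or more local paths or URLs; <path_to_image> is a placeholder.
images = ["<path_to_image>"]
prompt = "Describe this image."

# Wrap the raw prompt in the model's chat template,
# declaring how many images accompany it.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(images))

# Run generation and print the caption.
output = generate(model, processor, formatted_prompt, images, verbose=False)
print(output)
```

Generation options such as the maximum number of new tokens and the sampling temperature can also be passed to `generate`; check the mlx-vlm documentation for the exact keyword names in your installed version.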