---
inference: false
license: apache-2.0
---
# Model Card
📖 [Technical report](https://arxiv.org/abs/2402.11530) | 🏠 [Code](https://github.com/BAAI-DCAI/Bunny) | 🐰 [3B Demo](https://wisemodel.cn/spaces/baai/Bunny) | 🐰 [8B Demo](https://2e09fec5116a0ba343.gradio.live)
This is the **GGUF** format of [Bunny-Llama-3-8B-V](https://huggingface.co/BAAI/Bunny-Llama-3-8B-V).
Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Llama-3-8B, Phi-1.5, StableLM-2, Qwen1.5, MiniCPM and Phi-2. To compensate for the decrease in model size, we construct more informative training data through curated selection from a broader data source.
We provide Bunny-Llama-3-8B-V, which is built upon [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) and [Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). More details about this model can be found on [GitHub](https://github.com/BAAI-DCAI/Bunny).
![comparison](comparison.png)
# Quickstart
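The examples below assume the GGUF files are in your working directory. One way to fetch them is with `huggingface-cli`; this is a minimal sketch, and the repository id `BAAI/Bunny-Llama-3-8B-V-gguf` is an assumption — adjust it to this repo's actual id if it differs.
```shell
# download the model weights and the vision projector (repo id is an assumption)
huggingface-cli download BAAI/Bunny-Llama-3-8B-V-gguf ggml-model-f16.gguf --local-dir .
huggingface-cli download BAAI/Bunny-Llama-3-8B-V-gguf mmproj-model-f16.gguf --local-dir .
# int4 variant of the weights
huggingface-cli download BAAI/Bunny-Llama-3-8B-V-gguf ggml-model-Q4_K_M.gguf --local-dir .
```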
## Chat by [`llama.cpp`](https://github.com/ggerganov/llama.cpp)
```shell
# sample images can be found in the images folder
# fp16
./llava-cli -m ggml-model-f16.gguf --mmproj mmproj-model-f16.gguf --image example_2.png -c 4096 -p "Why is the image funny?" --temp 0.0
# int4
./llava-cli -m ggml-model-Q4_K_M.gguf --mmproj mmproj-model-f16.gguf --image example_2.png -c 4096 -p "Why is the image funny?" --temp 0.0
```
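If you don't have `llava-cli` yet, a minimal build sketch follows. This assumes the older Makefile-based build of llama.cpp; newer versions have moved to CMake and may name the binary differently (e.g. `llama-llava-cli`), so check the project's build docs.
```shell
# clone llama.cpp and build the llava-cli example (Makefile-based build assumed)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make llava-cli
```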
## Chat by [ollama](https://ollama.com/)
```shell
# sample images can be found in the images folder
# fp16
ollama create Bunny-Llama-3-8B-V-fp16 -f ./ollama-f16
ollama run Bunny-Llama-3-8B-V-fp16 "example_2.png Why is the image funny?"
# int4
ollama create Bunny-Llama-3-8B-V-int4 -f ./ollama-Q4_K_M
ollama run Bunny-Llama-3-8B-V-int4 "example_2.png Why is the image funny?"
```
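`./ollama-f16` and `./ollama-Q4_K_M` above are Ollama Modelfiles shipped alongside the GGUF files, so prefer those. Purely as an illustration, a minimal Modelfile for the fp16 variant might look like the sketch below; the exact contents (including the second `FROM` line pointing Ollama at the vision projector, and any chat template) are assumptions, not the shipped files.
```
# hypothetical sketch of ./ollama-f16 — use the file shipped in this repo instead
FROM ./ggml-model-f16.gguf
# second FROM is assumed to register the vision projector for multimodal input
FROM ./mmproj-model-f16.gguf
PARAMETER temperature 0
```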