How to use it

  1. Get the code:
git clone https://github.com/HimariO/llama.cpp.git
cd llama.cpp
git switch qwen2-vl
  2. Edit the Makefile to add the llama-qwen2vl-cli target (e.g. nano Makefile):
diff --git a/Makefile b/Makefile
index 8a903d7e..51403be2 100644
--- a/Makefile
+++ b/Makefile
@@ -1485,6 +1485,14 @@ libllava.a: examples/llava/llava.cpp \
     $(OBJ_ALL)
     $(CXX) $(CXXFLAGS) -static -fPIC -c $< -o $@ -Wno-cast-qual
 
+llama-qwen2vl-cli: examples/llava/qwen2vl-cli.cpp \
+	examples/llava/llava.cpp \
+	examples/llava/llava.h \
+	examples/llava/clip.cpp \
+	examples/llava/clip.h \
+	$(OBJ_ALL)
+	$(CXX) $(CXXFLAGS) $< $(filter-out %.h $<,$^) -o $@ $(LDFLAGS) -Wno-cast-qual
+
 llama-llava-cli: examples/llava/llava-cli.cpp \
     examples/llava/llava.cpp \
     examples/llava/llava.h \
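Since the diff above adds an ordinary Makefile target, the new CLI can also be built directly with make once the edit is in place (a minimal sketch; the binary is written to the repository root):
make llama-qwen2vl-cli -j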
  3. Build with CUDA (the original flag TCNN_CUDA_ARCHITECTURES belongs to tiny-cuda-nn; llama.cpp uses CMAKE_CUDA_ARCHITECTURES):
cmake . -DGGML_CUDA=ON -DCMAKE_CUDA_COMPILER=$(which nvcc) -DCMAKE_CUDA_ARCHITECTURES=61
make -j35
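If no NVIDIA GPU is available, a CPU-only build should also work (a sketch assuming the qwen2-vl branch registers the same llama-qwen2vl-cli target with cmake; with an out-of-tree build the binary lands under ./build/bin instead of ./bin):
cmake -B build
cmake --build build --target llama-qwen2vl-cli -j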
  4. Run:
./bin/llama-qwen2vl-cli -m ./thomas-yanxin/Qwen2-VL-7B-GGUF/Qwen2-VL-7B-GGUF-Q4_K_M.gguf --mmproj ./thomas-yanxin/Qwen2-VL-7B-GGUF/qwen2vl-vision.gguf -p "Describe the image" --image "./thomas-yanxin/Qwen2-VL-7B-GGUF/1.png"
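If the model files are not yet present locally, they can be fetched from this repo with huggingface-cli (a sketch; --local-dir is chosen to match the paths used in the command above):
huggingface-cli download thomas-yanxin/Qwen2-VL-7B-GGUF --local-dir ./thomas-yanxin/Qwen2-VL-7B-GGUF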