How to use it with the llama.cpp HTTP server?

#7
by XavierCorbier

Hello,

When I use it with the llama.cpp HTTP server, I get a completely different description of the image.
I suspect the model isn't receiving the base64-encoded image.

My command:
./llama-server -m ../../../../models/llava-llama-3-8b-v1_1-int4.gguf --port 8080 --mmproj ../../../../models/llava-llama-3-8b-v1_1-mmproj-f16.gguf -c 4096

My code:

from openai import OpenAI

# llama-server exposes an OpenAI-compatible API on the port passed via --port
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="",  # llama-server hosts a single model, so the name can stay empty
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"{image_data}",
                    },
                },
            ],
        }
    ],
    stream=True,
)
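In case it matters, image_data is constructed roughly like this (a minimal sketch; example.jpg and the image/jpeg MIME type are placeholders for my actual file):

import base64

# Read a local image and wrap it as a data URI; the OpenAI-style
# image_url field expects this format rather than bare base64.
with open("example.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")
image_data = f"data:image/jpeg;base64,{encoded}"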

Has anyone else run into this problem?
