# localAi/gallery/llava.yaml
---
name: "llava"
config_file: |
  backend: llama-cpp
  context_size: 4096
  f16: true
  mmap: true
  roles:
    user: "USER:"
    assistant: "ASSISTANT:"
    system: "SYSTEM:"
  template:
    chat: |
      A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
      {{.Input}}
      ASSISTANT: