---
base_model: jpacifico/Chocolatine-2-14B-Instruct-v2.0.3
datasets:
- jpacifico/french-orca-dpo-pairs-revised
language:
- fr
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- french
- chocolatine
- llama-cpp
---
# Chocolatine-2-14B-Instruct-v2.0.3-Q8_0-GGUF
Quantized Q8_0 GGUF version of the original model [`Chocolatine-2-14B-Instruct-v2.0.3`](https://huggingface.co/jpacifico/Chocolatine-2-14B-Instruct-v2.0.3).
It can run on CPU-only devices, is compatible with [llama.cpp](https://github.com/ggerganov/llama.cpp), and its architecture is supported by [LM Studio](https://lmstudio.ai/).
### Ollama
First install [Ollama](https://ollama.com/), then create and run the model:
```bash
ollama create chocolatine-2 -f Modelfile_chocolatine-2-q8
ollama run chocolatine-2
```
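Once the model has been created, it can also be queried through Ollama's local REST API (default port 11434) instead of the interactive CLI. A minimal sketch using only the Python standard library; the payload fields follow Ollama's `/api/generate` endpoint, and the model name `chocolatine-2` assumes the `ollama create` command above was used as-is:

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("chocolatine-2", "Bonjour, qui es-tu ?")
# Sending the request requires a running Ollama server:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```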
Example Ollama *Modelfile*:
```
FROM ./chocolatine-2-14b-instruct-v2.0.3-q8_0.gguf
TEMPLATE """
{{- if .Suffix }}<|fim_prefix|>{{ .Prompt }}<|fim_suffix|>{{ .Suffix }}<|fim_middle|>
{{- else if .Messages }}
{{- if or .System .Tools }}<|im_start|>system
{{- if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}
# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{{- range .Tools }}
{"type": "function", "function": {{ .Function }}}
{{- end }}
</tools>

For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
{{- end }}<|im_end|>
{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{ else if eq .Role "assistant" }}<|im_start|>assistant
{{ if .Content }}{{ .Content }}
{{- else if .ToolCalls }}<tool_call>
{{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{ end }}</tool_call>
{{- end }}{{ if not $last }}<|im_end|>
{{ end }}
{{- else if eq .Role "tool" }}<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{ end }}
{{- if and (ne .Role "assistant") $last }}<|im_start|>assistant
{{ end }}
{{- end }}
{{- else }}
{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ end }}{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}
"""
SYSTEM """Tu es Chocolatine, un assistant IA serviable et bienveillant. Tu fais des réponses concises et précises."""
```
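For reference, the non-tool branch of the template above (the final `{{- else }}` block) renders a plain system + user exchange into ChatML-style text before generation begins. A minimal Python sketch of that rendering; the `render_chatml` helper is illustrative only, not part of any library:

```python
def render_chatml(system: str, prompt: str) -> str:
    """Mimic the template's plain-prompt branch: system and user turns
    wrapped in <|im_start|>/<|im_end|> tokens, then an open assistant turn."""
    parts = []
    if system:
        parts.append(f"<|im_start|>system\n{system}<|im_end|>\n")
    if prompt:
        parts.append(f"<|im_start|>user\n{prompt}<|im_end|>\n")
    # The model generates its answer after this opening tag.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

text = render_chatml(
    "Tu es Chocolatine, un assistant IA serviable et bienveillant.",
    "Bonjour !",
)
print(text)
```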
### Limitations
The Chocolatine-2 model series is a quick demonstration that a base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanism.
- **Developed by:** Jonathan Pacifico, 2025
- **Model type:** LLM
- **Language(s) (NLP):** French, English
- **License:** Apache-2.0
Made with ❤️ in France