Chocolatine-2-14B

DPO fine-tuning experiment on sometimesanotion/Lamarck-14B-v0.7 (14B parameters)
using the jpacifico/french-orca-dpo-pairs-revised RLHF dataset.
Training on French data also improves the model's performance in English.
Long-context support: up to a 128K-token context window, with generation of up to 8K tokens.
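For reference, the sketch below shows how a DPO run of this kind can be set up with the TRL library. It is a minimal illustration under assumptions, not the actual training script: the hyperparameters (beta, learning rate, batch size), the dtype, and the dataset column mapping are placeholders, and the exact DPOTrainer arguments vary between TRL versions.

# Minimal DPO fine-tuning sketch with TRL (illustrative; hyperparameters are assumptions)
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "sometimesanotion/Lamarck-14B-v0.7"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype="bfloat16")  # dtype assumed

# Preference pairs; DPOTrainer expects "prompt", "chosen" and "rejected" columns
# (rename/map the dataset columns if they differ).
dataset = load_dataset("jpacifico/french-orca-dpo-pairs-revised", split="train")

training_args = DPOConfig(
    output_dir="chocolatine-dpo",
    beta=0.1,                       # assumed DPO temperature
    per_device_train_batch_size=1,  # assumed; depends on available VRAM
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,     # "tokenizer=" in older TRL versions
)
trainer.train()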

OpenLLM Leaderboard

coming soon

MT-Bench

coming soon

Usage

You can run this model using my Colab notebook.

You can also run Chocolatine using the following code:

import transformers
from transformers import AutoTokenizer

new_model = "jpacifico/Chocolatine-2-14B-Instruct-v2.0b2"

# Format prompt with the model's chat template
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(new_model)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=new_model,
    tokenizer=tokenizer
)

# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_new_tokens=200,
)
print(sequences[0]['generated_text'])
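For a 14B-parameter model you will typically want to load the weights in half precision on GPU. The variant below is a sketch, not part of the original card: torch_dtype and device_map are standard pipeline options, but the right dtype and memory layout depend on your hardware.

import torch
import transformers

pipeline = transformers.pipeline(
    "text-generation",
    model=new_model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # or torch.float16, matching the FP16 checkpoint
    device_map="auto",           # spread the weights across available GPUs
)

Since the model can generate up to 8K tokens, max_new_tokens can be raised well above 200 for longer outputs.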

Limitations

The Chocolatine model series is a quick demonstration that a base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanism.

  • Developed by: Jonathan Pacifico, 2025
  • Model type: LLM
  • Language(s) (NLP): French, English
  • License: MIT
