---
library_name: transformers
tags:
- Deepseek
- Ghomala
- Français
- Bandjoun
- Cameroun
license: llama3.2
datasets:
- stfotso/french-ghomala-bandjoun
- stfotso/granite-french-ghomala-bandjoun
language:
- fr
- bbj
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
# Model Card for llama-3.2-tuned-french-ghomala-bandjoun-1B
Translates sentences from French into Ghomala, the native language of Bandjoun, a Cameroonian village.
Example:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
MAX_TOKENS = 256
tokenizer = AutoTokenizer.from_pretrained("stfotso/llama-3.2-tuned-french-ghomala-bandjoun-1B")
model = AutoModelForCausalLM.from_pretrained("stfotso/llama-3.2-tuned-french-ghomala-bandjoun-1B")
test_sentence = "bonjour Adam"
print(test_sentence)
system_prompt = """
1. You are a helpful specialist in linguistic, especially african language and you are required to provide the rightfull translation of a french expression into the ghomala language, the native language of bandjoun, a village of Cameroon.
2. Your ghomala translation should use correct phonetic signs.
"""
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "Sentence (in french): <s>vieil homme</s>"},
{"role": "assistant", "content": "Sentence (in ghomala): <s>bvo</s>"},
{"role": "user", "content": f"Sentence (in french): <s>{test_sentence}</s>"}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# the chat template already prepends the BOS token, so skip special tokens here
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids
outputs = model.generate(
    inputs,
    max_length=MAX_TOKENS,
    tokenizer=tokenizer,  # required when using stop_strings
    do_sample=True,
    temperature=0.5,
    top_p=1,
    top_k=50,
    stop_strings=["Sentence (in french)", "</s>"],
    pad_token_id=tokenizer.eos_token_id,
)
# decode only the newly generated tokens, not the prompt
generated_text = tokenizer.batch_decode(outputs[:, inputs.shape[1]:])[0]
print(f'generated text: {generated_text}')
```
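Following the few-shot format above, the model wraps its answer in the same `Sentence (in ghomala): <s>…</s>` template, so the raw generation usually needs post-processing. A minimal sketch of extracting just the translation (the `extract_translation` helper and the sample string below are illustrative, not part of the model's API):

```python
import re

def extract_translation(generated_text: str) -> str:
    """Pull the Ghomala translation out of the model's wrapped reply.

    The prompt format places the answer between <s> and </s>, so we
    take the first such span and fall back to the raw text if the
    generation was cut off before the closing tag.
    """
    match = re.search(r"<s>(.*?)</s>", generated_text, re.DOTALL)
    return match.group(1).strip() if match else generated_text.strip()

# Example with a reply shaped like the model's output:
sample = "Sentence (in ghomala): <s>bvo</s>"
print(extract_translation(sample))  # → bvo
```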
## Model Details
### Model Description
- **Developed by:** Steve TUENO
- **License:** llama3.2
- **Finetuned from model:** meta-llama/Llama-3.2-1B-Instruct