Gemma 2 Swahili Collection
Gemma 2 Swahili is a family of lightweight, state-of-the-art Swahili variants of Gemma 2 models.
Gemma2-27B-Swahili-IT is a state-of-the-art open variant of Google's Gemma2-27B-IT model, fine-tuned for natural Swahili language understanding and generation. It was trained with Quantized Low-Rank Adaptation (QLoRA), which trains small low-rank adapter matrices on top of a 4-bit-quantized base model, keeping fine-tuning efficient while preserving performance.
The model was fine-tuned on a comprehensive Swahili dataset and is designed for natural Swahili language understanding and generation tasks.
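As a rough illustration of the training setup, a QLoRA fine-tune of this kind might be configured as sketched below, using the peft and bitsandbytes integrations in transformers. The base checkpoint, LoRA rank, alpha, dropout, and target modules here are illustrative assumptions, not the actual training configuration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4; its weights stay frozen during training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-27b-it",
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized model for training (gradient checkpointing, dtype fixes)
base_model = prepare_model_for_kbit_training(base_model)

# Attach small trainable low-rank adapters; hyperparameters are assumptions
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapters receive gradients
```

Only the adapter matrices are trained while the 4-bit base stays frozen, which is what makes fine-tuning at 27B scale tractable on limited hardware. For inference, the released model can be loaded with the same 4-bit quantization: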
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Configure 4-bit NF4 quantization so the 27B model fits in modest GPU memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("alfaxadeyembe/gemma2-27b-swahili-it")
model = AutoModelForCausalLM.from_pretrained(
    "alfaxadeyembe/gemma2-27b-swahili-it",
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Always set to eval mode for inference
model.eval()

# Example prompt: "Explain the concept of the digital economy and its
# importance in today's world."
prompt = "Eleza dhana ya uchumi wa kidijitali na umuhimu wake katika ulimwengu wa leo."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=500,
        do_sample=True,
        temperature=0.7,
        top_p=0.95,
    )

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
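Gemma 2 instruction-tuned checkpoints are trained on a turn-based chat format, so prompts can also be built with the tokenizer's chat template (assuming this fine-tune keeps the base model's template). Continuing from the loading code above:

```python
# Build the prompt with the chat template (Gemma uses user/model turns, no system role)
messages = [
    # "Write a short story about technology."
    {"role": "user", "content": "Andika hadithi fupi kuhusu teknolojia."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(
        input_ids,
        max_new_tokens=300,
        do_sample=True,
        temperature=0.7,
        top_p=0.95,
    )

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```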
To cite this model:

```bibtex
@misc{gemma2-27b-swahili-it,
  author    = {Alfaxad Eyembe},
  title     = {Gemma2-27B-Swahili-IT: Swahili Variation of Gemma2-27b-it Model},
  year      = {2025},
  publisher = {Hugging Face},
  journal   = {Hugging Face Model Hub}
}
```
For questions or feedback, please reach out through the discussion section on the model's Hugging Face page.