# MARS
MARS is the first iteration of Curiosity Technology models, based on Llama 3 8B.

We trained MARS on an in-house Turkish dataset, as well as on several open-source datasets and their Turkish translations. We intend to release these Turkish translations in the near future so the community can experiment with them.

MARS was trained for 3 days on 4×A100 GPUs.
## Model Details

- Base Model: Meta Llama 3 8B Instruct
- Training Dataset: in-house and translated open-source Turkish datasets
- Training Method: LoRA fine-tuning (a rough sketch of the setup follows this list)
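The exact LoRA hyperparameters are not published in this card, so everything below (rank, alpha, dropout, and the choice of target modules) is an illustrative assumption. A minimal sketch of how such a fine-tune could be set up with the `peft` library:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model that MARS was fine-tuned from.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Hypothetical LoRA settings -- the actual training configuration is not published.
lora_config = LoraConfig(
    r=16,                   # adapter rank
    lora_alpha=32,          # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

LoRA trains small low-rank adapter matrices on top of the frozen base weights, which is what makes fine-tuning an 8B model feasible on a single 4×A100 node.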
## How to use
You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
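Both examples assume a recent version of `transformers` (new enough to ship Llama 3 support and chat-style pipeline inputs) and a GPU with enough memory to hold the 8B model in bfloat16; a quantized-loading sketch for smaller GPUs follows the second example.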
### Transformers pipeline
```python
import transformers
import torch

model_id = "curiositytech/MARS"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    # "You are a pirate chatbot that talks like a pirate!"
    {"role": "system", "content": "Sen korsan gibi konuşan bir korsan chatbotsun!"},
    # "Who are you?"
    {"role": "user", "content": "Sen kimsin?"},
]

# Stop at either the standard EOS token or Llama 3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# The pipeline returns the full chat history; the last message is the model's reply.
print(outputs[0]["generated_text"][-1])
```
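A note on the decoding settings: `temperature=0.6` and `top_p=0.9` mirror the sampling settings used in Meta's own Llama 3 example code; lower the temperature, or pass `do_sample=False`, for more deterministic answers.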
### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "curiositytech/MARS"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    # "You are a pirate chatbot that talks like a pirate!"
    {"role": "system", "content": "Sen korsan gibi konuşan bir korsan chatbotsun!"},
    # "Who are you?"
    {"role": "user", "content": "Sen kimsin?"},
]

# Apply the Llama 3 chat template and move the token ids to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Stop at either the standard EOS token or Llama 3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Slice off the prompt tokens so only the newly generated reply is decoded.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
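If the model does not fit on your GPU in bfloat16 (it needs roughly 16 GB for the weights alone), a minimal sketch of loading it with 4-bit quantization via bitsandbytes; this assumes the `bitsandbytes` package is installed and trades some accuracy for memory:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "curiositytech/MARS"

# 4-bit NF4 quantization; compute still happens in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
# The generation code from the example above works unchanged.
```

NF4 quantization roughly quarters the weight memory relative to bfloat16, at some cost in output quality.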
## Evaluation results
| Benchmark | Split | Metric | Value (self-reported) |
|---|---|---|---|
| AI2 Reasoning Challenge TR v0.2 | test | accuracy | 46.08 |
| MMLU TR v0.2 | test | accuracy | 47.02 |
| TruthfulQA TR v0.2 | validation | accuracy | 49.38 |
| Winogrande TR v0.2 | validation | accuracy | 53.71 |
| GSM8k TR v0.2 | test | accuracy | 53.08 |
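These scores are self-reported accuracies on Turkish-translated benchmark suites. A reproduction would typically run through a harness such as EleutherAI's lm-evaluation-harness; the sketch below uses its Python API, but the Turkish task names are placeholders, since the actual TR v0.2 task definitions are not distributed with this card.

```python
# Hypothetical reproduction sketch -- the "*_tr_v0_2" task names are placeholders,
# not tasks that ship with lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=curiositytech/MARS,dtype=bfloat16",
    tasks=["arc_challenge_tr_v0_2", "mmlu_tr_v0_2"],  # placeholder task names
    batch_size=8,
)
print(results["results"])
```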