# QuantFactory/Llama3.1-ArrowSE-v0.4-GGUF
This is a quantized (GGUF) version of DataPilot/Llama3.1-ArrowSE-v0.4, created using llama.cpp.
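Because this repository ships GGUF files, you can run the model without `transformers`. Below is a minimal sketch using the llama-cpp-python bindings; the quant filename (`Q4_K_M`) and the prompts are assumptions, so substitute whichever `.gguf` variant you actually downloaded:

```python
from llama_cpp import Llama

# Path to a GGUF file downloaded from this repo; the exact quant filename
# (Q4_K_M here) is an assumption -- use whichever variant you fetched.
llm = Llama(
    model_path="./Llama3.1-ArrowSE-v0.4.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

response = llm.create_chat_completion(
    messages=[
        # Japanese system prompt: "You are a sincere and excellent Japanese assistant."
        {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
        # Japanese user prompt: "Hello, please introduce yourself."
        {"role": "user", "content": "こんにちは、自己紹介をしてください。"},
    ],
    max_tokens=512,
    temperature=0.6,
    top_p=0.9,
)
print(response["choices"][0]["message"]["content"])
```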
## Original Model Card
### Overview

This model was built on top of llama3.1-8B-instruct using Mergekit and fine-tuning, with the goal of improving its Japanese-language performance.

Many thanks to the teams at Meta, ELYZA, and NVIDIA.
### How to use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# System prompt (Japanese): "You are a sincere and excellent Japanese assistant.
# Unless instructed otherwise, always answer in Japanese."
DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。特に指示が無い場合は、常に日本語で回答してください。"
# Example user prompt (Japanese): "Explain the five most important things for
# succeeding as a VTuber, in a way even an elementary-school student can understand."
text = "Vtuberとして成功するために大切な5つのことを小学生にでもわかるように教えてください。"

model_name = "DataPilot/Llama3.1-ArrowSE-v0.4"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
model.eval()

messages = [
    {"role": "system", "content": DEFAULT_SYSTEM_PROMPT},
    {"role": "user", "content": text},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

token_ids = tokenizer.encode(
    prompt, add_special_tokens=False, return_tensors="pt"
)

with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=1200,
        do_sample=True,
        temperature=0.6,
        top_p=0.9,
    )

# Decode only the newly generated tokens, skipping the prompt.
output = tokenizer.decode(
    output_ids.tolist()[0][token_ids.size(1):], skip_special_tokens=True
)
print(output)
```
### Merge
This is a merge of pre-trained language models created using mergekit.
#### Merge Details
##### Merge Method
This model was merged using the TIES merge method, with meta-llama/Meta-Llama-3.1-8B-Instruct as the base.
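For intuition, here is a toy, single-tensor sketch of the TIES steps (trim low-magnitude deltas, elect a per-parameter majority sign, then merge only sign-consistent entries). It illustrates the idea only and is not mergekit's implementation; the function name, the `density` value, and the way weights are applied are assumptions:

```python
import torch

def ties_merge(base, deltas, weights, density=0.2):
    """Toy single-tensor TIES merge: trim, elect sign, disjoint merge.
    base: base-model tensor; deltas: list of (finetuned - base) tensors."""
    trimmed = []
    for d, w in zip(deltas, weights):
        d = d * w
        # Trim: keep only the top-`density` fraction of entries by magnitude.
        k = max(1, int(density * d.numel()))
        threshold = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))
    stacked = torch.stack(trimmed)
    # Elect sign: majority sign of the summed trimmed deltas, per parameter.
    sign = torch.sign(stacked.sum(dim=0))
    # Disjoint merge: average only the entries that agree with the elected sign.
    agree = (torch.sign(stacked) == sign) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged

# Toy usage with random tensors standing in for model weights.
base = torch.zeros(10)
a, b = torch.randn(10), torch.randn(10)
print(ties_merge(base, [a - base, b - base], weights=[1.0, 0.7]))
```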
##### Models Merged

The following models were included in the merge:

- elyza/Llama-3-ELYZA-JP-8B
- nvidia/Llama3-ChatQA-1.5-8B
##### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: meta-llama/Meta-Llama-3.1-8B-Instruct
    parameters:
      weight: 1
  - model: elyza/Llama-3-ELYZA-JP-8B
    parameters:
      weight: 0.7
  - model: nvidia/Llama3-ChatQA-1.5-8B
    parameters:
      weight: 0.15
merge_method: ties
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
parameters:
  normalize: false
dtype: bfloat16
```
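To reproduce the merge, a configuration like this can be run with mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./output-model` (both paths here are hypothetical placeholders).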