# Llama-3.1-70B-Japanese-Instruct-2407

## Model Description
This model was built by continually pre-training meta-llama/Meta-Llama-3.1-70B-Instruct on Japanese data.
## Usage

Make sure your `transformers` installation is up to date: `pip install --upgrade transformers`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the model and tokenizer; device_map="auto" shards the 70B weights across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "cyberagent/Llama-3.1-70B-Japanese-Instruct-2407", device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/Llama-3.1-70B-Japanese-Instruct-2407")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# User prompt: "How will AI change the way we live?"
messages = [
    {"role": "user", "content": "AIによって私たちの暮らしはどのように変わりますか?"}
]

# Apply the Llama 3.1 chat template, then generate while streaming tokens to stdout.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=1024, streamer=streamer)
```
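Loading the full bf16 weights of a 70B model typically requires multiple high-memory GPUs. As a minimal sketch that is not part of the original card, the checkpoint can usually also be loaded in 4-bit precision via the standard `BitsAndBytesConfig` path in transformers (this assumes the optional `bitsandbytes` package and CUDA GPUs are available; memory savings and any quality impact are not verified here):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Sketch only: 4-bit loading to reduce GPU memory, assuming bitsandbytes is installed.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "cyberagent/Llama-3.1-70B-Japanese-Instruct-2407",
    device_map="auto",
    quantization_config=quant_config,
)
tokenizer = AutoTokenizer.from_pretrained("cyberagent/Llama-3.1-70B-Japanese-Instruct-2407")
```

The rest of the generation code above works unchanged with a quantized model.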
## Prompt Format

### Llama 3.1 Format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ assistant_message_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
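You do not need to build this string by hand; `tokenizer.apply_chat_template` renders it from a list of messages. A short sketch for inspecting the rendered prompt (the system prompt text here is an illustrative placeholder, not an official recommendation):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cyberagent/Llama-3.1-70B-Japanese-Instruct-2407")

# Illustrative conversation; the user turn reuses the example prompt from the Usage section.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "AIによって私たちの暮らしはどのように変わりますか?"},
]

# tokenize=False returns the rendered prompt string instead of token IDs,
# so you can check the <|start_header_id|>/<|eot_id|> structure shown above.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```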
## License

Meta Llama 3.1 Community License
## Author

Ryosuke Ishigami
## How to cite
```bibtex
@misc{cyberagent-llama-3.1-70b-japanese-instruct-2407,
    title={cyberagent/Llama-3.1-70B-Japanese-Instruct-2407},
    url={https://huggingface.co/cyberagent/Llama-3.1-70B-Japanese-Instruct-2407},
    author={Ryosuke Ishigami},
    year={2024},
}
```
## Citations

```bibtex
@article{llama3.1modelcard,
    title = {Llama 3.1 Model Card},
    author = {AI@Meta},
    year = {2024},
    url = {https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md}
}
```