Model Details
This model was built by deleting 10 of the 32 layers of the original meta-llama/Meta-Llama-3.1-8B-Instruct model and then continuing training on the pruned result.
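Since the checkpoint keeps the standard Llama architecture, the reduced depth can be checked from the published config alone. A minimal sketch, assuming the config exposes num_hidden_layers as standard Llama configs do:

from transformers import AutoConfig

# Inspect the config without downloading the weights.
config = AutoConfig.from_pretrained("kikikara/ko-llama-3.1-5b-instruct")
print(config.num_hidden_layers)  # expected: 22 (32 original layers minus the 10 removed)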
Uses
The snippet below loads the model with the Hugging Face transformers library and runs a chat-style generation:
import transformers
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kikikara/ko-llama-3.1-5b-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "kikikara/ko-llama-3.1-5b-instruct",
    torch_dtype=torch.bfloat16,  # optional: load in half precision to reduce memory use
    device_map="auto",
)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    # device_map is omitted here: the model above is already dispatched with device_map="auto"
)

# "Approach the question of why one should live from a philosophical angle."
question = "왜 살아야 하는지 철학적 측면에서 접근해봐"

messages = [
    # "You are a Korean-language AI model."
    {"role": "system", "content": "당신은 한국어 ai 모델입니다."},
    {"role": "user", "content": question},
]

outputs = pipeline(
    messages,
    repetition_penalty=1.1,
    max_new_tokens=1500,
)
print(outputs[0]["generated_text"][-1]["content"])
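If you prefer calling model.generate directly rather than going through the pipeline, the same chat messages can be tokenized with the tokenizer's chat template. A minimal sketch reusing the model, tokenizer, and messages from above (the generation parameters are illustrative, not tuned values from this model card):

# Apply the chat template and move the prompt to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=1500,
    repetition_penalty=1.1,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))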