---
license: apache-2.0
datasets:
- jojo0217/korean_rlhf_dataset
language:
- ko
pipeline_tag: text-generation
---
This is a test model built from Sungkyunkwan University industry-academia cooperation project data.
It was trained on the existing ~107,000 examples plus 2,000 additional everyday-conversation examples.
___
The model was fine-tuned from EleutherAI/polyglot-ko-5.8b as the base, with the following training parameters:
```
batch_size: 128
micro_batch_size: 8
num_epochs: 3
learning_rate: 3e-4
cutoff_len: 1024
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
weight_decay: 0.1
```
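The `lora_*` values indicate a LoRA fine-tune, and `batch_size` 128 with `micro_batch_size` 8 implies 16 gradient-accumulation steps. Below is a minimal sketch of how these hyperparameters might map onto a Hugging Face PEFT configuration; the card does not name the training framework, so the use of PEFT and the `target_modules` choice are assumptions, not taken from this card.

```python
# Minimal sketch, assuming Hugging Face PEFT; the card lists hyperparameters
# but not the framework, so LoraConfig and target_modules are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-5.8b")
config = LoraConfig(
    r=8,                                 # lora_r
    lora_alpha=16,                       # lora_alpha
    lora_dropout=0.05,                   # lora_dropout
    target_modules=["query_key_value"],  # assumed: GPT-NeoX attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
# Effective batch: micro_batch_size 8 x grad-accum 16 = batch_size 128
```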
___
The measured KoBEST 10-shot scores are as follows.
![score](./asset/score.png)
___
The model uses the KULLM prompt template.
Test code is available in the Colab notebook below, and reproduced here:
https://colab.research.google.com/drive/1xEHewqHnG4p3O24AuqqueMoXq1E3AlT0?usp=sharing
```python
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer

model_name = "jojo0217/ChatSKKU5.8B"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    load_in_8bit=True,  # set to False to disable 8-bit quantization
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device_map="auto",
)

def answer(message):
    # KULLM instruction template: "Below is an instruction that describes a task.
    # Write a response that appropriately completes the request."
    prompt = f"아래는 작업을 설명하는 명령어입니다. 요청을 적절히 완료하는 응답을 작성하세요.\n\n### 명령어:\n{message}"
    ans = pipe(
        prompt + "\n\n### 응답:",
        do_sample=True,
        max_new_tokens=512,
        temperature=0.7,
        repetition_penalty=1.0,
        return_full_text=False,
        eos_token_id=2,
    )
    return ans[0]["generated_text"]

answer('성균관대학교에 대해 알려줘')  # "Tell me about Sungkyunkwan University"
```
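Note that `load_in_8bit=True` requires the `bitsandbytes` package (and `accelerate` for `device_map="auto"`); if they are unavailable, set `load_in_8bit=False` to load the model in full precision.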