
DataVortexS-10.7B-dpo-v1.11

DataVortex

Our Team

| Research & Engineering | Product Management |
|------------------------|--------------------|
| Kwangseok Yang         | Seunghyun Choi     |
| Jeongwon Choi          | Hyoseok Choi       |

Model Details

Base Model

LDCC/LDCC-SOLAR-10.7B

Trained On

  • OS: Ubuntu 22.04
  • GPU: 4× NVIDIA H100 80GB
  • transformers: v4.36.2

Instruction format

The model follows the Alpaca (chat) prompt format. For example:

text = """\
### System:
당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€.

### User:
λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?

### Assistant:
λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€.

### User:
μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?
"""

Model Benchmark

Ko LM Eval Harness

| Task             | 0-shot   | 5-shot   | 10-shot  | 50-shot  |
|------------------|----------|----------|----------|----------|
| kobest_boolq     | 0.920101 | 0.928018 | 0.933025 | 0.928754 |
| kobest_copa      | 0.721782 | 0.801936 | 0.817737 | 0.84093  |
| kobest_hellaswag | 0.44502  | 0.482783 | 0.483978 | 0.48978  |
| kobest_sentineg  | 0.51398  | 0.931928 | 0.944556 | 0.934475 |
| **Average**      | 0.650221 | 0.786166 | 0.794824 | 0.798485 |
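The "Average" row is the plain mean of the four task scores in each column, which can be checked directly (values copied from the table above):

```python
# Verify the "Average" row of the Ko LM Eval Harness table.
scores = {
    "0-shot":  [0.920101, 0.721782, 0.44502, 0.51398],
    "5-shot":  [0.928018, 0.801936, 0.482783, 0.931928],
    "10-shot": [0.933025, 0.817737, 0.483978, 0.944556],
    "50-shot": [0.928754, 0.84093, 0.48978, 0.934475],
}
# Unweighted mean over the four kobest tasks per shot setting.
averages = {shot: sum(vals) / len(vals) for shot, vals in scores.items()}
```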

Ko-LLM-Leaderboard

| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---------|--------|--------------|---------|---------------|-----------------|
| 59.56   | 55.97  | 68.68        | 52.67   | 66.74         | 53.72           |
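Here too, the reported average is the unweighted mean of the five subtask scores (59.556, rounded to 59.56):

```python
# Verify the Ko-LLM-Leaderboard average from the table above.
# Order: Ko-ARC, Ko-HellaSwag, Ko-MMLU, Ko-TruthfulQA, Ko-CommonGen V2.
subscores = [55.97, 68.68, 52.67, 66.74, 53.72]
average = sum(subscores) / len(subscores)  # 59.556, reported as 59.56
```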

Implementation Code

The tokenizer ships with a chat_template that encodes the instruction format above, so you can build prompts with apply_chat_template as shown below.

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # device to run the model on

model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.11")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.11")

messages = [
    # "You are an AI assistant that helps people find information."
    {"role": "system", "content": "당신은 μ‚¬λžŒλ“€μ΄ 정보λ₯Ό 찾을 수 μžˆλ„λ‘ λ„μ™€μ£ΌλŠ” 인곡지λŠ₯ λΉ„μ„œμž…λ‹ˆλ‹€."},
    # "What is the capital of South Korea?"
    {"role": "user", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μ•Ό?"},
    # "The capital of South Korea is Seoul."
    {"role": "assistant", "content": "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ„œμšΈμž…λ‹ˆλ‹€."},
    # "What is the total population of Seoul?"
    {"role": "user", "content": "μ„œμšΈ μΈκ΅¬λŠ” 총 λͺ‡ λͺ…이야?"}
]

# Render the conversation with the tokenizer's chat template and append the
# assistant prefix so the model answers the last user turn.
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(decoded[0])

License

This model is licensed under CC BY-NC 4.0, which allows others to share and adapt the model for non-commercial purposes.

Safetensors

  • Model size: 10.9B params
  • Tensor type: FP16