---
license: cc-by-nc-4.0
base_model: Edentns/DataVortexS-10.7B-dpo-v1.11
tags:
  - trl
  - dpo
  - generated_from_trainer
model-index:
  - name: nhn_dpo_v3_DataVortexS-10.7B-dpo-v1.11_DPO
    results: []
---

# ENERGY-DRINK-LOVE/DataVortexS_dpov3

## Our Team

  • Youjin Chung
  • Jingyeom Kim

## Model

### Base Model

  • Edentns/DataVortexS-10.7B-dpo-v1.11

### Hardware and Software

  • Hardware: 8 × NVIDIA A100 GPUs for training
  • Software: DeepSpeed and the Hugging Face TRL Trainer

## Dataset

  • DPO_dataset
    • In-house DPO dataset (built using AI-Hub datasets)
    • Translations of English preference datasets such as OpenOrca DPO (ENERGY-DRINK-LOVE/translate_share_gpt_dedup_llama_SFT_1024, translated with our own model)
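
TRL's `DPOTrainer` consumes preference data as prompt/chosen/rejected triples. As a minimal sketch of what one such record might look like (the text below is invented for illustration, not taken from the actual dataset):

```python
# One hypothetical preference record in the prompt/chosen/rejected
# format used for DPO training.
record = {
    "prompt": "한국의 수도는 어디인가요?",   # "What is the capital of Korea?"
    "chosen": "한국의 수도는 서울입니다.",    # preferred answer
    "rejected": "잘 모르겠습니다.",           # dispreferred answer
}

def is_valid_dpo_record(r):
    """Check that a record has non-empty string prompt/chosen/rejected fields."""
    required = ("prompt", "chosen", "rejected")
    return all(isinstance(r.get(k), str) and r[k] for k in required)
```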

## Training Method
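
The model was aligned with Direct Preference Optimization (DPO). As a rough illustration of the per-example objective DPO minimizes (a sketch only, not the team's training code; the β value and log-probabilities below are made up):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (chosen margin - rejected margin)).

    Each margin is the policy's log-probability of a response minus the
    frozen reference model's log-probability of the same response.
    """
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# Hypothetical sequence log-probabilities for one preference pair:
# the policy favors the chosen response more than the reference does.
loss = dpo_loss(-1.0, -2.0, -1.2, -1.8, beta=0.1)
```

When the policy and reference assign identical log-probabilities, the logits are zero and the loss reduces to log 2; it shrinks as the policy widens the gap between chosen and rejected responses relative to the reference.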

## Benchmark

### Ko LM Eval Harness

### Ko-LLM-Leaderboard

  • Ranked 7th as of 2024-03-16
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---------|--------|--------------|---------|---------------|-----------------|
| 60.18   | 56.23  | 69.15        | 52.76   | 67.87         | 54.9            |