
DataVortexS-10.7B-v0.4


Our Team

Research & Engineering: Kwangseok Yang, Jeongwon Choi
Product Management: Seunghyun Choi, Hyoseok Choi

Model Details

Base Model

LDCC/LDCC-SOLAR-10.7B

Trained On

  • OS: Ubuntu 20.04
  • GPU: 2× H100 80GB
  • transformers: v4.36.2

Dataset

Instruction format

It follows the Alpaca instruction format.

For example:

text = """\
당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다.

### Instruction:
대한민국의 수도는 어디야?

### Response:
대한민국의 수도는 서울입니다.

### Instruction:
서울 인구는 총 몇 명이야?
"""

Model Benchmark

Ko LM Eval Harness

Task               0-shot     5-shot       10-shot      50-shot
kobest_boolq       0.389066   0.912924     0.912808     0.906428
kobest_copa        0.744865   0.747742     0.768856     0.785896
kobest_hellaswag   0.455793   0.443909     0.465783     0.472771
kobest_sentineg    0.584156   0.947082     0.962216     0.954657
Average            0.54347    0.76291425   0.77741575   0.779938
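These scores come from EleutherAI's lm-evaluation-harness, which ships the KoBEST tasks. A reproduction sketch, assuming lm-evaluation-harness v0.4+ (the simple_evaluate API and task names may differ across releases):

import lm_eval

# Evaluate the four KoBEST tasks at 5-shot; change num_fewshot
# to 0, 10, or 50 to reproduce the other columns.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Edentns/DataVortexS-10.7B-v0.4,dtype=float16",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=5,
    batch_size=8,
)
print(results["results"])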

Ko-LLM-Leaderboard

Average   Ko-ARC   Ko-HellaSwag   Ko-MMLU   Ko-TruthfulQA   Ko-CommonGen V2
54.15     49.4     59.7           54.63     47.5            59.5

Implementation Code

This model's tokenizer ships with a chat_template that renders the instruction format above, so conversations can be formatted with apply_chat_template.
You can use the code below.

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-v0.4")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-v0.4")

# A short Korean conversation; English translations in the comments.
messages = [
    {"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."},  # "You are an AI assistant that helps people find information."
    {"role": "user", "content": "대한민국의 수도는 어디야?"},  # "What is the capital of South Korea?"
    {"role": "assistant", "content": "대한민국의 수도는 서울입니다."},  # "The capital of South Korea is Seoul."
    {"role": "user", "content": "서울 인구는 총 몇 명이야?"}  # "What is the total population of Seoul?"
]

# Render the conversation with the bundled chat template and tokenize it.
# add_generation_prompt=True appends the response header so the model
# answers the last user turn instead of continuing the prompt.
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

model.to(device)
model_inputs = encodeds.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
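Note that batch_decode returns the prompt together with the completion. To print only the model's answer, you can slice off the prompt tokens first (a small addition, not from the original card):

# generated_ids echoes the prompt tokens; slice them off to decode
# only the newly generated answer.
prompt_len = model_inputs.shape[-1]
answer = tokenizer.decode(generated_ids[0][prompt_len:], skip_special_tokens=True)
print(answer)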

License

The model is licensed under the cc-by-nc-sa-4.0 license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license.
