---
language:
- ko
datasets:
- DopeorNope/DPO-Ko-Dataset
- DopeorNope/Orca_Near_Dedup-v2
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**This model was developed by the LLM research consortium of MediaGroup Saram-gwa-Soop Inc. ((주)미디어그룹사람과숲) and Markr Inc. ((주)마커).**
**The license is `cc-by-nc-sa-4.0`.**
# **🐻❄️COKAL-DPO_13b-v2🐻❄️**
![img](https://drive.google.com/uc?export=view&id=1YGBxz-UhQGHZ2K6cTXmTnB13fRgaQilX)
## Model Details
**Model Developers** Seungyoo Lee (DopeorNope)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
COKAL-DPO_13b-v2 is an auto-regressive 13B language model based on the LLaMA2 transformer architecture.
**Base Model** [DopeorNope/COKAL_pre_DPO_Test_v2-13b](https://huggingface.co/DopeorNope/COKAL_pre_DPO_Test_v2-13b)
DopeorNope/COKAL_pre_DPO_Test_v2-13b is the SFT model that was further trained with the DPO methodology to produce this model.
**Training Dataset**
- DPO training dataset: DopeorNope/DPO-Ko-Dataset (private)
This dataset was constructed by DopeorNope, who collected and reorganized the data directly, drawing on insights from ["lvwerra/stack-exchange-paired"](https://huggingface.co/datasets/lvwerra/stack-exchange-paired) to build a paired (chosen/rejected) dataset. (stack-exchange-paired itself was not used as training data; it only served as inspiration for the paired format.)
- SFT training dataset: DopeorNope/Orca_Near_Dedup-v2 (private)
This dataset is based on ["kyujinpy/OpenOrca-KO"](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO) and was deduplicated with the Near Dedup algorithm, removing items whose Jaccard similarity is 0.8 or higher (a sketch of this filtering step is shown below). In addition, inconsistent inputs were cleaned and corrected.
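
To illustrate that deduplication step, here is a minimal, hypothetical sketch of near-dedup filtering with character n-gram shingles and Jaccard similarity. The actual pipeline (per Lee et al., 2022) is MinHash/LSH-based and is not published, so every name and parameter below is an assumption:

```python
# Hypothetical near-dedup sketch: drop items whose Jaccard similarity
# with an already-kept item is >= 0.8. The real pipeline (Lee et al., 2022)
# uses MinHash/LSH for scale; this brute-force version only shows the idea.

def shingles(text: str, n: int = 5) -> set:
    """Character n-gram shingles of a string."""
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |a & b| / |a | b|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def near_dedup(texts, threshold: float = 0.8):
    """Keep each text unless it is a near-duplicate of a kept one."""
    kept, kept_shingles = [], []
    for t in texts:
        s = shingles(t)
        if all(jaccard(s, ks) < threshold for ks in kept_shingles):
            kept.append(t)
            kept_shingles.append(s)
    return kept

samples = ["안녕하세요, 반갑습니다.", "안녕하세요, 반갑습니다!", "전혀 다른 문장입니다."]
print(near_dedup(samples))  # the near-identical second sample is dropped
```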
**Training**
The only difference between "DopeorNope/COKAL-DPO_test-v2" and this model is the hyper-parameter configuration, which was revised for this final version.
The model was trained on four RTX 3090 GPUs under Ubuntu 18.04.
Note that uploading the model to a repository directly from a Linux server can make the reported parameter count appear larger than it actually is; the model is nevertheless a 13B-parameter architecture.
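
The exact training script and hyper-parameters are not published. As a rough orientation only, a DPO run over an SFT base like this one is commonly set up with the `trl` library along the following lines (API as of trl ≈0.7; the dataset contents, `beta`, learning rate, and all other values here are assumptions, not the authors' actual configuration):

```python
# Hypothetical DPO setup with trl (~0.7) -- NOT the authors' actual script.
# DPOTrainer expects a dataset with "prompt", "chosen", and "rejected" columns.
import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

sft_repo = "DopeorNope/COKAL_pre_DPO_Test_v2-13b"  # the SFT base named above
model = AutoModelForCausalLM.from_pretrained(sft_repo, torch_dtype=torch.float16)
ref_model = AutoModelForCausalLM.from_pretrained(sft_repo, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(sft_repo)

# Paired preference data in the chosen/rejected format DPO requires.
train_dataset = Dataset.from_dict({
    "prompt":   ["한국의 수도는 어디인가요?"],
    "chosen":   ["한국의 수도는 서울입니다."],
    "rejected": ["잘 모르겠습니다."],
})

training_args = TrainingArguments(
    output_dir="./cokal-dpo",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    learning_rate=5e-7,        # assumed value, not the published one
    remove_unused_columns=False,
)

trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    beta=0.1,                  # DPO KL-regularization strength (assumed)
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```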
**Reference papers**
- Data Strategy:
  - [LIMA (Zhou et al., 2023)](https://arxiv.org/abs/2305.11206)
  - [Near Dedup algorithm (Lee et al., 2022)](https://arxiv.org/abs/2107.06499)
- Model Architecture:
  - [Llama 2 (Touvron et al., 2023)](https://arxiv.org/abs/2307.09288)
# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "HumanF-MarkrAI/COKAL-DPO-13b-v2"

# Load the model in half precision and shard it across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
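
A short generation example, assuming the model is loaded as above (the prompt format and sampling settings are assumptions; the card does not specify a chat template):

```python
# Minimal generation example; sampling settings are illustrative only.
prompt = "한국의 수도는 어디인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```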
# Acknowledgement
- This model was developed with support from the Artificial Intelligence Industrial Convergence Cluster Development Project, jointly funded by the Ministry of Science and ICT (MSIT, Korea) and Gwangju Metropolitan City.
---