---
language:
- ko
- en
library_name: transformers
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
tags:
- pytorch
---
## Model Description
This model was trained with Supervised Fine-Tuning (SFT) on kollm, a Korean LLM evaluation dataset. The training data consists of open datasets such as KoAlpaca-v1.1, kollm_kmmlu, korean-parallel-corpora, and kobest_sentineg. For a detailed description of the data, see the Train Datasets link below.
## About the Model
- **Name:** TwinDoc/RedWhale-tv-10.8B-sft-g
- **Finetuned from model:** [TwinDoc/RedWhale-tv-10.8B-v1.0](https://huggingface.co/TwinDoc/RedWhale-tv-10.8B-v1.0)
- **Train Datasets:** [davidkim205/kollm-converations](https://huggingface.co/datasets/davidkim205/kollm-converations)
- **Developed by:** AGILESODA (애자일소다)
- **Model type:** llama
- **Language(s) (NLP):** Korean, English
- **License:** cc-by-nc-sa-4.0
- **Train settings** (a LoRA configuration sketch follows this list)
  - LoRA r, alpha: 64, 16
  - Dtype: bf16
  - Epochs: 1
  - Learning rate: 2e-4
  - Global batch size: 128
  - Context length: 1024
- **Inference settings** (see the generation sketch after this list)
  - BOS id: 1
  - EOS id: 2
  - Top-p: 0.95
  - Temperature: 0.01
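
The train settings above map onto a standard PEFT LoRA configuration. Below is a minimal sketch of that mapping; the target modules and task type are assumptions (typical choices for Llama-family models), not details confirmed by this card.

```python
# Hypothetical LoRA configuration matching the train settings above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                                 # LoRA r from the train settings
    lora_alpha=16,                        # LoRA alpha from the train settings
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # assumption: common Llama attention projections
)
```

For inference, the settings above amount to near-greedy nucleus sampling. The sketch below applies those values through the standard transformers generation API; it is an illustrative example, not the authors' released code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TwinDoc/RedWhale-tv-10.8B-sft-g"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # bf16, per the train settings
)

# Prompt built from the template in the next section.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "Human: 대한민국의 수도는 어디인가요?\n"
    "Assistant: "
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    top_p=0.95,        # Top-p from the inference settings
    temperature=0.01,  # Temperature from the inference settings (near-deterministic)
    eos_token_id=2,    # EOS id from the inference settings
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```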
## Prompt Template
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: {input}
Assistant: {output}
```
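
The template is plain text: the fixed system line, then `Human:` with the user's input, then `Assistant:` left open for the model to complete. A small helper like the one below makes that concrete (the function name is ours, not part of the model).

```python
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_input: str) -> str:
    # Leave "Assistant:" open so the model generates the answer.
    return f"{SYSTEM}\nHuman: {user_input}\nAssistant: "
```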
## License
The content of this project, created by AGILESODA, is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).
## Citation
```
@misc{vo2024redwhaleadaptedkoreanllm,
  title={RedWhale: An Adapted Korean LLM Through Efficient Continual Pretraining},
  author={Anh-Dung Vo and Minseong Jung and Wonbeen Lee and Daewoo Choi},
  year={2024},
  eprint={2408.11294},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2408.11294},
}
```