---
language:
- ko
- en
library_name: transformers
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
tags:
- pytorch
---
## Model Description
This model was trained with Supervised Fine-Tuning (SFT) using kollm, a Korean LLM evaluation dataset collection. The training data consists of open datasets such as KoAlpaca-v1.1, kollm_kmmlu, korean-parallel-corpora, and kobest_sentineg. For details on the data, see the Train Datasets link below.
## About the Model
- **Name:** TwinDoc/RedWhale-tv-10.8B-sft-g
- **Finetuned from model:** [TwinDoc/RedWhale-tv-10.8B-v1.0](https://huggingface.co/TwinDoc/RedWhale-tv-10.8B-v1.0)
- **Train Datasets:** [davidkim205/kollm-converations](https://huggingface.co/datasets/davidkim205/kollm-converations?row=33)
- **Developed by:** AGILESODA (μ• μžμΌμ†Œλ‹€)
- **Model type:** llama
- **Language(s) (NLP):** Korean, English
- **License:** cc-by-nc-sa-4.0
- **Train settings** (see the configuration sketch after this list)
  - LoRA r / alpha: 64 / 16
  - Dtype: bf16
  - Epochs: 1
  - Learning rate: 2e-4
  - Global batch size: 128
  - Context length: 1024
- **Inference settings** (applied in the generation example after the prompt template)
  - BOS id: 1
  - EOS id: 2
  - Top-p: 0.95
  - Temperature: 0.01
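For reference, the sketch below shows how the training hyperparameters listed above could map onto Hugging Face `peft` / `transformers` configuration objects. It is not the authors' training script: `target_modules`, the output directory, and the per-device / gradient-accumulation split of the 128 global batch are assumptions not stated in this card.

```python
# Configuration sketch (not the authors' actual training code) for the listed SFT hyperparameters.
import torch
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "TwinDoc/RedWhale-tv-10.8B-v1.0",        # base model the SFT started from
    torch_dtype=torch.bfloat16,              # Dtype: bf16
)

lora = LoraConfig(
    r=64,                                    # LoRA r
    lora_alpha=16,                           # LoRA alpha
    target_modules=["q_proj", "v_proj"],     # assumption; not stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

args = TrainingArguments(
    output_dir="redwhale-tv-10.8b-sft",      # assumed output path
    num_train_epochs=1,                      # Epochs: 1
    learning_rate=2e-4,                      # Learning rate: 2e-4
    per_device_train_batch_size=16,          # 16 * 8 accumulation = 128 global batch (assumed split)
    gradient_accumulation_steps=8,
    bf16=True,
)
# A trainer built on top of `args` would then consume conversations from the
# Train Datasets link above, truncated to the 1024-token context length.
```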
## Prompt Template
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: {input}
Assistant: {output}
```
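The snippet below is a minimal sketch of applying this template together with the inference settings listed above using `transformers`. The example question, `max_new_tokens`, and `device_map` choice are illustrative assumptions, not part of the card.

```python
# Minimal inference sketch using the prompt template and the listed inference settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TwinDoc/RedWhale-tv-10.8B-sft-g"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

template = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "Human: {input}\n"
    "Assistant: "
)
prompt = template.format(input="λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μΈκ°€μš”?")  # example question (illustrative)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,    # assumed; not specified in the card
    do_sample=True,
    top_p=0.95,            # Top-p
    temperature=0.01,      # Temperature
    bos_token_id=1,        # BOS id
    eos_token_id=2,        # EOS id
)
# Print only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```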
## License
<img src="https://huggingface.co/TwinDoc/agilesoda-model-x/resolve/main/license__icon.png" width="324">
The content of this project, created by AGILESODA, is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
## Citation
[Being updated]