Model Description
This model was trained with Supervised Fine-Tuning (SFT) using kollm, a Korean LLM evaluation dataset. The training data consists of open datasets such as KoAlpaca-v1.1, kollm_kmmlu, korean-parallel-corpora, and kobest_sentineg. For a detailed description of the data, please refer to the Train Datasets link.
About the Model
Name: TwinDoc/RedWhale-tv-10.8B-sft-g
Finetuned from model: TwinDoc/RedWhale-tv-10.8B-v1.0
Train Datasets: davidkim205/kollm-converations
Developed by: AGILESODA
Model type: llama
Language(s) (NLP): Korean, English
License: cc-by-nc-sa-4.0
Train Setting
- LoRA r, alpha: 64, 16
- Dtype: bf16
- Epochs: 1
- Learning rate: 2e-4
- Global batch size: 128
- Context length: 1024
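As a rough sketch, the hyperparameters above can be mapped onto peft's `LoraConfig` and transformers' `TrainingArguments`. This assumes LoRA fine-tuning was done via peft; the output path and the per-device/accumulation split of the global batch are illustrative, not stated in this card.

```python
# Hedged sketch: expressing the listed hyperparameters with peft/transformers.
# Values taken from the card; anything marked "illustrative" is an assumption.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=64,              # LoRA rank from the card
    lora_alpha=16,     # LoRA alpha from the card
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="redwhale-sft",      # illustrative path
    num_train_epochs=1,
    learning_rate=2e-4,
    bf16=True,
    per_device_train_batch_size=8,  # illustrative split: 8 * 16 accumulation
    gradient_accumulation_steps=16, # steps = global batch of 128 on one device
)
```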
Inference Setting
- BOS id: 1
- EOS id: 2
- Top-p: 0.95
- Temperature: 0.01
Prompt Template
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: {input}
Assistant: {output}
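A minimal sketch of filling in the template above before generation. The `build_prompt` helper is ours, not part of the release; the commented generate() call shows how the inference settings listed earlier would plug into the standard transformers API.

```python
# Illustrative helper for rendering this card's prompt template.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions."
)

def build_prompt(user_input: str) -> str:
    """Render the template up to the assistant turn, which the model completes."""
    return f"{SYSTEM}\nHuman: {user_input}\nAssistant: "

prompt = build_prompt("What is Supervised Fine-Tuning?")

# Generation would then apply the card's inference settings, e.g.:
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("TwinDoc/RedWhale-tv-10.8B-sft-g")
# model = AutoModelForCausalLM.from_pretrained("TwinDoc/RedWhale-tv-10.8B-sft-g")
# out = model.generate(**tok(prompt, return_tensors="pt"), do_sample=True,
#                      top_p=0.95, temperature=0.01,
#                      bos_token_id=1, eos_token_id=2)
```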
License
The content of this project, created by AGILESODA, is licensed under the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
Citation
[Being updated]