Create README.md
README.md
ADDED
@@ -0,0 +1,50 @@
# COIG-Kun Label Model

## Model Details
- **Name:** Label Model
- **Release Date:** 2023.12.04
- **GitHub URL:** [COIG-Kun on GitHub](https://github.com/Zheng0428/COIG-Kun)
- **Developers:** Tianyu Zheng*, Shuyue Guo*, Xingwei Qu, Xinrun Du, Wenhu Chen, Jie Fu, Wenhao Huang, Ge Zhang

## Model Description
The Label Model is part of the Kun project, which aims to enhance language model training through a novel data augmentation paradigm based on self-alignment and instruction backtranslation. The model is fine-tuned specifically to generate high-quality instructional data, a critical component of the project's data augmentation pipeline.
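
As a concrete illustration, the sketch below shows how such a label model could be prompted to back-translate an instruction from a piece of unlabeled text. The checkpoint path and the prompt template are placeholders for illustration only, not the project's released format; consult the COIG-Kun repository for the actual checkpoints and prompts.

```python
# Minimal inference sketch. Assumptions: the checkpoint path and the prompt
# template are placeholders, not the official COIG-Kun format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/label-model"  # placeholder checkpoint location
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")  # requires accelerate

# Raw, unlabeled text for which we want a matching instruction.
raw_text = "光合作用是绿色植物利用光能将二氧化碳和水转化为有机物并释放氧气的过程。"

# Assumed prompt template: ask the model for the instruction this text answers.
prompt = f"请为下面的回答生成对应的指令：\n{raw_text}\n指令："
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
instruction = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(instruction)
```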

## Intended Use
- **Primary Use:** The Label Model is designed to generate instructional data for fine-tuning language models.
- **Target Users:** Researchers and developers in NLP and machine learning, particularly those working on language model training and data augmentation.

## Training Data
The Label Model was trained on approximately ten thousand high-quality seed instructions, carefully curated so that the resulting model produces outputs suitable for use as instructional data.

## Training Process
- **Base Model:** Yi-34B
- **Epochs:** 6
- **Learning Rate:** 1e-5
- **Fine-Tuning Method:** The model was fine-tuned on high-quality seed (instruction, response) pairs, with the responses used as model inputs and the corresponding instructions as training targets, following the instruction backtranslation setup (see the data-construction sketch after this list).
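
As a rough illustration of this setup, the sketch below shows how seed (instruction, response) pairs might be converted into training examples for the Label Model, with the response as the model input and the instruction as the target. The JSONL layout and field names are assumptions made for illustration, not the project's actual preprocessing code.

```python
# Sketch of building Label Model training examples from seed data.
# Assumption: the seed file is JSONL with "instruction" and "response" keys;
# this is an illustrative format, not the project's actual pipeline.
import json

def build_label_model_examples(seed_path: str) -> list[dict]:
    examples = []
    with open(seed_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            examples.append({
                "input": record["response"],      # the model sees the answer text
                "target": record["instruction"],  # and learns to emit the instruction
            })
    return examples

# Example usage (hypothetical file name):
# pairs = build_label_model_examples("seed_instructions.jsonl")
```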

## Evaluation
The Label Model was evaluated on its ability to generate high-quality instructional data, focusing on the relevance, clarity, and usability of the generated instructions for language model training.

## Limitations
- The Label Model is optimized for Chinese and English instructional data generation.
- The effectiveness of the model may vary based on the quality of the input seed data.

## Ethical Considerations
- Users should be aware of potential biases in the training data, which could be reflected in the model's outputs.
- The model should not be used for generating harmful or misleading content.

## Citing the Model
To cite the Label Model in academic work, please use the following reference:

```bibtex
@misc{COIG-Kun,
  title={Kun: Answer Polishment Saves Your Time for Using Instruction Backtranslation on Self-Alignment},
  author={Zheng*, Tianyu and Guo*, Shuyue and Qu, Xingwei and Du, Xinrun and Chen, Wenhu and Fu, Jie and Huang, Wenhao and Zhang, Ge},
  year={2023},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={https://github.com/Zheng0428/COIG-Kun}
}
```