---
license: llama2
tags:
- text2text-generation
pipeline_tag: text2text-generation
language:
- zh
- en
---

# Model Card for BELLE-Llama2-13B-chat-0.4M

## Welcome
If you find this model helpful, please *like* it and star us on https://github.com/LianjiaTech/BELLE!

## Model description
This model was obtained by fine-tuning all parameters of the original Llama2-13B-chat on 0.4M Chinese instruction-following examples.
We firmly believe that the original Llama2-chat already exhibits commendable performance after Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF).
We continue to enhance this model by fine-tuning it on Chinese instruction data, aiming for stable and high-quality Chinese-language outputs.

## Use the model
Please note that the input should be formatted as follows in both **training** and **inference**.
```text
Human: \n{input}\n\nAssistant:\n
```
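
For example, a raw instruction can be wrapped into this format with a small helper (an illustrative sketch; `build_prompt` is not part of the released code):
```python
def build_prompt(instruction: str) -> str:
    # Wrap a raw instruction in the Human/Assistant template the model expects.
    return f"Human: \n{instruction}\n\nAssistant:\n"

prompt = build_prompt("What is the capital of France?")
```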


After you decrypt the files, BELLE-Llama2-13B-chat-0.4M can be loaded with `AutoModelForCausalLM`.
```python
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

ckpt = '/path/to_finetuned_model/'
device = torch.device('cuda')

# Load the fine-tuned weights in half precision and move them to the GPU.
model = AutoModelForCausalLM.from_pretrained(ckpt).half().to(device)
tokenizer = LlamaTokenizer.from_pretrained(ckpt)

# Prompt in the required Human/Assistant format;
# the instruction means "Write a Chinese song praising nature."
prompt = "Human: \n写一首中文歌曲,赞美大自然 \n\nAssistant: \n"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generate_ids = model.generate(
    input_ids, max_new_tokens=1024, do_sample=True, top_k=30, top_p=0.85,
    temperature=0.5, repetition_penalty=1.2,
    eos_token_id=2, bos_token_id=1, pad_token_id=0,
)

# Decode the full sequence, then strip the prompt to keep only the reply.
output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
response = output[len(prompt):]
print(response)
```
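
Note that `output[len(prompt):]` strips the prompt at the string level; if decoding does not reproduce the prompt verbatim, the slice can be off by a few characters. A slightly more robust variant (a sketch, not from the original card) decodes only the newly generated tokens:
```python
# Decode only the tokens generated after the prompt.
new_tokens = generate_ids[:, input_ids.shape[-1]:]
response = tokenizer.batch_decode(new_tokens, skip_special_tokens=True)[0]
print(response)
```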


## Limitations
A few issues remain in the model trained on the current base model and data:

1. The model may produce factual errors when asked to follow fact-related instructions.

2. It occasionally generates harmful responses, since it still struggles to identify potentially harmful instructions.

3. Its reasoning and coding abilities need improvement.

Since the model still has these limitations, we require that developers use the open-sourced code, data, model, and any other artifacts generated by this project for research purposes only. Commercial use and other potentially harmful use cases are not allowed.

## Citation

Please cite our paper and GitHub repository when using our code, data, or model.

```bibtex
@misc{BELLE,
  author = {BELLEGroup},
  title = {BELLE: Be Everyone's Large Language model Engine},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/LianjiaTech/BELLE}},
}
```
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BELLE-2__BELLE-Llama2-13B-chat-0.4M).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 53.77 |
| ARC (25-shot)       | 60.67 |
| HellaSwag (10-shot) | 82.31 |
| MMLU (5-shot)       | 55.94 |
| TruthfulQA (0-shot) | 50.85 |
| Winogrande (5-shot) | 75.53 |
| GSM8K (5-shot)      | 14.4  |
| DROP (3-shot)       | 36.7  |