Camille7777 committed · Commit 930487b · 1 Parent(s): ceb0828

Update README.md

README.md CHANGED
@@ -58,12 +58,13 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 
 model = AutoModelForCausalLM.from_pretrained("hpcai-tech/Colossal-LLaMA-2-7b-base", device_map="auto", trust_remote_code=True)
 tokenizer = AutoTokenizer.from_pretrained("hpcai-tech/Colossal-LLaMA-2-7b-base", trust_remote_code=True)
-input = "
+input = "明月松间照,\n\n->\n\n"
 inputs = tokenizer(input, return_tensors='pt')
 inputs = inputs.to('cuda:0')
 pred = model.generate(**inputs,
-                      max_new_tokens=
+                      max_new_tokens=512,
                       do_sample=True,
+                      temperature=0.3,
                       top_k=50,
                       top_p=0.95,
                       num_return_sequences=1)
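For reference, the updated snippet assembles into the following end-to-end inference script. This is a minimal sketch built only from the lines shown in this diff; it assumes a CUDA device is available and that `accelerate` is installed so that `device_map="auto"` can place the weights.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(
    "hpcai-tech/Colossal-LLaMA-2-7b-base",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "hpcai-tech/Colossal-LLaMA-2-7b-base",
    trust_remote_code=True,
)

# Prompt from the README example: a line of classical Chinese poetry to be continued.
input = "明月松间照,\n\n->\n\n"
inputs = tokenizer(input, return_tensors='pt')
inputs = inputs.to('cuda:0')

# Sampling configuration introduced by this commit.
pred = model.generate(**inputs,
                      max_new_tokens=512,
                      do_sample=True,
                      temperature=0.3,
                      top_k=50,
                      top_p=0.95,
                      num_return_sequences=1)

# Strip the prompt before printing the generated continuation.
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)[len(input):])
```

The new `temperature=0.3` keeps sampling close to greedy decoding while `top_k`/`top_p` still allow some variety in the output.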
@@ -73,26 +74,27 @@ print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)[len(input):])
 
 # Performance Evaluation
 ### Performance Evaluation
-We conducted comprehensive evaluation on 4
+We conducted comprehensive evaluation on 4 datasets and compare our Colossal-Llama-2-7b-base model with various models.
 
-
-
-
-
-The generation config for all dataset is greedy search.
-
+- We use 5-shot for MMLU and calculate scores based on the logits of first predicted token.
+- We use 5-shot for CMMLU and calculate scores based on the logits of first predicted token.
+- We use 5-shot for AGIEval and only calculate scores for 4-choice questions using a combination metric of exact match and the logits of first predicted token. If any of the exact match or logits of first predicted token is correct, the model will get the score.
+- We use 0-shot for GAOKAO-Bench and only calculate scores for 4-choice questions based on the logits of first predicted token.
+- The generation config for all dataset is greedy search.
+- We also provided CEval scores from its latest leaderboard or the official repository of the model.
+
+More details about metrics can be found in [Metrics](https://github.com/hpcaitech/ColossalAI/tree/main/applications/ColossalEval#metrics).
 
 | | Backbone | Tokens Consumed | | MMLU | CMMLU | AGIEval | GAOKAO | CEval |
-| :----------------------------: | :--------: | :-------------: | :------------------: | :-----------: | :-----: | :----: | :----: |
-| |
+| :----------------------------: | :--------: | :-------------: | :------------------: | :-----------: | :-----: | :----: | :----: | :----------------------------: |
+| | - | - | | 5-shot | 5-shot | 5-shot | 0-shot | 5-shot |
 | Baichuan-7B | - | 1.2T | | 42.32 (42.30) | 44.53 (44.02) | 38.72 | 36.74 | 42.80 |
-| Baichuan-13B-Base | - | 1.4T | | 50.51 (51.60) | 55.73 (55.30) | 47.20 | 51.41 | 53.60 |
 | Baichuan2-7B-Base | - | 2.6T | | 46.97 (54.16) | 57.67 (57.07) | 45.76 | 52.60 | 54.00 |
-| Baichuan2-13B-Base | - | 2.6T | | 54.84 (59.17) | 62.62 (61.97) | 52.08 | 58.25 | 58.10 |
 | ChatGLM-6B | - | 1.0T | | 39.67 (40.63) | 41.17 (-) | 40.10 | 36.53 | 38.90 |
 | ChatGLM2-6B | - | 1.4T | | 44.74 (45.46) | 49.40 (-) | 46.36 | 45.49 | 51.70 |
-| InternLM-7B | - |
-| Qwen-7B | - | 2.2T | | 54.29 (56.70) | 56.03 (58.80) | 52.47 | 56.42 | 59.60 |
+| InternLM-7B | - | - | | 46.70 (51.00) | 52.00 (-) | 44.77 | 61.64 | 52.80 |
+| Qwen-7B (original) | - | 2.2T | | 54.29 (56.70) | 56.03 (58.80) | 52.47 | 56.42 | 59.60 |
+| Qwen-7B | - | 2.4T | | 58.33 (58.20) | 62.54 (62.20) | 64.34 | 74.05 | 63.50 |
 | | | | | | | | | |
 | Llama-2-7B | - | 2.0T | | 44.47 (45.30) | 32.97 (-) | 32.60 | 25.46 | - |
 | Linly-AI/Chinese-LLaMA-2-7B-hf | Llama-2-7B | 1.0T | | 37.43 | 29.92 | 32.00 | 27.57 | - |
@@ -101,15 +103,14 @@ The generation config for all dataset is greedy search.
 | TigerResearch/tigerbot-7b-base | Llama-2-7B | 0.3T | | 43.73 | 42.04 | 37.64 | 30.61 | - |
 | LinkSoul/Chinese-Llama-2-7b | Llama-2-7B | - | | 48.41 | 38.31 | 38.45 | 27.72 | - |
 | FlagAlpha/Atom-7B | Llama-2-7B | 0.1T | | 49.96 | 41.10 | 39.83 | 33.00 | - |
-| IDEA-CCNL/Ziya-LLaMA-13B-v1.1 | Llama-13B | 0.11T | | 50.25 | 40.99 | 40.04 | 30.54 | - |
 | | | | | | | | | |
-| **Colossal-LLaMA-2-7b-base** | Llama-2-7B | **0.0085T** | | 53.06 | 49.89 | 51.48 | 58.82 | 50.
+| **Colossal-LLaMA-2-7b-base** | Llama-2-7B | **0.0085T** | | 53.06 | 49.89 | 51.48 | 58.82 | 50.20 |
 
 > The score in parentheses corresponds to the scores in the official repository of the model.
 >
 > We use zero-shot for ChatGLM models.
 >
-> Qwen-7B
+> To evaluate Qwen-7B on dataset MMLU, the prompt would be "xxx Answer:"(remove the space after ":") and we calculate the logits over " A", " B", " C" and " D" for Qwen-7B. Both the original and updated versions of Qwen-7B tend to be much more deterministic than other models. For example, the logits over " A" can be `-inf` and softmax would be exact `0`.
 >
 > For other models and other dataset, we calculate logits over "A", "B", "C" and "D".
 
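As a rough illustration of the "logits of first predicted token" scoring described in the evaluation notes above: for a 4-choice question, the model is run once on the prompt and the answer whose letter receives the highest logit at the next-token position is selected. The sketch below is illustrative only; the function name, prompt handling, and the assumption that each answer letter is a single token are ours, not the actual ColossalEval implementation (see the Metrics link in the diff for details).

```python
import torch

def first_token_choice(model, tokenizer, prompt, choices=("A", "B", "C", "D")):
    """Pick the candidate whose token receives the highest logit at the first predicted position.

    Illustrative sketch of first-token-logit scoring; assumes each candidate string maps
    to a single token, which is not guaranteed for every tokenizer.
    """
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits          # (batch, seq_len, vocab_size)
    next_token_logits = logits[0, -1]            # logits for the first token to be generated
    choice_ids = [tokenizer(c, add_special_tokens=False).input_ids[0] for c in choices]
    scores = next_token_logits[choice_ids]       # logits assigned to each answer letter
    return choices[int(torch.argmax(scores))]
```

Per the notes, MMLU, CMMLU and AGIEval use 5-shot prompts and GAOKAO-Bench uses 0-shot; for AGIEval an answer also counts as correct if the exact-match check succeeds even when the logit comparison does not.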
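The Qwen-7B note changes two details relative to that generic recipe: the MMLU prompt ends with "Answer:" without a trailing space, and the leading space moves onto the candidates, i.e. " A", " B", " C", " D". Reusing the `first_token_choice` sketch above, the variant might look like this; `mmlu_prompt` is a placeholder, not the actual evaluation prompt.

```python
# Illustrative Qwen-7B variant: no space after "Answer:"; the answer candidates carry
# the leading space instead, matching the " A"/" B"/" C"/" D" logits in the note.
mmlu_prompt = "<five-shot MMLU examples and the question>\nAnswer:"  # placeholder text
prediction = first_token_choice(model, tokenizer, mmlu_prompt,
                                choices=(" A", " B", " C", " D"))
print(prediction)
```

As the note says, Qwen-7B is unusually deterministic here: some candidates can receive `-inf` logits, so their softmax probability is exactly 0.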