zhaicunqi committed Commit 9f6e680 · verified · 1 Parent(s): ab1c546

Upload 2 files

Files changed (2):
  1. README.md +535 -0
  2. README_EN.md +528 -0

README.md
<p align="left">
中文&nbsp;|&nbsp;<a href="README_EN.md">English</a>
</p>
<br>

<div align="center">
<h1>
360Zhinao (360智脑)
</h1>
</div>
<div align="center">
🤗 <a href="https://huggingface.co/qihoo360">Hugging Face</a>&nbsp;&nbsp;|&nbsp;&nbsp;
🤖 <a href="https://www.modelscope.cn/profile/qihoo360">ModelScope</a>&nbsp;&nbsp;|&nbsp;&nbsp;
💬 <a href="./assets/WeChat.png">WeChat (微信)</a>
</div>
<br>
<p align="center">
Visit the official 360Zhinao website at <a href="https://ai.360.com">https://ai.360.com</a> to try more powerful features.
</p>

<br>
# Introduction
🎉🎉🎉 We open-source the 360Zhinao model series. This release includes the following models:
- **360Zhinao-7B-Base**
- **360Zhinao-7B-Chat-4K**
- **360Zhinao-7B-Chat-32K**
- **360Zhinao-7B-Chat-360K**

Highlights of the 360Zhinao models:
- **Base model**: trained on a high-quality corpus of 3.4 trillion tokens consisting mainly of Chinese, English and code, it performs competitively on relevant benchmarks against models of the same size.
- **Chat models**: strong chat capabilities, released with three context lengths of 4K, 32K and 360K. To our knowledge, 360K (about 500,000 Chinese characters) is currently the longest context length among Chinese open-source models.

<br>

# News and Updates
- [2024.04.10] We released version 1.0 of 360Zhinao-7B, including the Base model and Chat models with 4K, 32K and 360K context lengths.

<br>
# Contents
- [Download URL](#download-url)
- [Model Evaluation](#model-evaluation)
- [Quickstart](#quickstart)
- [Model Inference](#model-inference)
- [Model Finetuning](#model-finetuning)
- [License](#license)

<br>
# Download URL
See the table below for this release and its download links:
| Size | Model | BF16 | Int4|
|-|-|-|-|
| 7B | 360Zhinao-7B-Base | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Base/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Base">🤗</a> | |
| 7B | 360Zhinao-7B-Chat-4K | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-4K/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-4K">🤗</a> | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-4K-Int4/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-4K-Int4">🤗</a> |
| 7B | 360Zhinao-7B-Chat-32K | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-32K/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-32K">🤗</a> | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-32K-Int4/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-32K-Int4">🤗</a> |
| 7B | 360Zhinao-7B-Chat-360K | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-360K/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-360K">🤗</a> | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-360K-Int4/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-360K-Int4">🤗</a> |

<br>
# Model Evaluation

## Base Model
We evaluated our model on the mainstream OpenCompass benchmark datasets, including C-Eval, AGIEval, MMLU, CMMLU, HellaSwag, MATH, GSM8K, HumanEval, MBPP, BBH and LAMBADA. These benchmarks cover natural language understanding, knowledge, mathematical computation and reasoning, code generation, logical reasoning and more.

| Model | AVG | C-Eval | AGIEval | MMLU | CMMLU | HellaSwag | MATH | GSM8K | HumanEval | MBPP | BBH | LAMBADA |
| ----------------- | ----- | ----- | ----- | ----- | ----- | ----- | ------ | ---- | ------ | ---- | ----- | ----- |
| Baichuan2-7B | 41.49 | 56.3 | 34.6 | 54.7 | 57 | 67 | 5.4 | 24.6 | 17.7 | 24 | 41.8 | 73.3 |
| Baichuan-7B | 31.94 | 44.7 | 24.6 | 41.5 | 44.6 | 68.4 | 2.5 | 9.6 | 9.1 | 6.4 | 32.8 | 67.1 |
| ChatGLM3-6B | 58.67 | 67 | 47.4 | 62.8 | 66.5 | 76.5 | 19.2 | 61 | 44.5 | 57.2 | 66.2 | 77.1 |
| DeepSeek-7B | 39.8 | 45 | 24 | 49.3 | 46.8 | 73.4 | 4.2 | 18.3 | 25 | 36.4 | 42.8 | 72.6 |
| InternLM2-7B | 58.01 | 65.7 | 50.2 | 65.5 | 66.2 | 79.6 | 19.9 | 70.6 | 41.5 | 42.4 | 64.4 | 72.1 |
| InternLM-7B | 39.33 | 53.4 | 36.9 | 51 | 51.8 | 70.6 | 6.3 | 31.2 | 13.4 | 14 | 37 | 67 |
| LLaMA-2-7B | 33.27 | 32.5 | 21.8 | 46.8 | 31.8 | 74 | 3.3 | 16.7 | 12.8 | 14.8 | 38.2 | 73.3 |
| LLaMA-7B | 30.35 | 27.3 | 20.6 | 35.6 | 26.8 | 74.3 | 2.9 | 10 | 12.8 | 16.8 | 33.5 | 73.3 |
| Mistral-7B-v0.1 | 47.67 | 47.4 | 32.8 | 64.1 | 44.7 | 78.9 | 11.3 | 47.5 | 27.4 | 38.6 | 56.7 | 75 |
| MPT-7B | 30.06 | 23.5 | 21.3 | 27.5 | 25.9 | 75 | 2.9 | 9.1 | 17.1 | 22.8 | 35.6 | 70 |
| Qwen1.5-7B | 55.12 | 73.57 | 50.8 | 62.15 | 71.84 | 72.62 | 20.36 | 54.36 | 53.05 | 36.8 | 40.01 | 70.74 |
| Qwen-7B | 49.53 | 63.4 | 45.3 | 59.7 | 62.5 | 75 | 13.3 | 54.1 | 27.4 | 31.4 | 45.2 | 67.5 |
| XVERSE-7B | 34.27 | 61.1 | 39 | 58.4 | 60.8 | 73.7 | 2.2 | 11.7 | 4.9 | 10.2 | 31 | 24 |
| Yi-6B | 47.8 | 73 | 44.3 | 64 | 73.5 | 73.1 | 6.3 | 39.9 | 15.2 | 23.6 | 44.9 | 68 |
| 360Zhinao-7B | 56.15 | 74.11 | 49.49 | 67.44 | 72.38 | 83.05 | 16.38 | 53.83 | 35.98 | 42.4 | 43.95 | 78.59 |

The results above can be viewed or reproduced on the official [OpenCompass leaderboard](https://rank.opencompass.org.cn/leaderboard-llm).

## Chat Models

We evaluated the different model versions on benchmarks covering a variety of context lengths and tasks.

- ### Chat Model Dialogue Evaluation
To evaluate multi-turn dialogue performance, we used the MT-Bench dataset. [MT-Bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) is a benchmark of 80 high-quality multi-turn questions designed to test multi-turn dialogue and instruction-following ability. It defines 8 common categories of user prompts: writing, role-play, extraction, reasoning, math, coding, knowledge I (STEM) and knowledge II (humanities/social sciences). Each category contains 10 manually designed questions, each with two turns, and answers are scored automatically by GPT-4.
| Model | turn1 | turn2 | average |
| -------------------- | --------- | -------- | --------- |
| Qwen7b-chat | 6.5725 | 5.4000 | 5.9862 |
| Baichuan2-7B-Chat | 6.4562 | 5.5562 | 6.0062 |
| InternLM-7B-Chat | 5.5625 | 4.0696 | 4.8207 |
| Llama2-7B-Chat | 0 | 0 | 0 |
| 360Zhinao-7B-Chat | 6.5062 | 5.8762 | 6.1962 |

- ### Chat Model Long-Context Evaluation

To evaluate long-sequence performance, we used the LongBench dataset. [LongBench](https://github.com/THUDM/LongBench) is the first multi-task, bilingual (Chinese-English) benchmark for the long-context understanding of large language models. It consists of six major categories and twenty-one tasks; we evaluated on the Chinese tasks most relevant to long-text applications: single-document QA, multi-document QA, summarization, few-shot learning and code completion.

| Model | Avg | Single-Doc QA | Multi-Doc QA | Summarization | Few-shot Learning | Code Completion |
| -------------------- | --------- | -------- | --------- | --------- | ------------ | --------- |
| GPT-3.5-Turbo-16k | 37.84 | 61.2 | 28.7 | 16 | 29.2 | 54.1 |
| ChatGLM2-6B-32k | 37.16 | 51.6 | 37.6 | 16.2 | 27.7 | 52.7 |
| ChatGLM3-6B-32k | 44.62 | **62.3** | 44.8 | 17.8 | 42 | 56.2 |
| InternLM2-Chat-7B | 42.20 | 56.65 | 29.15 | **17.99** | 43.5 | **63.72** |
| Qwen1.5-Chat-7B | 36.75 | 52.85 | 30.08 | 14.28 | 32 | 54.55 |
| 360Zhinao-7B-Chat-32k | **45.18** | 57.18 | **48.06** | 15.03 | **44** | 61.64 |
| 360Zhinao-7B-Chat-360k | 39.25 | 52.82 | 48.01 | 14.4 | 41.25 | 39.75 |

- ### 360Zhinao-7B-Chat-360K "Needle in a Haystack" Test

The needle-in-a-haystack test ([NeedleInAHaystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack/blob/main/LLMNeedleHaystackTester.py)) inserts a key piece of information at different positions in a long text and then asks a question about it, testing a model's long-context capability.

360Zhinao-7B-Chat-360K achieves over 98% accuracy on both the English and Chinese needle-in-a-haystack tasks.

- English version (identical to [NeedleInAHaystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack/blob/main/LLMNeedleHaystackTester.py))

<p align="center">
<img src="assets/360Zhinao-7B-Chat-360K.en_score.png" width="600" />
</p>

**needle**: The best thing to do in San Francisco is eat a sandwich and sit in Dolores Park on a sunny day.

**query**: What is the best thing to do in San Francisco?


- Chinese version

<p align="center">
<img src="assets/360Zhinao-7B-Chat-360K.zh_score.png" width="600" />
</p>

We constructed the Chinese version following the [SuperCLUE-200K benchmark](https://mp.weixin.qq.com/s/QgoRf2LB-7vc3vTFOHJkpw):

**haystack**: Chinese novels.

**needle**: (in Chinese) 王莽是一名勤奋的店员,他每天凌晨就起床,赶在第一缕阳光照亮大地之前到达店铺,为即将开始的一天做准备。他清扫店铺,整理货架,为顾客提供方便。他对五金的种类和用途了如指掌,无论顾客需要什么,他总能准确地找到。\n然而,他的老板刘秀却总是对他吹毛求疵。刘秀是个挑剔的人,他总能在王莽的工作中找出一些小错误,然后以此为由扣他的工资。他对王莽的工作要求非常严格,甚至有些过分。即使王莽做得再好,刘秀也总能找出一些小问题,让王莽感到非常沮丧。\n王莽虽然对此感到不满,但他并没有放弃。他知道,只有通过自己的努力,才能获得更好的生活。他坚持每天早起,尽管他知道那天可能会再次被刘秀扣工资。他始终保持微笑,尽管他知道刘秀可能会再次对他挑剔。

**query**: (in Chinese) 王莽在谁的手下工作?

<br>

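The needle-in-a-haystack procedure described above is easy to sketch. The snippet below is a minimal illustration, not the evaluation harness used to produce these scores: it pads a filler passage to a target length, plants the needle at a chosen depth, and emits the final prompt. All names and parameters here are made up for the example.

```python
def build_haystack_prompt(filler: str, needle: str, question: str,
                          total_chars: int, depth: float) -> str:
    """Build a long-context test prompt: repeat `filler` to roughly
    `total_chars`, insert `needle` at fraction `depth` of the text
    (0.0 = start, 1.0 = end), then append the question."""
    haystack = (filler * (total_chars // len(filler) + 1))[:total_chars]
    cut = int(total_chars * depth)
    context = haystack[:cut] + "\n" + needle + "\n" + haystack[cut:]
    return context + "\n\nQuestion: " + question + "\nAnswer:"


needle = ("The best thing to do in San Francisco is eat a sandwich "
          "and sit in Dolores Park on a sunny day.")
prompt = build_haystack_prompt(
    filler="The quick brown fox jumps over the lazy dog. ",
    needle=needle,
    question="What is the best thing to do in San Francisco?",
    total_chars=2000,
    depth=0.5,
)
# Report prompt length and the relative position where the needle landed.
print(len(prompt), round(prompt.index(needle) / len(prompt), 2))
```

A full evaluation sweeps `total_chars` and `depth` over a grid and scores whether the model's answer recovers the needle, which is what the heatmaps above visualize.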
# Quickstart
Simple examples showing how to quickly use 360Zhinao-7B-Base and 360Zhinao-7B-Chat with 🤖 ModelScope and 🤗 Transformers.

## Dependency Installation
- python 3.8 and above
- pytorch 2.0 and above
- transformers 4.37.2 and above
- CUDA 11.4 and above are recommended.

```shell
pip install -r requirements.txt
```
We recommend installing flash-attention (flash attention 2 is currently supported) to improve throughput and reduce memory usage. (flash-attention is optional; the project runs normally without it.)

>flash-attn >= 2.3.6
```shell
FLASH_ATTENTION_FORCE_BUILD=TRUE pip install flash-attn==2.3.6
```

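Since flash-attention is optional, it can be useful to check at runtime which attention path is available. The helper below is a small sketch, not part of this repository; it only probes whether the package is importable:

```python
import importlib.util


def attention_backend() -> str:
    """Return "flash_attn" if the optional flash-attn package is importable,
    otherwise "default" (the model falls back to standard attention)."""
    return "flash_attn" if importlib.util.find_spec("flash_attn") else "default"


print("attention backend:", attention_backend())
```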
## 🤗 Transformers
### Base Model Inference

This example demonstrates inference with 360Zhinao-7B-Base using transformers.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.generation import GenerationConfig

MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Base"

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME_OR_PATH,
    device_map="auto",
    trust_remote_code=True)

generation_config = GenerationConfig.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

inputs = tokenizer('中国二十四节气\n1. 立春\n2. 雨水\n3. 惊蛰\n4. 春分\n5. 清明\n', return_tensors='pt')
inputs = inputs.to(model.device)

pred = model.generate(input_ids=inputs["input_ids"], generation_config=generation_config)
print("outputs:\n", tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```

### Chat Model Inference

This example demonstrates inference with 360Zhinao-7B-Chat-4K using transformers.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.generation import GenerationConfig

MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Chat-4K"

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME_OR_PATH,
    device_map="auto",
    trust_remote_code=True)

generation_config = GenerationConfig.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

messages = []
# round 1
messages.append({"role": "user", "content": "介绍一下刘德华"})
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config)
messages.append({"role": "assistant", "content": response})
print(messages)

# round 2
messages.append({"role": "user", "content": "他有什么代表作?"})
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config)
messages.append({"role": "assistant", "content": response})
print(messages)
```

## 🤖 ModelScope
### Base Model Inference

This example demonstrates inference with 360Zhinao-7B-Base using ModelScope.

```python
from modelscope import AutoModelForCausalLM, AutoTokenizer
from modelscope import GenerationConfig

MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Base"

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME_OR_PATH,
    device_map="auto",
    trust_remote_code=True)

generation_config = GenerationConfig.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

inputs = tokenizer('中国二十四节气\n1. 立春\n2. 雨水\n3. 惊蛰\n4. 春分\n5. 清明\n', return_tensors='pt')
inputs = inputs.to(model.device)

pred = model.generate(input_ids=inputs["input_ids"], generation_config=generation_config)
print("outputs:\n", tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```

### Chat Model Inference

This example demonstrates inference with 360Zhinao-7B-Chat-4K using ModelScope.
```python
from modelscope import AutoModelForCausalLM, AutoTokenizer
from modelscope import GenerationConfig

MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Chat-4K"

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME_OR_PATH,
    device_map="auto",
    trust_remote_code=True)

generation_config = GenerationConfig.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

messages = []
# round 1
messages.append({"role": "user", "content": "介绍一下刘德华"})
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config)
messages.append({"role": "assistant", "content": response})
print(messages)

# round 2
messages.append({"role": "user", "content": "他有什么代表作?"})
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config)
messages.append({"role": "assistant", "content": response})
print(messages)
```

## CLI Demo
You can try the model quickly in an interactive terminal session:
```shell
python cli_demo.py
```
<p align="center">
<img src="assets/cli_demo.gif" width="600" />
</p>

## Web Demo
You can also try it through an interactive web page:
```shell
streamlit run web_demo.py
```
<p align="center">
<img src="assets/web_demo.gif" width="600" />
</p>

## API Demo
Launch the server:
```shell
python openai_api.py
```

Example request:
```shell
curl --location --request POST 'http://localhost:8360/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data-raw '{
    "max_new_tokens": 200,
    "do_sample": true,
    "top_k": 0,
    "top_p": 0.8,
    "temperature": 1.0,
    "repetition_penalty": 1.0,
    "messages": [
        {
            "role": "user",
            "content": "你叫什么名字"
        }
    ]
}'
```

<br>

# Model Inference
## Model Deployment
### vLLM Installation
For deployment and accelerated inference, we recommend `vLLM==0.3.3`.

If you use **CUDA 12.1 and PyTorch 2.1**, you can install vLLM directly with:
```shell
pip install vllm==0.3.3
```

Otherwise, refer to the official vLLM [installation instructions](https://docs.vllm.ai/en/latest/getting_started/installation.html).

>After installation, a few extra steps are required:
1. Copy vllm/zhinao.py into the vllm/model_executor/models directory of your vLLM installation.
2. Copy vllm/serving_chat.py into the vllm/entrypoints/openai directory of your vLLM installation.
3. Add one line to vllm/model_executor/models/\_\_init\_\_.py:

```python
"ZhinaoForCausalLM": ("zhinao", "ZhinaoForCausalLM"),
```

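The file-copy steps above can be scripted. The helper below is purely illustrative (it is not shipped with the repo, and the install path is a placeholder): it maps each patch file in this repository to its destination inside an installed vLLM tree.

```python
import os


def integration_targets(vllm_dir: str) -> dict:
    """Map this repo's vLLM patch files (steps 1 and 2 above) to their
    destinations inside an installed vLLM tree rooted at `vllm_dir`."""
    return {
        "vllm/zhinao.py":
            os.path.join(vllm_dir, "model_executor", "models", "zhinao.py"),
        "vllm/serving_chat.py":
            os.path.join(vllm_dir, "entrypoints", "openai", "serving_chat.py"),
    }


# Resolve vllm_dir in your own environment, e.g.:
#   python -c "import vllm, os; print(os.path.dirname(vllm.__file__))"
targets = integration_targets("/path/to/site-packages/vllm")
for src, dst in targets.items():
    print(f"cp {src} {dst}")
```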
### Starting the vLLM Service

Start the service:
```shell
python -m vllm.entrypoints.openai.api_server \
    --served-model-name 360Zhinao-7B-Chat-4K \
    --model qihoo360/360Zhinao-7B-Chat-4K \
    --trust-remote-code \
    --tensor-parallel-size 1 \
    --max-model-len 4096 \
    --host 0.0.0.0 \
    --port 8360
```

Request the service with curl:
```shell
curl http://localhost:8360/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
    "model": "360Zhinao-7B-Chat-4K",
    "max_tokens": 200,
    "top_k": -1,
    "top_p": 0.8,
    "temperature": 1.0,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "你好"}
    ],
    "stop": [
        "<eod>",
        "<|im_end|>",
        "<|im_start|>"
    ]
}'
```
Request the service with Python:
```python
from openai import OpenAI
# Set OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8360/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

chat_response = client.chat.completions.create(
    model="360Zhinao-7B-Chat-4K",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "你好"},
    ],
    stop=[
        "<eod>",
        "<|im_end|>",
        "<|im_start|>"
    ],
    presence_penalty=0.0,
    frequency_penalty=0.0
)
print("Chat response:", chat_response)
```

> Note: to penalize repetition, we recommend the *presence_penalty* and *frequency_penalty* parameters.

<br>

# Model Finetuning
## Training Data

We provide sample finetuning data at data/test.json, which consists of 10,000 examples sampled from [multiturn_chat_0.8M](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M) and converted into the format below.

Data format:
```json
[
    {
        "id": 1,
        "conversations": [
            {
                "from": "system",
                "value": "You are a helpful assistant."
            },
            {
                "from": "user",
                "value": "您好啊"
            },
            {
                "from": "assistant",
                "value": "你好!我今天能为您做些什么?有什么问题或需要帮助吗? 我在这里为您提供服务。"
            }
        ]
    }
]
```
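A record in this format can be checked with a few assertions before launching a job. The validator below is a hypothetical helper for illustration (the repository's finetune.py does its own parsing):

```python
VALID_ROLES = {"system", "user", "assistant"}


def validate_record(record: dict) -> None:
    """Assert that one training record matches the format shown above."""
    assert isinstance(record.get("id"), int), "each record needs an integer id"
    turns = record.get("conversations")
    assert isinstance(turns, list) and turns, "conversations must be a non-empty list"
    for turn in turns:
        assert turn.get("from") in VALID_ROLES, f"unknown role: {turn.get('from')}"
        assert isinstance(turn.get("value"), str) and turn["value"], "missing value"


sample = {
    "id": 1,
    "conversations": [
        {"from": "system", "value": "You are a helpful assistant."},
        {"from": "user", "value": "您好啊"},
        {"from": "assistant", "value": "你好!我今天能为您做些什么?"},
    ],
}
validate_record(sample)  # passes silently for a well-formed record
```

Running it over every record of the training JSON catches formatting mistakes before any GPU time is spent.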

## Finetuning
The training script is as follows:
```shell
set -x

HOSTFILE=hostfile
DS_CONFIG=./finetune/ds_config_zero2.json

# PARAMS
LR=5e-6
EPOCHS=3
MAX_LEN=4096
BATCH_SIZE=4
NUM_NODES=1
NUM_GPUS=8
MASTER_PORT=29500

IS_CONCAT=False  # whether to concatenate training samples up to MAX_LEN

DATA_PATH="./data/training_data_sample.json"
MODEL_PATH="qihoo360/360Zhinao-7B-Base"
OUTPUT_DIR="./outputs/"

deepspeed --hostfile ${HOSTFILE} \
    --master_port ${MASTER_PORT} \
    --num_nodes ${NUM_NODES} \
    --num_gpus ${NUM_GPUS} \
    finetune.py \
    --report_to "tensorboard" \
    --data_path ${DATA_PATH} \
    --model_name_or_path ${MODEL_PATH} \
    --output_dir ${OUTPUT_DIR} \
    --model_max_length ${MAX_LEN} \
    --num_train_epochs ${EPOCHS} \
    --per_device_train_batch_size ${BATCH_SIZE} \
    --gradient_accumulation_steps 1 \
    --save_strategy steps \
    --save_steps 200 \
    --learning_rate ${LR} \
    --lr_scheduler_type cosine \
    --adam_beta1 0.9 \
    --adam_beta2 0.95 \
    --adam_epsilon 1e-8 \
    --max_grad_norm 1.0 \
    --weight_decay 0.1 \
    --warmup_ratio 0.01 \
    --gradient_checkpointing True \
    --bf16 True \
    --tf32 True \
    --deepspeed ${DS_CONFIG} \
    --is_concat ${IS_CONCAT} \
    --logging_steps 1 \
    --log_on_each_node False
```
```shell
bash finetune/ds_finetune.sh
```
- Configure hostfile to switch between single-node and multi-node training.
- Configure ds_config to choose ZeRO stage 2 or ZeRO stage 3.
- Mixed-precision training can be enabled via fp16 or bf16; we recommend bf16 for consistency with the pretrained model.
- The is_concat parameter controls whether training samples are concatenated; for large datasets, concatenation improves training efficiency.

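The effect of is_concat (often called sequence packing) can be sketched with toy token lists. This is a simplified illustration of the idea, not the implementation in finetune.py:

```python
def pack_sequences(samples, max_len):
    """Greedily concatenate tokenized samples into chunks of at most
    max_len tokens, so short samples share one training sequence
    instead of each being padded to max_len."""
    packed, current = [], []
    for tokens in samples:
        if current and len(current) + len(tokens) > max_len:
            packed.append(current)
            current = []
        current = current + tokens[:max_len]  # truncate any over-long sample
    if current:
        packed.append(current)
    return packed


chunks = pack_sequences([[1] * 30, [2] * 50, [3] * 40], max_len=100)
print([len(c) for c in chunks])  # → [80, 40]
```

With packing on, the number of optimizer steps per epoch drops roughly in proportion to how much padding the raw samples would otherwise waste.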
<br>

# License

The source code of this repository is licensed under the Apache 2.0 open-source license.

The 360Zhinao open-source models support commercial use. If you intend to use these models or their derivatives commercially, please contact us by email ([email protected]) to apply. See the [360Zhinao Open-Source Model License](./360智脑开源模型许可证.txt) for the specific license agreement.
README_EN.md ADDED
@@ -0,0 +1,528 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ <p align="left">
2
+ <a href="./README.md">中文</a> | &nbsp English</a>&nbsp
3
+ </p>
4
+ <br>
5
+
6
+ <div align="center">
7
+ <h1>
8
+ 360Zhinao (360智脑)
9
+ </h1>
10
+ </div>
11
+ <div align="center">
12
+ 🤗 <a href="https://huggingface.co/qihoo360">Hugging Face</a>&nbsp&nbsp | &nbsp&nbsp
13
+ 🤖 <a href="https://www.modelscope.cn/profile/qihoo360">ModelScope</a>&nbsp&nbsp | &nbsp&nbsp
14
+ 💬 <a href="./assets/WeChat.png">WeChat (微信)</a>&nbsp&nbsp
15
+ </div>
16
+ <br>
17
+ <p align="center">
18
+ Feel free to visit 360Zhinao's official website<a href="https://ai.360.com"> https://ai.360.com</a> for more experience.
19
+ </p>
20
+
21
+ # Models Introduction
22
+ 🎉🎉🎉We open-source the 360Zhinao model series:
23
+ - **360Zhinao-7B-Base**
24
+ - **360Zhinao-7B-Chat-4K**
25
+ - **360Zhinao-7B-Chat-32K**
26
+ - **360Zhinao-7B-Chat-360K**
27
+
28
+
29
+ The characteristics of the 360Zhinao open-source project are:
30
+ - **Base Model:** Leveraging a high-quality corpus of 3.4 trillion Tokens, primarily in Chinese, English and code, we achieved competitive performance on relevant benchmark evaluations of the same scale.
31
+ - **Chat Model:** Powerful chat capabilities and three different sequence lengths of 4k, 32k and 360k. 360k (about 500k Chinese characters) is the longest sequcence length among open-sourced Chinese models until now (Apr. 10, 2024).
32
+
33
+ # News and Updates
34
+ - 2024.04.10 We release **360Zhinao-7B** 1.0 version, include the base model and three chat model with sequence length of 4k, 32k, 360k.
35
+
36
+ # Table of contents
37
+ - [Download URL](#Download-URL)
38
+ - [Model Evaluation](#Model-Evaluation)
39
+ - [Quickstart](#Quickstart)
40
+ - [Model Inference](#Model-Inference)
41
+ - [Model Finetune](#Model-Finetune)
42
+ - [License](#License)
43
+
44
+
45
+ # Download URL
46
+ See the following table for this release and download links:
47
+ | Size | Model | BF16 | Int4|
48
+ |-|-|-|-|
49
+ | 7B | 360Zhinao-7B-Base | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Base/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Base">🤗</a> | |
50
+ | 7B | 360Zhinao-7B-Chat-4K | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-4K/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-4K">🤗</a> | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-4K-Int4/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-4K-Int4">🤗</a> |
51
+ | 7B | 360Zhinao-7B-Chat-32K | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-32K/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-32K">🤗</a> | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-32K-Int4/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-32K-Int4">🤗</a> |
52
+ | 7B | 360Zhinao-7B-Chat-360K | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-360K/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-360K">🤗</a> | <a href="https://www.modelscope.cn/models/qihoo360/360Zhinao-7B-Chat-360K-Int4/summary">🤖</a> <a href="https://huggingface.co/qihoo360/360Zhinao-7B-Chat-360K-Int4">🤗</a> |
53
+
54
+ # Model Evaluation
55
+ ## Base Model
56
+ We validate the performance of our model on the mainstream OpenCompass evaluation datasets, including C-Eval, AGIEval, MMLU, CMMLU, HellaSwag, MATH, GSM8K, HumanEval, MBPP, BBH, LAMBADA. The competencies examined include natural language understanding, knowledge, mathematical computation and reasoning, code generation, logical reasoning, etc.
57
+
58
+ | Model | AVG | C-Eval | AGIEval | MMLU | CMMLU | HellaSwag | MATH | GSM8K | HumanEval | MBPP | BBH | LAMBADA |
59
+ | ----------------- | ----- | ----- | ----- | ----- | ----- | ----- | ------ | ---- | ------ | ---- | ----- | ----- |
60
+ | Baichuan2-7B | 41.49 | 56.3 | 34.6 | 54.7 | 57 | 67 | 5.4 | 24.6 | 17.7 | 24 | 41.8 | 73.3 |
61
+ | Baichuan-7B | 31.94 | 44.7 | 24.6 | 41.5 | 44.6 | 68.4 | 2.5 | 9.6 | 9.1 | 6.4 | 32.8 | 67.1 |
62
+ | ChatGLM3-6B | 58.67 | 67 | 47.4 | 62.8 | 66.5 | 76.5 | 19.2 | 61 | 44.5 | 57.2 | 66.2 | 77.1 |
63
+ | DeepSeek-7B | 39.8 | 45 | 24 | 49.3 | 46.8 | 73.4 | 4.2 | 18.3 | 25 | 36.4 | 42.8 | 72.6 |
64
+ | InternLM2-7B | 58.01 | 65.7 | 50.2 | 65.5 | 66.2 | 79.6 | 19.9 | 70.6 | 41.5 | 42.4 | 64.4 | 72.1 |
65
+ | InternLM-7B | 39.33 | 53.4 | 36.9 | 51 | 51.8 | 70.6 | 6.3 | 31.2 | 13.4 | 14 | 37 | 67 |
66
+ | LLaMA-2-7B | 33.27 | 32.5 | 21.8 | 46.8 | 31.8 | 74 | 3.3 | 16.7 | 12.8 | 14.8 | 38.2 | 73.3 |
67
+ | LLaMA-7B | 30.35 | 27.3 | 20.6 | 35.6 | 26.8 | 74.3 | 2.9 | 10 | 12.8 | 16.8 | 33.5 | 73.3 |
68
+ | Mistral-7B-v0.1 | 47.67 | 47.4 | 32.8 | 64.1 | 44.7 | 78.9 | 11.3 | 47.5 | 27.4 | 38.6 | 56.7 | 75 |
69
+ | MPT-7B | 30.06 | 23.5 | 21.3 | 27.5 | 25.9 | 75 | 2.9 | 9.1 | 17.1 | 22.8 | 35.6 | 70 |
70
+ | Qwen1.5-7B | 55.12 | 73.57 | 50.8 | 62.15 | 71.84 | 72.62 | 20.36 | 54.36 | 53.05 | 36.8 | 40.01 | 70.74 |
71
+ | Qwen-7B | 49.53 | 63.4 | 45.3 | 59.7 | 62.5 | 75 | 13.3 | 54.1 | 27.4 | 31.4 | 45.2 | 67.5 |
72
+ | XVERSE-7B | 34.27 | 61.1 | 39 | 58.4 | 60.8 | 73.7 | 2.2 | 11.7 | 4.9 | 10.2 | 31 | 24 |
73
+ | Yi-6B | 47.8 | 73 | 44.3 | 64 | 73.5 | 73.1 | 6.3 | 39.9 | 15.2 | 23.6 | 44.9 | 68 |
74
+ | 360Zhinao-7B | 56.15 | 74.11 | 49.49 | 67.44 | 72.38 | 83.05 | 16.38 | 53.83 | 35.98 | 42.4 | 43.95 | 78.59 |
75
+
76
+ The above results could be viewed or reproduced on [Opencompass](https://rank.opencompass.org.cn/leaderboard-llm).
77
+
78
+ ## Chat Models
79
+
80
+ We evaluated our models across various lengths and benchmarks.
81
+
82
+ - ### Alignment Benchmarks
83
+ In order to verify the multi-round dialogue effect of the model, we used the MT-Bench dataset here. [MT-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) is a benchmark composed of 80 high-quality multi-round dialogue questions, aimed at testing the ability of multi-round dialogue and instruction following. In the construction of the dataset, 8 common user prompt categories were determined: writing, role-playing, extraction, reasoning, mathematics, coding, knowledge I (STEM), and knowledge II (humanities/social sciences). For each category, 10 multi-round questions were manually designed, each round has 2 questions, and the question scoring uses the GPT4 automatic scoring mechanism.
84
+ | Model | turn1 | turn2 | average |
85
+ | -------------------- | --------- | -------- | --------- |
86
+ | Qwen7b-chat | 6.5725 | 5.4000 | 5.9862 |
87
+ | Baichuan2-7B-Chat | 6.4562 | 5.5562 | 6.0062 |
88
+ | InternLM-7B-Chat | 5.5625 | 4.0696 | 4.8207 |
89
+ | Llama2-7B-Chat | 0 | 0 | 0 |
90
+ | 360Zhinao-7B-Chat | 6.5062 | 5.8762 | 6.1962 |
91
+
92
+
93
+ - ### Long Context Benchmarks
94
+
95
+ We evaluated our 32k and 360k models on [LongBench](https://github.com/THUDM/LongBench), a multi-task bilingual benchmark for long contexts. We report results on Chinese tasks that are the most relevant to downstream applications: Single/Multi-Doc QA, Summarization, Few-Shot Learning and Code Completion.
96
+
97
+ | Model | Avg | 单文档QA | 多文档QA | 摘要 | Few-shot学习 | 代码补全 |
98
+ | -------------------- | --------- | -------- | --------- | --------- | ------------ | --------- |
99
+ | GPT-3.5-Turbo-16k | 37.84 | 61.2 | 28.7 | 16 | 29.2 | 54.1 |
100
+ | ChatGLM2-6B-32k | 37.16 | 51.6 | 37.6 | 16.2 | 27.7 | 52.7 |
101
+ | ChatGLM3-6B-32k | 44.62 | **62.3** | 44.8 | 17.8 | 42 | 56.2 |
102
+ | InternLM2-Chat-7B | 42.20 | 56.65 | 29.15 | **17.99** | 43.5 | **63.72** |
103
+ | Qwen1.5-Chat-7B | 36.75 | 52.85 | 30.08 | 14.28 | 32 | 54.55 |
104
+ | 360Zhinao-7B-Chat-32k | **45.18** | 57.18 | **48.06** | 15.03 | **44** | 61.64 |
105
+ | 360Zhinao-7B-Chat-360k | 39.25 | 52.82 | 48.01 | 14.4 | 41.25 | 39.75 |
106
+
107
+ - ### 360Zhinao-7B-Chat-360k on "NeedleInAHaystack"
108
+
109
+ [NeedleInAHaystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack/blob/main/LLMNeedleHaystackTester.py) places one small piece of information in different positions of a long text and queries this information as a test of LLM's long-context capabilities.
110
+
111
+ 360Zhinao-7B-Chat-360k could achieve over 98% accuracy on both English and Chinese NeedleInAHaystack tasks.
112
+
113
+ - English version(same as [NeedleInAHaystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack/blob/main/LLMNeedleHaystackTester.py))
114
+
115
+ <p align="center">
116
+ <img src="assets/360Zhinao-7B-Chat-360K.en_score.png" width="600" />
117
+ <p>
118
+
119
+ **needle**:The best thing to do in San Francisco is eat a sandwich and sit in Dolores Park on a sunny day.
120
+
121
+ **query**:What is the best thing to do in San Francisco?
122
+
123
+
124
+ - Chinese version
125
+
126
+ <p align="center">
127
+ <img src="assets/360Zhinao-7B-Chat-360K.zh_score.png" width="600" />
128
+ <p>
129
+
130
+ We constructed the Chinese version following the [SuperCLUE-200K benchmark](https://mp.weixin.qq.com/s/QgoRf2LB-7vc3vTFOHJkpw):
131
+
132
+ **haystack**:Chinese novels.
133
+
134
+ **needle**:(in Chinese) 王莽是一名勤奋的店员,他每天凌晨就起床,赶在第一缕阳光照亮大地之前到达店铺,为即将开始的一天做准备。他清扫店铺,整理货架,为顾客提供方便。他对五金的种类和用途了如指掌,无论顾客需要什么,他总能准确地找到。\n然而,他的老板刘秀却总是对他吹毛求疵。刘秀是个挑剔的人,他总能在王莽的工作中找出一些小错误,然后以此为由扣他的工资。他对王莽的工作要求非常严格,甚至有些过分。即使王莽做得再好,刘秀也总能找出一些小问题,让王莽感到非常沮丧。\n王莽虽然对此感到不满��但他并没有放弃。他知道,只有通过自己的努力,才能获得更好的生活。他坚持每天早起,尽管他知道那天可能会再次被刘秀扣工资。他始终保持微笑,尽管他知道刘秀可能会再次对他挑剔。
135
+
136
+ **query**:(in Chinese) 王莽在谁的手下工作?
137
+
138
+
139
+ # Quickstart
140
+ Simple examples to illustrate how to use 360Zhinao-7B-Base and 360Zhinao-7B-Chat quickly using 🤖 ModelScope and 🤗 Transformers
141
+
142
+ ## Dependency Installation
143
+ - python 3.8 and above
144
+ - pytorch 2.0 and above
145
+ - transformers 4.37.2 and above
146
+ - CUDA 11.4 and above are recommended.
147
+
148
+ ```shell
149
+ pip install -r requirements.txt
150
+ ```
151
+ We recommend installing Flash-Attention (which currently supports flash attention 2) to increase your performance and reduce your memory footprint. (flash-attention is optional and will work without installation)
152
+
153
+ >flash-attn >= 2.3.6
154
+ ```shell
155
+ FLASH_ATTENTION_FORCE_BUILD=TRUE pip install flash-attn==2.3.6
156
+ ```
157
+
## 🤗 Transformers
### Demonstration of Base Model Inference

This code demonstrates fast inference with the 360Zhinao-7B-Base model using transformers.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.generation import GenerationConfig

MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Base"

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME_OR_PATH,
    device_map="auto",
    trust_remote_code=True)

generation_config = GenerationConfig.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

inputs = tokenizer('中国二十四节气\n1. 立春\n2. 雨水\n3. 惊蛰\n4. 春分\n5. 清明\n', return_tensors='pt')
inputs = inputs.to(model.device)

pred = model.generate(input_ids=inputs["input_ids"], generation_config=generation_config)
print("outputs:\n", tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```

### Demonstration of Chat Model Inference

This code demonstrates fast inference with the 360Zhinao-7B-Chat-4K model using transformers.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.generation import GenerationConfig

MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Chat-4K"

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME_OR_PATH,
    device_map="auto",
    trust_remote_code=True)

generation_config = GenerationConfig.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

messages = []

# round-1
messages.append({"role": "user", "content": "介绍一下刘德华"})
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config)
messages.append({"role": "assistant", "content": response})
print(messages)

# round-2
messages.append({"role": "user", "content": "他有什么代表作?"})
response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config)
messages.append({"role": "assistant", "content": response})
print(messages)
```

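The stop strings used elsewhere in this README (`<|im_start|>`, `<|im_end|>`) suggest a ChatML-style template. Purely as an illustration of how a `messages` list might be flattened into a single prompt (the exact template applied by `model.chat` lives in the model's remote code, so treat this sketch as an assumption):

```python
def render_chatml(messages, system="You are a helpful assistant."):
    """Flatten chat messages into a ChatML-style prompt string (illustrative only)."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # generation continues from here
    return "\n".join(parts)

prompt = render_chatml([{"role": "user", "content": "介绍一下刘德华"}])
```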
## 🤖 ModelScope
### Demonstration of Base Model Inference

This code demonstrates using ModelScope to quickly use the 360Zhinao-7B-Base model for inference.

```python
from modelscope import AutoModelForCausalLM, AutoTokenizer
from modelscope import GenerationConfig

MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Base"

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME_OR_PATH,
    device_map="auto",
    trust_remote_code=True)

generation_config = GenerationConfig.from_pretrained(
    MODEL_NAME_OR_PATH,
    trust_remote_code=True)

inputs = tokenizer('中国二十四节气\n1. 立春\n2. 雨水\n3. 惊蛰\n4. 春分\n5. 清明\n', return_tensors='pt')
inputs = inputs.to(model.device)

pred = model.generate(input_ids=inputs["input_ids"], generation_config=generation_config)
print("outputs:\n", tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```

253
+
254
+ ### Demonstration of Base Model Inference
255
+
256
+ This code demonstrates using ModelScope to quickly use the 360Zhinao-7B-Chat-4K model for inference.
257
+
258
+ ```python
259
+ from modelscope import AutoModelForCausalLM, AutoTokenizer
260
+ from modelscope import GenerationConfig
261
+
262
+ MODEL_NAME_OR_PATH = "qihoo360/360Zhinao-7B-Chat-4K"
263
+
264
+ tokenizer = AutoTokenizer.from_pretrained(
265
+ MODEL_NAME_OR_PATH,
266
+ trust_remote_code=True)
267
+
268
+ model = AutoModelForCausalLM.from_pretrained(
269
+ MODEL_NAME_OR_PATH,
270
+ device_map="auto",
271
+ trust_remote_code=True)
272
+
273
+ generation_config = GenerationConfig.from_pretrained(
274
+ MODEL_NAME_OR_PATH,
275
+ trust_remote_code=True)
276
+
277
+ messages = []
278
+ #round-1
279
+ messages.append({"role": "user", "content": "介绍一下刘德华"})
280
+ response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config)
281
+ messages.append({"role": "assistant", "content": response})
282
+ print(messages)
283
+
284
+ #round-2
285
+ messages.append({"role": "user", "content": "他有什么代表作?"})
286
+ response = model.chat(tokenizer=tokenizer, messages=messages, generation_config=generation_config)
287
+ messages.append({"role": "assistant", "content": response})
288
+ print(messages)
289
+ ```
## CLI Demo
Use terminal interaction for a fast experience
```shell
python cli_demo.py
```
<p align="center">
    <img src="assets/cli_demo.gif" width="600" />
</p>

## Web Demo
You can also use web interaction for a quick experience
```shell
streamlit run web_demo.py
```
<p align="center">
    <img src="assets/web_demo.gif" width="600" />
</p>

## API Demo
Start command
```shell
python openai_api.py
```

Request parameters
```shell
curl --location --request POST 'http://localhost:8360/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data-raw '{
    "max_new_tokens": 200,
    "do_sample": true,
    "top_k": 0,
    "top_p": 0.8,
    "temperature": 1.0,
    "repetition_penalty": 1.0,
    "messages": [
        {
            "role": "user",
            "content": "你叫什么名字?"
        }
    ]
}'
```
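The same request can be built in Python; the payload below simply mirrors the curl example (sending it with a library such as `requests` is left as a comment, since it needs the server running):

```python
import json

payload = {
    "max_new_tokens": 200,
    "do_sample": True,
    "top_k": 0,
    "top_p": 0.8,
    "temperature": 1.0,
    "repetition_penalty": 1.0,
    "messages": [{"role": "user", "content": "你叫什么名字?"}],
}

# With the server running, POST it to the same endpoint, e.g.:
# requests.post("http://localhost:8360/v1/chat/completions", json=payload).json()
print(json.dumps(payload, ensure_ascii=False, indent=2))
```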

# Model Inference
## Quantization
We provide quantization schemes based on AutoGPTQ and open-source the Int4 quantized models. Quantization incurs only a small accuracy loss while significantly reducing GPU memory usage and improving inference speed.

The BF16, Int8, and Int4 models were evaluated on the benchmarks, with results as follows:

| Quantization | MMLU | CEval (val) | GSM8K | Humaneval |
|-|-|-|-|-|
| 360Zhinao-7B-Chat-4K (BF16) |-|-|-|-|
| 360Zhinao-7B-Chat-4K (Int8) |-|-|-|-|
| 360Zhinao-7B-Chat-4K (Int4) |-|-|-|-|

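As a rough rule of thumb, weight memory scales with bytes per parameter (2 for BF16, 1 for Int8, 0.5 for Int4), ignoring activations and the KV cache. For a 7B-parameter model:

```python
PARAMS = 7_000_000_000  # approximate parameter count

def weight_memory_gib(params, bytes_per_param):
    """Estimated memory for the weights alone, in GiB."""
    return params * bytes_per_param / 2**30

for name, bpp in [("BF16", 2.0), ("Int8", 1.0), ("Int4", 0.5)]:
    print(f"{name}: ~{weight_memory_gib(PARAMS, bpp):.1f} GiB")
```

This back-of-the-envelope estimate explains why Int4 quantization lets the model fit on much smaller GPUs.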
## Deployment
### vLLM Installation
For deployment and accelerated inference, we recommend using `vLLM==0.3.3`.

If you are using **CUDA 12.1 and PyTorch 2.1**, you can install vLLM directly with the following command.
```shell
pip install vllm==0.3.3
```

Otherwise, please refer to the official vLLM [Installation Instructions](https://docs.vllm.ai/en/latest/getting_started/installation.html).

>Once the installation is complete, the following manual steps are required:
1. Copy the vllm/zhinao.py file into the vllm/model_executor/models directory of your environment.
2. Copy the vllm/serving_chat.py file into the vllm/entrypoints/openai directory of your environment.
3. Add the following line to vllm/model_executor/models/\_\_init\_\_.py:

```python
"ZhinaoForCausalLM": ("zhinao", "ZhinaoForCausalLM"),
```

### vLLM Service Start

Starting the service
```shell
python -m vllm.entrypoints.openai.api_server \
    --served-model-name 360Zhinao-7B-Chat-4K \
    --model qihoo360/360Zhinao-7B-Chat-4K \
    --trust-remote-code \
    --tensor-parallel-size 1 \
    --max-model-len 4096 \
    --host 0.0.0.0 \
    --port 8360
```

Use curl to request the service
```shell
curl http://localhost:8360/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
    "model": "360Zhinao-7B-Chat-4K",
    "max_tokens": 200,
    "top_k": -1,
    "top_p": 0.8,
    "temperature": 1.0,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "你好"}
    ],
    "stop": [
        "<eod>",
        "<|im_end|>",
        "<|im_start|>"
    ]
}'
```
Use Python to request the service
```python
from openai import OpenAI

openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8360/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

chat_response = client.chat.completions.create(
    model="360Zhinao-7B-Chat-4K",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "你好"},
    ],
    stop=[
        "<eod>",
        "<|im_end|>",
        "<|im_start|>"
    ],
    presence_penalty=0.0,
    frequency_penalty=0.0
)
print("Chat response:", chat_response)
```

> Note: If you need a repetition penalty, we recommend using the *presence_penalty* and *frequency_penalty* parameters.

# Model Finetune
## Training data

Training data: data/training_data_sample.json. The sample data consists of 10,000 examples sampled from [multiturn_chat_0.8M](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M) and converted into the format below.

Data format:
```json
[
    {
        "id": 1,
        "conversations": [
            {
                "from": "system",
                "value": "You are a helpful assistant."
            },
            {
                "from": "user",
                "value": "您好啊"
            },
            {
                "from": "assistant",
                "value": "你好!我今天能为您做些什么?有什么问题或需要帮助吗? 我在这里为您提供服务。"
            }
        ]
    }
]
```
## Fine-tuning scripts
```shell
set -x

HOSTFILE=hostfile
DS_CONFIG=./finetune/ds_config_zero2.json

# PARAMS
LR=5e-6
EPOCHS=3
MAX_LEN=4096
BATCH_SIZE=4
NUM_NODES=1
NUM_GPUS=8
MASTER_PORT=29500

IS_CONCAT=False # Whether to concatenate samples to the maximum length (MAX_LEN)

DATA_PATH="./data/training_data_sample.json"
MODEL_PATH="qihoo360/360Zhinao-7B-Base"
OUTPUT_DIR="./outputs/"

deepspeed --hostfile ${HOSTFILE} \
    --master_port ${MASTER_PORT} \
    --num_nodes ${NUM_NODES} \
    --num_gpus ${NUM_GPUS} \
    finetune.py \
    --report_to "tensorboard" \
    --data_path ${DATA_PATH} \
    --model_name_or_path ${MODEL_PATH} \
    --output_dir ${OUTPUT_DIR} \
    --model_max_length ${MAX_LEN} \
    --num_train_epochs ${EPOCHS} \
    --per_device_train_batch_size ${BATCH_SIZE} \
    --gradient_accumulation_steps 1 \
    --save_strategy steps \
    --save_steps 200 \
    --learning_rate ${LR} \
    --lr_scheduler_type cosine \
    --adam_beta1 0.9 \
    --adam_beta2 0.95 \
    --adam_epsilon 1e-8 \
    --max_grad_norm 1.0 \
    --weight_decay 0.1 \
    --warmup_ratio 0.01 \
    --gradient_checkpointing True \
    --bf16 True \
    --tf32 True \
    --deepspeed ${DS_CONFIG} \
    --is_concat ${IS_CONCAT} \
    --logging_steps 1 \
    --log_on_each_node False
```
```shell
bash finetune/ds_finetune.sh
```
- Configure the **hostfile** to switch between single-node and multi-node training.
- Configure **ds_config** to switch between ZeRO-2 and ZeRO-3 training.
- Configure **fp16** or **bf16** to enable mixed-precision training; bf16 is recommended for consistency with the pre-trained model.
- Configure **is_concat** to control whether training samples are concatenated; when the amount of training data is large, concatenation improves training efficiency.
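The effect of `is_concat` can be sketched as follows: tokenized samples are packed back-to-back into fixed-length blocks so that fewer padding tokens are wasted. This is a simplified illustration, not the actual `finetune.py` implementation:

```python
def concat_samples(tokenized_samples, max_len):
    """Pack variable-length token lists into fixed-size blocks of max_len."""
    stream = [tok for sample in tokenized_samples for tok in sample]
    return [stream[i:i + max_len] for i in range(0, len(stream), max_len)]

blocks = concat_samples([[1, 2, 3], [4, 5], [6, 7, 8, 9]], max_len=4)
# blocks -> [[1, 2, 3, 4], [5, 6, 7, 8], [9]]
```

In a real pipeline, sample boundaries are usually preserved via attention masks or end-of-document tokens so that packed samples do not attend to each other.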

# License

The source code of this repository is licensed under the open-source Apache 2.0 license.

The 360Zhinao open-source models support commercial use. If you need to use these models and their derivatives for commercial purposes, please contact us via email ([email protected]) to apply. For the specific license agreement, please see [《360 Zhinao Open Source Model License》](./360智脑开源模型许可证.txt).