---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
widget:
- example_title: Explain the name of CrystalChat
  text: >-
    <s> <|sys_start|> You are an AI assistant. You will be given a task. You must generate a detailed
    and long answer. <|sys_end|> <|im_start|> Give me a one-sentence description of crystal. And tell me what
    might the name "Crystal" mean for a transparent and open-source language model? <|im_end|>
  output:
    text: >-
     A crystal is an ordered, hexagonal lattice of atoms that displays a unique crystal structure resulting in transparency and diverse physical and chemical properties, and as a transparent and open-source language model, "Crystal" might represent transparency and accessibility by enabling users to understand the inner workings of the model.</s>
- example_title: Write python function and explain
  text: >-
    <s> <|sys_start|> You are an AI assistant. You will be given a task. You
    must generate a detailed and long answer. <|sys_end|> <|im_start|> Write a
    python function that takes a list of integers and returns the squared sum of
    the list. <|im_end|>
  output:
    text: "\n\nHere's a Python function named `squared_sum_list` that takes a list of integers as input and returns the squared sum of the list:\n\n```python\ndef squared_sum_list(lst):\n\treturn sum([num ** 2 for num in lst])\n```\nThe function `squared_sum_list` uses a list comprehension to iterate over each number in the input list `lst` and calculate its square. Then, it uses the `sum` function to accumulate all the squared numbers in a single value - the squared sum of the list.\n\nFor example:\n\n```python\nnumbers = [1, 2, 3, 4, 5]\nprint(squared_sum_list(numbers))  # Outputs: 55\n```\n\nIn the above code, the list `[1, 2, 3, 4, 5]` is passed as an argument to the `squared_sum_list` function. The function calculates the sum of the squares of the elements in the list, which is `1 + 4 + 9 + 16 + 25 = 55`. The function then returns this result, which is printed to the console.</s>"
tags:
- llm
- code
datasets:
- openaccess-ai-collective/oasst1-guanaco-extended-sharegpt
- Open-Orca/SlimOrca
- AtAndDev/ShareGPT-Vicuna-v3-cleaned-unfiltered
- WizardLM/WizardLM_evol_instruct_V2_196k
- winglian/chatlogs-en-cleaned
- HuggingFaceH4/CodeAlpaca_20K
- theblackcat102/evol-codealpaca-v1
- nickrosh/Evol-Instruct-Code-80k-v1
- open-phi/textbooks
- open-phi/programming_books_llama
- LLM360/CrystalCoderDatasets
model-index:
- name: CrystalChat
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: OpenAI HumanEval
    metrics:
    - name: pass@1 (t=0.2)
      type: pass@1
      value: 34.116
    - name: pass@10 (t=0.8)
      type: pass@10
      value: 65.755
  - task:
      type: text-generation
    dataset:
      type: mbpp
      name: Mostly Basic Python Problems (mbpp)
    metrics:
    - name: pass@1 (t=0.1)
      type: pass@1
      value: 39.112
    - name: pass@10 (t=0.8)
      type: pass@10
      value: 59.895
  - task:
      type: multiple-choice
    dataset:
      type: race
      name: RACE
    metrics:
    - name: Accuracy (0 shot)
      type: accuracy
      value: 41.148
  - task:
      type: multiple-choice
    dataset:
      type: mmlu
      name: Measuring Massive Multitask Language Understanding (MMLU)
    metrics:
    - name: Accuracy (5 shot)
      type: accuracy
      value: 53.215
    - name: Accuracy (0 shot)
      type: accuracy
      value: 52.789
  - task:
      type: multiple-choice
    dataset:
      type: truthful_qa
      name: Truthful QA
    metrics:
    - name: Accuracy (0 shot)
      type: accuracy
      value: 47.29
  - task:
      type: multiple-choice
    dataset:
      type: winogrande
      name: Winogrande
    metrics:
    - name: Accuracy (5 shot)
      type: accuracy
      value: 70.639
    - name: Accuracy (0 shot)
      type: accuracy
      value: 68.114
  - task:
      type: multiple-choice
    dataset:
      type: copa
      name: COPA
    metrics:
    - name: Accuracy (0 shot)
      type: accuracy
      value: 85
  - task:
      type: text-classification
    dataset:
      type: boolq
      name: BoolQ
    metrics:
    - name: Accuracy (0 shot)
      type: accuracy
      value: 82.783
  - task:
      type: question-answering
    dataset:
      type: openbookqa
      name: Openbook QA
    metrics:
    - name: Accuracy (0 shot)
      type: accuracy
      value: 42
  - task:
      type: multiple-choice
    dataset:
      type: hellaSwag
      name: HellaSwag
    metrics:
    - name: Accuracy (10-shot)
      type: accuracy
      value: 76.12
    - name: Accuracy (0-shot)
      type: accuracy
      value: 73.312
  - task:
      type: question-answering
    dataset:
      type: piqa
      name: PIQA
    metrics:
    - name: Accuracy (0 shot)
      type: accuracy
      value: 77.856
  - task:
      type: question-answering
    dataset:
      type: ai2_arc
      name: ARC (Easy)
    metrics:
    - name: Accuracy (0 shot)
      type: accuracy
      value: 70.328
  - task:
      type: question-answering
    dataset:
      type: ai2_arc
      name: ARC (Challenge)
    metrics:
    - name: Accuracy (25-shot)
      type: accuracy
      value: 51.706
    - name: Accuracy (0-shot)
      type: accuracy
      value: 44.625
  - task:
      type: text-generation
    dataset:
      type: gsm8k
      name: GSM8K (Grade School Math 8K)
    metrics:
    - name: Accuracy (5 shot)
      type: accuracy
      value: 28.052
---

# CrystalChat

We present CrystalChat, an instruction-following model finetuned from [LLM360/CrystalCoder](https://huggingface.co/LLM360/CrystalCoder).

CrystalChat pushes the Llama 2 frontier for models excelling at both language and coding tasks. CrystalChat is part of LLM360's Pebble model series.

# CrystalChat Performance

|           Model          | Trained Tokens | Avg. of Avg. | Language Avg. | Coding Avg. |
|------------------------|--------------|------------|-------------|-----------|
| CrystalChat 7B           | 1.275T         | 44.96        | 53.29         | 36.62       |
| Mistral-7B-Instruct-v0.1 | -              | 44.34        | 54.86         | 30.62       |
| CodeLlama-7b-Instruct    | 2.5T           | 40.91        | 45.29         | 36.52       |
| Llama-2-7b-Chat          | 2T             | 34.11        | 52.86         | 15.35       |
| AmberChat 7B             | 1.25T          |     -        | 44.76         |     -       |

|           Model          | Trained Tokens |  ARC  | HellaSwag | MMLU (5-shot) | GSM8K | Winogrande (5-shot) | TruthfulQA | HumanEval (pass@1) | MBPP (pass@1) |
|------------------------|--------------|------------|-------------|-----------|-----|---------|-------------|-----|------------------|
| CrystalChat 7B           | 1.275T         | 51.71 | 76.12     | 53.22         | 28.05 | 70.64              | 47.29      | 34.12              | 39.11         |
| Mistral-7B-Instruct-v0.1 | -              | 58.05 | 75.71     | 55.56         | 32.00 | 74.27              | 55.90      | 29.27              | 31.96         |
| CodeLlama-7b-Instruct    | 2.5T           | 43.35 | 66.14     | 42.75         | 15.92 | 64.33              | 39.23      | 34.12              | 38.91         |
| Llama-2-7b-Chat          | 2T             | 53.07 | 78.39     | 48.42         | 18.88 | 73.09              | 45.30      | 13.26              | 17.43         |
| AmberChat 7B             | 1.25T          | 42.83 | 74.03     | 38.88         | 5.31  | 66.77              | 40.72      |     -              |       -       |
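
For reference, the HumanEval and MBPP pass@k numbers above are typically computed with the unbiased estimator of [Chen et al. (2021)](https://arxiv.org/abs/2107.03374): generate `n` samples per problem, count the `c` that pass the unit tests, and estimate the probability that at least one of `k` drawn samples passes. A minimal sketch (the sample counts in the example are made up for illustration):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations (c of them correct),
    passes the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical counts: 200 generations per problem, 70 pass
print(pass_at_k(n=200, c=70, k=1))   # 0.35
print(pass_at_k(n=200, c=70, k=10))  # ~0.99
```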


| Combined Language and Coding Ability           |
|------------------------------------------------|
<img src="CC-Compare.jpg" alt="arc" width="800"/>

| Performance on Standard Benchmarks             |
|------------------------------------------------|
<img src="cc-eval-std-benchmarks.png" alt="std-bench" width="800"/>

| Performance on Language Benchmarks                      |
|---------------------------------------------------------|
<img src="cc-eval-lang-compare.png" alt="arc" width="800"/>


# Instruction Tuning Training

**CrystalChat** uses the last phase 2 checkpoint of **CrystalCoder** ([CrystalCoder_phase2_checkpoint_214387](https://huggingface.co/LLM360/CrystalCoder/tree/CrystalCoder_phase2_checkpoint_214387)) as its initialization checkpoint. We then finetune the model on the datasets described below.

We also performed the same finetuning on the last phase 3 checkpoint of **CrystalCoder** ([CrystalCoder_phase3_checkpoint_027728](https://huggingface.co/LLM360/CrystalCoder/tree/CrystalCoder_phase3_checkpoint_027728)). The phase 2 and phase 3 finetuning results are very similar, but the phase 2 finetune performs slightly better on the English language benchmarks, so we chose it as the final model for **CrystalChat**.

# Instruction Tuning Data 

The fine-tuning data is a mix of publicly available language and code datasets, plus **WebAlpaca**, an originally created dataset of HTML coding instructions that we use as part of our instruction tuning data. We will release the WebAlpaca dataset in a separate repository soon.

The summary of the fine-tuning data is as follows:

<!-- <center><img src="data_table.jpg" alt="Instruction Data"/></center> -->
| Subset      | #Tokens | Avg. #Queries | Avg. Query Length | Avg. #Replies | Avg. Reply Length |
| ----------- | ----------- |----------- |----------- |----------- |----------- |
| [OASST1-guanaco](https://huggingface.co/datasets/openaccess-ai-collective/oasst1-guanaco-extended-sharegpt)      | 4,464,640       | 1.36 | 38.28 | 1.36 | 271.69 |
| [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca)   |225,628,160        | 1.00 | 259.16	| 1.00	| 151.12 |
| [ShareGPT](https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered)   | 112,914,432        | 3.28 | 94.53	| 3.64	| 365.81 | 
| [Evol-ShareGPT](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)   | 85,954,560        | 1.00	| 145.99 |	1.00	| 425.17 | 
| [ChatLogs](https://huggingface.co/datasets/winglian/chatlogs-en-cleaned)   | 29,337,600        | 3.39	| 95.58	| 3.24	| 191.42 |
| [CodeAlpaca](https://huggingface.co/datasets/lucasmccabe-lmi/CodeAlpaca-20k)   | 2,623,488        | 1.00	| 32.46	| 1.00	| 67.68 |
| [Rosetta Code](https://github.com/sahil280114/codealpaca/blob/master/data/rosetta_alpaca.json)   | 7,987,200        |  1.00 |	450.09	| 1.00	| 533.52 |
| [Evol-CodeAlpaca 1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1)   | 73,803,776        | 1.00	| 210.33 | 	1.00 | 	437.92 | 
| [Evol-CodeAlpaca 2](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1)   | 34,910,208        | 1.00	| 114.99 |	1.00 |	300.29 |
| WebAlpaca  | 43,673,600        | 1.00 |	96.29 |	1.00	| 746.52 | 
| [General Textbooks](https://huggingface.co/datasets/open-phi/textbooks)   | 85,590,016        | Not instruction data | | | |
| [Programming Books](https://huggingface.co/datasets/open-phi/programming_books_llama)   | 395,628,544        | Not instruction data | | | |
| Total | 1,102,516,224 | | | | |

For more details, check out the [data table](https://huggingface.co/LLM360/CrystalChat/blob/main/data_table.jpg).
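
As a rough illustration of how such per-subset token counts can be produced, one could run the CrystalChat tokenizer over each subset. The sketch below is ours, not part of the released tooling, and assumes SlimOrca's ShareGPT-style `conversations` field:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LLM360/CrystalChat", trust_remote_code=True)
ds = load_dataset("Open-Orca/SlimOrca", split="train")

total_tokens = 0
for row in ds.select(range(1000)):  # small sample for speed; drop for a full count
    text = " ".join(turn["value"] for turn in row["conversations"])  # assumed schema
    total_tokens += len(tokenizer(text).input_ids)
print(total_tokens)
```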

# Instruction Format

We added new special tokens to the CrystalCoder tokenizer to support instruction tuning.

The special tokens used in instruction tuning are:

```
bos: <s> 
eos: </s>
system_start: <|sys_start|>
system_end: <|sys_end|>
user_start: <|im_start|>
user_end: <|im_end|>
```

The instruction format is as follows:

```
<s> <|sys_start|> system prompt <|sys_end|> <|im_start|> first user utterance <|im_end|> first model response <|im_start|> next user utterance <|im_end|> next model response </s>
```
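
For multi-turn conversations, the prompt can be assembled programmatically. Below is a minimal sketch; the helper `build_prompt` is ours for illustration, not part of the released code. Note that `</s>` appears only after the final model response, so it is omitted when prompting for a new reply:

```python
def build_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Assemble a CrystalChat prompt from a system message and
    (user utterance, model response) turns; leave the last response
    empty when asking the model to generate it."""
    prompt = f"<s> <|sys_start|> {system} <|sys_end|>"
    for user, response in turns:
        prompt += f" <|im_start|> {user} <|im_end|> {response}"
    return prompt.rstrip()

prompt = build_prompt(
    "You are an AI assistant. You must generate a detailed and long answer.",
    [("Write a python function that squares and sums a list.", "")],
)
```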

# Reproducing the Results

We will release the training code and the training data soon. Our training code is based on [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), with some modifications to support our training data format and Maximal Update Parametrization (μP).

## Model Description

- **Model type:** Language model with the same architecture as LLaMA-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Resources for more information:**
  - [Training Code](https://github.com/LLM360/crystalcoder-train)
  - [Data Preparation](https://github.com/LLM360/crystalcoder-data-prep)
  - [Metrics](https://github.com/LLM360/Analysis360)
  - [Fully processed CrystalCoder pretraining data](https://huggingface.co/datasets/LLM360/CrystalCoderDatasets)

# Loading CrystalChat 

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("LLM360/CrystalChat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("LLM360/CrystalChat", trust_remote_code=True).to(device)

prompt = '<s> <|sys_start|> You are an AI assistant. You will be given a task. You must generate a detailed and long answer. <|sys_end|> <|im_start|> Write a python function that takes a list of integers and returns the squared sum of the list. <|im_end|>'


input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
gen_tokens = model.generate(input_ids, do_sample=True, max_length=400)

print("-"*20 + "Output for model"  + 20 * '-')
print(tokenizer.batch_decode(gen_tokens)[0])
```

Response:
````
Here's a Python function named `squared_sum_list` that takes a list of integers as input and returns the squared sum of the list:

```python
def squared_sum_list(lst):
    return sum([num ** 2 for num in lst])
```

The function `squared_sum_list` uses a list comprehension to iterate over each number in the input list `lst` and calculate its square. Then, it uses the `sum` function to accumulate all the squared numbers in a single value - the squared sum of the list.

For example:

```python
numbers = [1, 2, 3, 4, 5]
print(squared_sum_list(numbers))  # Outputs: 55
```

In the above code, the list `[1, 2, 3, 4, 5]` is passed as an argument to the `squared_sum_list` function. The function calculates the sum of the squares of the elements in the list, which is `1 + 4 + 9 + 16 + 25 = 55`. The function then returns this result, which is printed to the console.</s>
````
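
To print only the model's reply rather than the echoed prompt, one can slice off the prompt tokens before decoding (a small variation on the snippet above):

```python
reply_ids = gen_tokens[0][input_ids.shape[-1]:]
print(tokenizer.decode(reply_ids, skip_special_tokens=True))
```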

Another chat example:
```python
# ...
prompt = '<s> <|sys_start|> You are an AI assistant. You will be given a task. You must generate a detailed \
and long answer. <|sys_end|> <|im_start|> Give me a one-sentence description of crystal. And tell me what \
might the name "Crystal" mean for a transparent and open-source language model? <|im_end|>'
# ...
```

Response:
```
A crystal is an ordered, hexagonal lattice of atoms that displays a unique crystal structure resulting in transparency and diverse physical and chemical properties, and as a transparent and open-source language model, "Crystal" might represent transparency and accessibility by enabling users to understand the inner workings of the model.</s>
```


# Bias, Risks, and Limitations
CrystalChat has not been aligned to human preferences for safety via RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The training data is known and made available [here](https://huggingface.co/datasets/LLM360/CrystalCoderDatasets); it primarily consists of the SlimPajama, StarCoder, and WebCrawl datasets.

# Citation

**BibTeX:**

```bibtex
@misc{liu2023llm360,
      title={LLM360: Towards Fully Transparent Open-Source LLMs}, 
      author={Zhengzhong Liu and Aurick Qiao and Willie Neiswanger and Hongyi Wang and Bowen Tan and Tianhua Tao and Junbo Li and Yuqi Wang and Suqi Sun and Omkar Pangarkar and Richard Fan and Yi Gu and Victor Miller and Yonghao Zhuang and Guowei He and Haonan Li and Fajri Koto and Liping Tang and Nikhil Ranjan and Zhiqiang Shen and Xuguang Ren and Roberto Iriondo and Cun Mu and Zhiting Hu and Mark Schulze and Preslav Nakov and Tim Baldwin and Eric P. Xing},
      year={2023},
      eprint={2312.06550},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## About LLM360

LLM360 is an initiative for comprehensive and fully open-sourced LLMs, 
where all training details, model checkpoints, intermediate results, and 
additional analyses are made available to the community. Our goal is to advance 
the field by inviting the community to deepen the understanding of LLMs 
together. As the first step of the LLM360 project, we release all intermediate 
model checkpoints, our fully-prepared pre-training dataset, all source code and
configurations, and training details. We are
committed to continually pushing the boundaries of LLMs through this open-source 
effort.

[Visit Us](https://www.llm360.ai/)