We present CrystalChat, an instruction-following model fine-tuned from [LLM360/CrystalCoder](https://huggingface.co/LLM360/CrystalCoder).

| Model | Trained Tokens | ARC | HellaSwag | MMLU (5-shot) | GSM8K | Winogrande (5-shot) | TruthfulQA | Language Avg. | HumanEval (pass@1) | MBPP (pass@1) | Coding Avg. | Avg. of Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mistral-7B-Instruct-v0.1 | - | 54.86 | 75.71 | 55.56 | 32.00 | 74.27 | 55.90 | 58.05 | 29.27 | 31.96 | 30.62 | 44.34 |
| **CrystalChat 7B** | 1.4T | 51.71 | 76.12 | 53.22 | 28.05 | 70.64 | 47.29 | 53.29 | 34.12 | 39.11 | 36.62 | 50.07 |
| CodeLlama-7b-Instruct | 2.5T | 43.35 | 66.14 | 42.75 | 15.92 | 64.33 | 39.23 | 45.29 | 34.12 | 38.91 | 36.52 | 40.91 |
| Llama-2-7b-Chat | 2T | 53.07 | 78.39 | 48.42 | 18.88 | 73.09 | 45.30 | 52.86 | 13.26 | 17.43 | 15.35 | 34.11 |
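The aggregate columns appear to be unweighted means: Language Avg. over the six language tasks, Coding Avg. over HumanEval and MBPP, and Avg. of Avg. over those two. That reading is an assumption, but it reproduces the comparison rows; a minimal check against the Mistral-7B-Instruct-v0.1 row:

```python
# Sketch of how the aggregate columns appear to be derived
# (assumes unweighted means; values taken from the Mistral-7B-Instruct-v0.1 row).
language = [54.86, 75.71, 55.56, 32.00, 74.27, 55.90]  # ARC .. TruthfulQA
coding = [29.27, 31.96]                                # HumanEval, MBPP

language_avg = sum(language) / len(language)   # ≈ 58.05, as listed
coding_avg = sum(coding) / len(coding)         # 30.615 ≈ 30.62, as listed
avg_of_avg = (language_avg + coding_avg) / 2   # ≈ 44.34, as listed
```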
## Model Description

To load the model with `transformers` and generate from a prompt:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run on GPU when available, otherwise fall back to CPU
device = "cuda:0" if torch.cuda.is_available() else "cpu"

# CrystalChat ships custom model code, so trust_remote_code=True is required
tokenizer = AutoTokenizer.from_pretrained("LLM360/CrystalChat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("LLM360/CrystalChat", trust_remote_code=True).to(device)

prompt = 'int add(int x, int y) {'

# Tokenize the prompt and sample a completion of up to 400 tokens
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
gen_tokens = model.generate(input_ids, do_sample=True, max_length=400)

print("-" * 20 + "Output for model" + "-" * 20)
print(tokenizer.batch_decode(gen_tokens)[0])
```
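Since CrystalChat is instruction-tuned, chat-formatted prompts generally work better than raw code continuations. A minimal sketch, assuming the repository's tokenizer ships a chat template (check the model card for the exact prompt format; the message below is only an illustration):

```python
# Sketch only: assumes tokenizer.chat_template is defined for CrystalChat.
# Reuses the tokenizer, model, and device from the snippet above.
messages = [
    {"role": "user", "content": "Write a C function that adds two integers."}
]
chat_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)
chat_tokens = model.generate(chat_ids, do_sample=True, max_length=400)
print(tokenizer.batch_decode(chat_tokens)[0])
```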