---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
---

# OpenELM

*Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, Chenfan Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari*

We introduce **OpenELM**, a family of **Open**-source **E**fficient **L**anguage **M**odels. We release both pretrained and instruction-tuned models with 270M, 450M, 1.1B, and 3B parameters.

Our pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens.

## Usage

Below we provide an example of loading the model via the [Hugging Face Hub](https://huggingface.co/docs/hub/):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Obtain access to "meta-llama/Llama-2-7b-hf", then see
# https://huggingface.co/docs/hub/security-tokens to get a token.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", token="hf_xxxx")

model_path = "apple/OpenELM-450M"

model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
model = model.cuda().eval()

prompt = "Once upon a time there was"
tokenized_prompt = tokenizer(prompt)
prompt_tensor = torch.tensor(tokenized_prompt["input_ids"], device="cuda").unsqueeze(0)
output_ids = model.generate(prompt_tensor, max_new_tokens=256, repetition_penalty=1.2, pad_token_id=0)
output_ids = output_ids[0].tolist()
output_text = tokenizer.decode(output_ids, skip_special_tokens=True)
print(f'{model_path=}, {prompt=}\n')
print(output_text)

# Below is the output:
"""
model_path='apple/OpenELM-450M', prompt='Once upon a time there was'

Once upon a time there was a little girl who lived in the woods. She had a big heart and she loved to play with her friends. One day, she decided to go for a walk in the woods. As she walked, she saw a beautiful tree. It was so tall that it looked like a mountain. The tree was covered with leaves and flowers.
The little girl thought that this tree was very pretty. She wanted to climb up to the tree and see what was inside. So, she went up to the tree and climbed up to the top. She was very excited when she saw that the tree was full of beautiful flowers. She also
"""
```
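
The example above assumes a CUDA device. As a minimal sketch (our addition, not part of the original instructions), the same code runs on CPU by dropping the device placement:

```python
# CPU-only variant of the usage example above (a sketch; generation is slower
# but the API calls are otherwise identical).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", token="hf_xxxx")
model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-450M", trust_remote_code=True).eval()

prompt_tensor = torch.tensor(tokenizer("Once upon a time there was")["input_ids"]).unsqueeze(0)
output_ids = model.generate(prompt_tensor, max_new_tokens=64, repetition_penalty=1.2, pad_token_id=0)
print(tokenizer.decode(output_ids[0].tolist(), skip_special_tokens=True))
```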

## Main Results

### Zero-Shot

| **Model** | **ARC-c** | **ARC-e** | **BoolQ** | **HellaSwag** | **PIQA** | **SciQ** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------|-----------|---------------|-----------|-----------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 26.45 | 45.08 | **53.98** | 46.71 | 69.75 | **84.70** | **53.91** | 54.37 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **30.55** | **46.68** | 48.56 | **52.07** | **70.78** | 84.40 | 52.72 | **55.11** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 27.56 | 48.06 | 55.78 | 53.97 | 72.31 | 87.20 | 58.01 | 57.56 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **30.38** | **50.00** | **60.37** | **59.34** | **72.63** | **88.00** | **58.96** | **59.95** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 32.34 | **55.43** | 63.58 | 64.81 | **75.57** | **90.60** | 61.72 | 63.44 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **37.97** | 52.23 | **70.00** | **71.20** | 75.03 | 89.30 | **62.75** | **65.50** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 35.58 | 59.89 | 67.40 | 72.44 | 78.24 | **92.70** | 65.51 | 67.39 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **39.42** | **61.74** | **68.17** | **76.36** | **79.00** | 92.50 | **66.85** | **69.15** |

### LLM360

| **Model** | **ARC-c** | **HellaSwag** | **MMLU** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|---------------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | 47.15 | 25.72 | **39.24** | **53.83** | 38.72 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | **51.58** | **26.70** | 38.72 | 53.20 | **40.54** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | 53.86 | **26.01** | 40.18 | 57.22 | 41.50 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | **59.31** | 25.41 | **40.48** | **58.33** | **43.41** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | 65.71 | **27.05** | 36.98 | 63.22 | 45.93 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | **71.83** | 25.65 | **45.95** | **64.72** | **49.94** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | 73.28 | **26.76** | 34.98 | 67.25 | 48.90 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | **76.87** | 24.80 | **38.76** | **67.96** | **51.22** |

### OpenLLM Leaderboard

| **Model** | **ARC-c** | **CrowS-Pairs** | **HellaSwag** | **MMLU** | **PIQA** | **RACE** | **TruthfulQA** | **WinoGrande** | **Average** |
|-----------------------------------------------------------------------------|-----------|-----------------|---------------|-----------|-----------|-----------|----------------|----------------|-------------|
| [OpenELM-270M](https://huggingface.co/apple/OpenELM-270M) | 27.65 | **66.79** | 47.15 | 25.72 | 69.75 | 30.91 | **39.24** | **53.83** | 45.13 |
| [OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) | **32.51** | 66.01 | **51.58** | **26.70** | **70.78** | 33.78 | 38.72 | 53.20 | **46.66** |
| [OpenELM-450M](https://huggingface.co/apple/OpenELM-450M) | 30.20 | **68.63** | 53.86 | **26.01** | 72.31 | 33.11 | 40.18 | 57.22 | 47.69 |
| [OpenELM-450M-Instruct](https://huggingface.co/apple/OpenELM-450M-Instruct) | **33.53** | 67.44 | **59.31** | 25.41 | **72.63** | **36.84** | **40.48** | **58.33** | **49.25** |
| [OpenELM-1_1B](https://huggingface.co/apple/OpenELM-1_1B) | 36.69 | **71.74** | 65.71 | **27.05** | **75.57** | 36.46 | 36.98 | 63.22 | 51.68 |
| [OpenELM-1_1B-Instruct](https://huggingface.co/apple/OpenELM-1_1B-Instruct) | **41.55** | 71.02 | **71.83** | 25.65 | 75.03 | **39.43** | **45.95** | **64.72** | **54.40** |
| [OpenELM-3B](https://huggingface.co/apple/OpenELM-3B) | 42.24 | **73.29** | 73.28 | **26.76** | 78.24 | **38.76** | 34.98 | 67.25 | 54.35 |
| [OpenELM-3B-Instruct](https://huggingface.co/apple/OpenELM-3B-Instruct) | **47.70** | 72.33 | **76.87** | 24.80 | **79.00** | 38.47 | **38.76** | **67.96** | **55.73** |

See the technical report for more results and comparisons.

## Evaluation

### Setup

Install the following dependencies:

```bash
# Install the public lm-eval-harness.
harness_repo="public-lm-eval-harness"
git clone https://github.com/EleutherAI/lm-evaluation-harness ${harness_repo}
cd ${harness_repo}
# Use the main branch as of 2024-03-15; its SHA is dc90fec.
git checkout dc90fec
pip install -e .
cd ..

# 66d6242 is the main branch as of 2024-04-01.
pip install datasets@git+https://github.com/huggingface/datasets.git@66d6242
# Quote the specifiers so the shell does not parse ">=" as a redirection.
pip install "tokenizers>=0.15.2" "transformers>=4.38.2" "sentencepiece>=0.2.0"
```
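
As a quick sanity check (our suggestion, not part of the official setup), you can confirm the pinned versions were picked up:

```python
# Verify installed versions against the pins above (suggested sanity check;
# not required by the harness itself).
import datasets
import tokenizers
import transformers

print("transformers", transformers.__version__)  # expect >= 4.38.2
print("tokenizers", tokenizers.__version__)      # expect >= 0.15.2
print("datasets", datasets.__version__)          # installed from commit 66d6242
```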

### Evaluate OpenELM

```bash
# OpenELM-270M
hf_model=apple/OpenELM-270M

# This flag is needed because lm-eval-harness sets add_bos_token to False by default,
# but OpenELM uses the LLaMA tokenizer, which requires add_bos_token to be True.
add_bos_token=True
batch_size=1

mkdir -p lm_eval_output

shot=0
task=arc_challenge,arc_easy,boolq,hellaswag,piqa,race,winogrande,sciq,truthfulqa_mc2
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=5
task=mmlu,winogrande
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=25
task=arc_challenge,crows_pairs_english
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log

shot=10
task=hellaswag
lm_eval --model hf \
        --model_args pretrained=${hf_model},trust_remote_code=True,add_bos_token=${add_bos_token} \
        --tasks ${task} \
        --device cuda:0 \
        --num_fewshot ${shot} \
        --output_path ./lm_eval_output/${hf_model//\//_}_${task//,/_}-${shot}shot \
        --batch_size ${batch_size} 2>&1 | tee ./lm_eval_output/eval-${hf_model//\//_}_${task//,/_}-${shot}shot.log
```
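
Each run above writes a JSON results file under `lm_eval_output/`. The helper below is a hypothetical sketch for summarizing scores; it assumes the harness writes a top-level `"results"` mapping from task names to metric dicts, which may vary across harness versions:

```python
# Hypothetical helper: print per-task metrics from an lm-eval results file.
# Adjust the path to match your --output_path.
import json
import sys

def summarize(path: str) -> None:
    with open(path) as f:
        data = json.load(f)
    # "results" maps each task to its metrics, e.g. {"acc": 0.54, "acc_norm": ...}.
    for task, metrics in sorted(data.get("results", {}).items()):
        numeric = {k: round(v, 4) for k, v in metrics.items() if isinstance(v, float)}
        print(task, numeric)

if __name__ == "__main__":
    summarize(sys.argv[1])
```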

## Bias, Risks, and Limitations

Our OpenELM models are not trained with any safety guarantees; they may produce outputs that are inaccurate, harmful, biased, or otherwise objectionable in response to user prompts. Users and developers should therefore conduct extensive safety testing and implement filtering mechanisms suited to their specific needs.