emozilla committed
Commit
cd0e649
1 Parent(s): 2e055bf

Create README.md

Files changed (1): README.md (+31, -0)
README.md ADDED

---
license: apache-2.0
datasets:
- allenai/dolma
---

# OLMo-Bitnet-1B

OLMo-Bitnet-1B is a 1B-parameter model trained using the method described in [The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits](https://arxiv.org/abs/2402.17764).
As a result, every parameter weight takes only one of the values -1, 0, or 1.
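
For intuition, here is a minimal sketch of the absmean quantization scheme the paper describes for mapping full-precision weights onto {-1, 0, 1}. The function name `quantize_ternary` and the standalone tensor example are illustrative assumptions, not the model's actual training code:

```python
import torch

def quantize_ternary(w: torch.Tensor, eps: float = 1e-5):
    # absmean quantization (BitNet b1.58): scale by the mean absolute
    # value, then round and clip every entry to -1, 0, or 1
    scale = w.abs().mean().clamp(min=eps)
    return (w / scale).round().clamp_(-1, 1), scale

w = torch.randn(4, 4)
w_q, scale = quantize_ternary(w)
print(torch.unique(w_q))  # a subset of tensor([-1., 0., 1.])
```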

It was trained on a 60B-token subset of the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset, so it is a research proof-of-concept for testing the methodology rather than a production model.

A separate training run was performed with the exact same hyperparameters but standard fp16 weights.
A comparison of the two runs is available in [this wandb report](https://api.wandb.ai/links/emozilla/evltqiv7).

Sample inference code:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, TextStreamer

# trust_remote_code is required to load the custom OLMo model code
tokenizer = AutoTokenizer.from_pretrained("NousResearch/OLMo-Bitnet-1B")
model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/OLMo-Bitnet-1B",
    torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")

# stream generated tokens to stdout as they are produced
streamer = TextStreamer(tokenizer)
pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer,
    pad_token_id=tokenizer.eos_token_id,
    temperature=0.8, repetition_penalty=1.1, do_sample=True, streamer=streamer)
pipe("The capitol of Paris is", max_new_tokens=256)
```
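
When run, the `TextStreamer` prints the completion to stdout token by token as it is generated; the `pipe(...)` call also returns the finished text in a list of dicts under the `generated_text` key.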

Training was performed using [OLMo](https://github.com/allenai/OLMo).