---
license: apache-2.0
datasets:
- allenai/dolma
---
# OLMo-Bitnet-1B

OLMo-Bitnet-1B is a 1B parameter model trained using the method described in [The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits](https://arxiv.org/abs/2402.17764).
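
For intuition, that paper constrains each weight to the ternary set {-1, 0, +1} using "absmean" scaling. Below is a minimal sketch of that quantization step; the function name and `eps` guard are illustrative and not taken from this repository's code:

```python
import torch

def weight_quant_ternary(w: torch.Tensor, eps: float = 1e-5):
    """Absmean quantization from BitNet b1.58: scale the weight matrix by its
    mean absolute value, then round and clip each entry to {-1, 0, +1}."""
    gamma = w.abs().mean().clamp(min=eps)   # per-tensor scale (absmean)
    w_q = (w / gamma).round().clamp(-1, 1)  # ternary weights
    return w_q, gamma                       # dequantize as w_q * gamma

# Example: every quantized entry lands on one of the three levels
w_q, gamma = weight_quant_ternary(torch.randn(4, 4))
assert set(w_q.unique().tolist()).issubset({-1.0, 0.0, 1.0})
```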

It was trained on the first 60B tokens of the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset, so it is merely a research proof-of-concept to test out the methodology.

A separate training run was performed with the exact same hyperparameters, but using standard fp16 weights.
The comparison can be found in [this wandb report](https://api.wandb.ai/links/emozilla/evltqiv7).


![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/NAw-hyWJl5ihVsAPqz3Xe.png)

Sample inference code:

```sh
pip install ai2-olmo
```

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/OLMo-Bitnet-1B")
model = AutoModelForCausalLM.from_pretrained("NousResearch/OLMo-Bitnet-1B",
    torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")

# Stream tokens to stdout as they are generated
streamer = TextStreamer(tokenizer)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, pad_token_id=tokenizer.eos_token_id,
    temperature=0.8, repetition_penalty=1.1, do_sample=True, streamer=streamer)
pipe("The capital of France is", max_new_tokens=256)
```
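
The `pipeline` helper above can also be replaced with a direct call to the standard `generate` API; here is a minimal equivalent sketch using the same sampling settings:

```python
# Direct generation without the pipeline helper (standard transformers API);
# run after the tokenizer/model setup above.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True,
                        temperature=0.8, repetition_penalty=1.1,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```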

Training was performed using [OLMo](https://github.com/allenai/OLMo).