natolambert committed
Commit 4252810
1 Parent(s): f6558ef

Update README.md

Files changed (1):
  1. README.md +10 -3
README.md CHANGED
@@ -18,9 +18,11 @@ base_model: allenai/OLMoE-1B-7B-0924-SFT

> OLMoE-1B-7B-Instruct is a Mixture-of-Experts LLM with 1B active and 7B total parameters released in September 2024 (0924) that has been adapted via SFT and DPO from [OLMoE-1B-7B](https://hf.co/OLMoE/OLMoE-1B-7B-0924). It yields state-of-the-art performance among models with a similar cost (1B) and is competitive with much larger models like Llama2-13B-Chat. OLMoE is 100% open-source.

- - Code: https://github.com/allenai/OLMoE
- - Paper:
- - Logs: https://github.com/allenai/OLMoE/blob/main/logs/olmoe-dpo-logs.txt
+ This information and more can also be found on the [**OLMoE GitHub repository**](https://github.com/allenai/OLMoE).
+ - **Paper**: (Soon)
+ - **Pretraining**: [Checkpoints](https://hf.co/allenai/OLMoE-1B-7B-0924), [Code](https://github.com/allenai/OLMo/tree/Muennighoff/MoE), [Data](https://huggingface.co/datasets/allenai/OLMoE-mix-0924) and [Logs](https://wandb.ai/ai2-llm/olmoe/reports/OLMoE-1B-7B-0924--Vmlldzo4OTcyMjU3).
+ - **SFT (Supervised Fine-Tuning)**: [Checkpoints](https://huggingface.co/allenai/OLMoE-1B-7B-0924-SFT), [Code](https://github.com/allenai/open-instruct/tree/olmoe-sft), [Data](https://hf.co/datasets/allenai/tulu-v3.1-mix-preview-4096-OLMoE) and [Logs](https://github.com/allenai/OLMoE/blob/main/logs/olmoe-sft-logs.txt).
+ - **DPO/KTO (Direct Preference Optimization/Kahneman-Tversky Optimization)**: [Checkpoints](https://huggingface.co/allenai/OLMoE-1B-7B-0924-Instruct), [Preference Data](https://hf.co/datasets/allenai/ultrafeedback_binarized_cleaned), [DPO code](https://github.com/allenai/open-instruct/tree/olmoe-sft), [KTO code](https://github.com/Muennighoff/kto/blob/master/kto.py) and [Logs](https://github.com/allenai/OLMoE/blob/main/logs/olmoe-dpo-logs.txt).

# Use

@@ -78,6 +80,11 @@ Branches:
| **+SFT** | 51.4 | 40.5 | 38.0 | 51.6 | 69.2 | 84.1 | 43.3 | 54.0 |
| **+DPO** | 51.9 | 45.5 | 37.0 | 54.8 | **84.0** | 82.6 | **48.1** | **57.7** |

+ # Bias, Risks, and Limitations
+ This adapted OLMo model is a research artifact. It is intended to benefit the research community interested in understanding the safety properties of LLMs and developers building safety tools for LLMs. For this reason, the model does not include a specific safety filter or safety training data.
+ While the model refuses some requests, it can still generate harmful or sensitive content in response to some user prompts. We recommend that developers exercise caution and consider the risks of applications of this technology. Furthermore, developers should consider implementing safeguards for biases, privacy, and other potential harms when appropriate. Finally, as with every LLM, OLMo can produce factual-sounding outputs that may not be true, so developers and users are encouraged to confirm such outputs before relying on them. All users of this model are responsible for how they use the model.
+
+
# Citation

```bibtex
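
As a companion to the `# Use` section that appears in the diff context above, here is a minimal inference sketch. It assumes the Instruct checkpoint linked in this commit (`allenai/OLMoE-1B-7B-0924-Instruct`) and a `transformers` release recent enough to include OLMoE support; treat it as an illustrative sketch, not the README's own snippet.

```python
# Minimal inference sketch for the Instruct checkpoint linked in this commit.
# Assumes a transformers version that includes OLMoE support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "allenai/OLMoE-1B-7B-0924-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Format a single-turn chat with the model's chat template.
messages = [{"role": "user", "content": "Explain mixture-of-experts routing in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

# Greedy decoding; strip the prompt tokens before decoding the reply.
output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```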
 
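The DPO/KTO bullet added in this commit links to training code and logs but does not explain the objective itself. As a rough, generic illustration only (not the linked `open-instruct` implementation), Direct Preference Optimization fits a logistic loss on the policy-versus-reference log-probability margin between a chosen and a rejected response:

```python
# Generic DPO loss sketch; NOT the allenai/open-instruct training code.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss for a batch of preference pairs.

    Each argument is the summed log-probability of a full response under the
    trainable policy or the frozen reference model (shape: [batch]).
    """
    # Log-ratio of policy vs. reference for the preferred and dispreferred responses.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the chosen-minus-rejected margin to be large.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

Here `beta` scales how strongly the policy is penalized for drifting from the frozen reference model; smaller values keep the tuned model closer to the SFT checkpoint.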