mlabonne committed
Commit a705410 · verified · 1 Parent(s): abc7d3a

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -10,9 +10,9 @@ base_model:
   - shadowml/FoxBeagle-7B
 ---
 
-# Monarch-7B
+# BeagleB-7B
 
-Monarch-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
+BeagleB-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
 * [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B)
 * [shadowml/BeagleX-7B](https://huggingface.co/shadowml/BeagleX-7B)
 * [shadowml/FoxBeagle-7B](https://huggingface.co/shadowml/FoxBeagle-7B)
@@ -51,7 +51,7 @@ from transformers import AutoTokenizer
 import transformers
 import torch
 
-model = "mlabonne/Monarch-7B"
+model = "mlabonne/BeagleB-7B"
 messages = [{"role": "user", "content": "What is a large language model?"}]
 
 tokenizer = AutoTokenizer.from_pretrained(model)
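
The usage snippet touched by this diff is cut off after the tokenizer line; the rest of the block is not shown in the hunk. Below is a minimal, hedged sketch of how such a snippet is typically completed in LazyMergekit-style model cards. Everything after `tokenizer = AutoTokenizer.from_pretrained(model)` (the chat-template call and the text-generation pipeline, along with its sampling parameters) is an assumed continuation, not part of this commit.

```python
# Assumed continuation of the README usage snippet (not shown in the diff hunk).
from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/BeagleB-7B"  # model id as it reads after this commit
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
# Turn the chat messages into a single prompt string using the model's chat template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline; fp16 and device_map="auto" are common choices for a 7B model.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sampling parameters here are illustrative defaults, not values taken from the card.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```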