ema19 committed
Commit e04edaf
Parent: 26acdf1

Upload folder using huggingface_hub

Files changed (1):
  1. README.md +29 -2
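The commit message above is the default one written by `huggingface_hub` when a folder is pushed with its upload helpers. For context, a minimal sketch of the kind of call that produces such a commit; the local folder path and login setup are assumptions, while the repo id comes from the model card below:

```python
# Minimal sketch, not the author's recorded command: pushing a folder this way
# yields the default commit message "Upload folder using huggingface_hub".
from huggingface_hub import HfApi

api = HfApi()  # uses the token saved by `huggingface-cli login`
api.upload_folder(
    folder_path="./LLAMAdolly-7B-slerp",  # assumed local output folder
    repo_id="ema19/LLAMAdolly-7B-slerp",  # repo this commit belongs to
    repo_type="model",
)
```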
README.md CHANGED
@@ -1,16 +1,18 @@
 ---
-license: apache-2.0
 tags:
 - merge
 - mergekit
 - lazymergekit
 - meta-llama/Llama-2-7b-hf
 - databricks/dolly-v2-7b
+base_model:
+- meta-llama/Llama-2-7b-hf
+- databricks/dolly-v2-7b
 ---
 
 # LLAMAdolly-7B-slerp
 
-LLAMAdolly-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
+LLAMAdolly-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
 * [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
 * [databricks/dolly-v2-7b](https://huggingface.co/databricks/dolly-v2-7b)
 
@@ -34,4 +36,29 @@ parameters:
     - value: 0.5
 dtype: bfloat16
 
+```
+
+## 💻 Usage
+
+```python
+!pip install -qU transformers accelerate
+
+from transformers import AutoTokenizer
+import transformers
+import torch
+
+model = "ema19/LLAMAdolly-7B-slerp"
+messages = [{"role": "user", "content": "What is a large language model?"}]
+
+tokenizer = AutoTokenizer.from_pretrained(model)
+prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+pipeline = transformers.pipeline(
+    "text-generation",
+    model=model,
+    torch_dtype=torch.float16,
+    device_map="auto",
+)
+
+outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+print(outputs[0]["generated_text"])
 ```
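The second hunk shows only the tail of the merge configuration (`parameters:`, `- value: 0.5`, `dtype: bfloat16`), which is consistent with the stock LazyMergekit slerp template. A sketch of what the full config block plausibly looks like under that assumption; the layer ranges and per-filter interpolation weights below are template defaults, not values confirmed by this diff:

```yaml
# Sketch of the standard LazyMergekit slerp template. Only the last three
# lines appear in the diff above; everything else is an assumption.
slices:
  - sources:
      - model: meta-llama/Llama-2-7b-hf
        layer_range: [0, 32]
      - model: databricks/dolly-v2-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: meta-llama/Llama-2-7b-hf
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5  # interpolation factor for all remaining tensors
dtype: bfloat16
```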