---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---

# NeuralPipe-7B-TIES

NeuralPipe-7B-TIES is a TIES merge of the following models, built with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)

## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: OpenPipe/mistral-ft-optimized-1218
    parameters:
      density: 0.5
      weight: 0.5
  - model: mlabonne/NeuralHermes-2.5-Mistral-7B
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: true
dtype: float16
```
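
The merge can be reproduced by passing this configuration to mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./NeuralPipe-7B-TIES`). For intuition about what TIES does, here is a minimal toy sketch of the procedure on a single parameter tensor: trim each task vector to the top-`density` fraction of entries by magnitude, elect a sign per entry by weighted majority, then average the agreeing values on top of the base weights. This is an illustration only, not mergekit's actual implementation; the `ties_merge` helper and its exact normalization are assumptions of this sketch.

```python
import torch

def ties_merge(base, finetuned, weights, density=0.5):
    # Toy single-tensor TIES sketch (assumption: not mergekit's real code).
    deltas = []
    for ft, w in zip(finetuned, weights):
        delta = ft - base                                # task vector
        k = max(1, int(density * delta.numel()))         # trim budget
        cutoff = delta.abs().flatten().topk(k).values.min()
        delta = torch.where(delta.abs() >= cutoff, delta, torch.zeros_like(delta))
        deltas.append(w * delta)                         # weighted, trimmed delta
    stacked = torch.stack(deltas)
    sign = stacked.sum(dim=0).sign()                     # elect sign per entry
    agree = stacked.sign() == sign                       # keep agreeing entries only
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged

# Tiny smoke test on random tensors.
base = torch.randn(4, 4)
merged = ties_merge(base, [base + torch.randn(4, 4), base + torch.randn(4, 4)], [0.5, 0.3])
```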

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Samee-ur/NeuralPipe-7B-TIES"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the merged model in fp16 and spread it across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
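
If you prefer not to go through `pipeline`, the model can also be loaded directly. This is the standard `transformers` pattern rather than anything specific to this card; the fp16 and `device_map="auto"` choices mirror the snippet above and assume `accelerate` is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Samee-ur/NeuralPipe-7B-TIES"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Tokenized chat prompt, with the assistant turn opened for generation.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is a large language model?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```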