CultriX committed on
Commit 56c2d4c · verified · 1 Parent(s): d4bd4e9

Create README.md

Files changed (1):
  1. README.md +80 -0
README.md ADDED

---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralMaxime-7B-slerp
- eren23/ogno-monarch-jaskier-merge-7b
- eren23/dpo-binarized-NeutrixOmnibe-7B
base_model:
- Kukedlc/NeuralMaxime-7B-slerp
- eren23/ogno-monarch-jaskier-merge-7b
- eren23/dpo-binarized-NeutrixOmnibe-7B
---

DPO fine-tune of MonaTrix-v4 on this dataset: https://huggingface.co/datasets/CultriX/dpo-mix-ambrosia-cleaned
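A minimal sketch of how that DPO pass could be reproduced with TRL's `DPOTrainer`; the dataset column names, hyperparameters, and TRL version below are assumptions for illustration, not settings recorded in this card:

```python
# Hypothetical DPO training sketch using TRL; hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "CultriX/MonaTrix-v4"  # the merged model to preference-tune
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# DPOTrainer expects "prompt", "chosen", and "rejected" columns (assumed here).
dataset = load_dataset("CultriX/dpo-mix-ambrosia-cleaned", split="train")

training_args = DPOConfig(
    output_dir="monatrix-v4-dpo",
    beta=0.1,  # DPO temperature; assumed, not documented in this card
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer TRL releases name this processing_class
)
trainer.train()
```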
# MonaTrix-v4

MonaTrix-v4 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [Kukedlc/NeuralMaxime-7B-slerp](https://huggingface.co/Kukedlc/NeuralMaxime-7B-slerp)
* [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b)
* [eren23/dpo-binarized-NeutrixOmnibe-7B](https://huggingface.co/eren23/dpo-binarized-NeutrixOmnibe-7B)
## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: Kukedlc/NeuralMaxime-7B-slerp
    # Emphasize the beginning of Vicuna-format models
    parameters:
      weight: 0.36
      density: 0.65
  - model: eren23/ogno-monarch-jaskier-merge-7b
    parameters:
      weight: 0.34
      density: 0.6
    # Vicuna format
  - model: eren23/dpo-binarized-NeutrixOmnibe-7B
    parameters:
      weight: 0.3
      density: 0.6

merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
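To reproduce the merge locally, the configuration above can be saved to a file and passed to mergekit's CLI. A minimal sketch for a notebook-style environment, assuming the config is saved as `config.yaml` (the output path is a placeholder):

```python
# Sketch: run the merge with mergekit; paths are placeholders.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./MonaTrix-v4 --copy-tokenizer
```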
## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "CultriX/MonaTrix-v4"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the chat messages with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```