---
base_model:
- Dampfinchen/Llama-3.1-8B-Ultra-Instruct
tags:
- merge
- mergekit
- Undi95/Meta-Llama-3.1-8B-Claude
- Dampfinchen/Llama-3.1-8B-Ultra-Instruct
license: llama3.1
language:
- en
- de
---

# llama3.1-8b-spaetzle-v59

llama3.1-8b-spaetzle-v59 is a DARE-TIES merge (built with mergekit) of the following models; a short sketch of the merge idea follows the list:

* [Undi95/Meta-Llama-3.1-8B-Claude](https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude)
* [Dampfinchen/Llama-3.1-8B-Ultra-Instruct](https://huggingface.co/Dampfinchen/Llama-3.1-8B-Ultra-Instruct)

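For intuition: DARE-TIES forms a task vector (the fine-tuned weights minus the base weights), randomly drops a fraction of its entries, rescales the survivors by `1/density`, and adds the weighted result back onto the base model. The snippet below is a minimal, illustrative sketch of that idea for a single tensor; it is not mergekit's actual implementation, and the function name and example tensors are made up.

```python
import torch

def dare_merge_tensor(base: torch.Tensor, finetuned: torch.Tensor,
                      density: float = 0.65, weight: float = 0.4) -> torch.Tensor:
    """Illustrative DARE step for one parameter tensor (not mergekit's code)."""
    delta = finetuned - base                      # task vector
    mask = torch.rand_like(delta) < density       # keep roughly `density` of the entries
    delta = torch.where(mask, delta / density, torch.zeros_like(delta))  # drop & rescale
    return base + weight * delta                  # weighted add onto the base weights

# hypothetical example tensors
base_w = torch.randn(16, 16)
tuned_w = base_w + 0.01 * torch.randn(16, 16)
merged_w = dare_merge_tensor(base_w, tuned_w)
```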
## 🧩 Configuration

```yaml
models:
  - model: Dampfinchen/Llama-3.1-8B-Ultra-Instruct
    # no parameters necessary for base model
  - model: Undi95/Meta-Llama-3.1-8B-Claude
    parameters:
      density: 0.65
      weight: 0.4
merge_method: dare_ties
base_model: Dampfinchen/Llama-3.1-8B-Ultra-Instruct
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
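
To reproduce the merge, a config like the one above can be passed to mergekit. The following is a minimal sketch assuming mergekit is installed (`pip install mergekit`) and the config is saved as `config.yml`; the file name and output directory are placeholders, and the Python entry points follow mergekit's documented example but may differ between versions.

```python
# pip install mergekit
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "config.yml"                     # placeholder path to the YAML above
OUTPUT_DIR = "./llama3.1-8b-spaetzle-v59"     # placeholder output directory

# Parse the merge configuration and run the merge
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_DIR,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU for the merge if available
        copy_tokenizer=True,
    ),
)
```

Equivalently, mergekit's command-line entry point (`mergekit-yaml config.yml ./output-dir`) runs the same merge.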

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "cstr/llama3.1-8b-spaetzle-v59"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template, then generate with a text-generation pipeline
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
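
If you prefer to call the model directly rather than through the pipeline helper, a roughly equivalent sketch looks like this (same model ID; the sampling settings are just examples):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "cstr/llama3.1-8b-spaetzle-v59"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What is a large language model?"}]
# Tokenize the chat-formatted prompt and generate a completion
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)

# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```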