Quazim0t0 committed 735ad91 (verified; parent 7105ad2): Update README.md
tags:
- lazymergekit
---

# Jekyl-8b-sce

Jekyl-8b-sce is a merge of the models listed in the configuration below, created with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing).

## 🧩 Configuration

```yaml
models:
  # Pivot model
  - model: jaspionjader/bh-61
  # Target models
  - model: bunnycore/HyperLlama-3.1-8B
merge_method: sce
base_model: jaspionjader/Kosmos-EVAA-immersive-mix-v45.1-8B
parameters:
  select_topk: 1.5
dtype: bfloat16
```
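In SCE, `select_topk` governs the selection step: only the parameter positions where the merged models' deltas from the base disagree most are retained. The sketch below is a hypothetical, simplified plain-Python illustration of that idea, not mergekit's implementation; `select_topk_mask` is an invented helper name, and it treats `topk` as a fraction of positions (the `1.5` in the config above is mergekit-specific, so its exact semantics for values above 1 are left to mergekit).

```python
# Hypothetical sketch of SCE's "select" step (NOT mergekit's code):
# keep the top-k fraction of positions where per-model deltas from the
# base have the highest variance (i.e. where the models disagree most).
from statistics import pvariance

def select_topk_mask(deltas, topk):
    """deltas: list of per-model delta vectors (lists of floats).
    Returns a 0/1 mask keeping the round(topk * n) highest-variance positions."""
    n = len(deltas[0])
    variances = [pvariance([d[i] for d in deltas]) for i in range(n)]
    k = max(1, min(n, round(topk * n)))
    # indices of the k positions with the largest cross-model variance
    keep = sorted(range(n), key=lambda i: variances[i], reverse=True)[:k]
    mask = [0] * n
    for i in keep:
        mask[i] = 1
    return mask

# Two toy "task vectors" (model weights minus base weights), 4 positions each
deltas = [
    [0.1, -0.5, 0.0, 0.2],
    [0.1,  0.5, 0.0, -0.2],
]
print(select_topk_mask(deltas, 0.5))  # -> [0, 1, 0, 1]: keeps the two most-contested positions
```

To reproduce the merge itself, mergekit's `mergekit-yaml` CLI takes a config file like the one above plus an output directory.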

## 💻 Usage

```python
# pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Quazim0t0/Jekyl-8b-sce"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```