Update README.md
README.md CHANGED

---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- llama-2
license: llama2
---

# Model Card: Pygmalion-2-13b-SuperCOT-weighted

This is an experimental weighted merge between:
- [Pygmalion 2 13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
- [Ausboss's Llama2 SuperCOT loras](https://huggingface.co/ausboss/llama2-13b-supercot-loras)

The merge was performed with the gradient merge script `apply-lora-weight-ltl.py` from Zaraki's [zaraki-tools](https://github.com/zarakiquemparte/zaraki-tools).

Thanks to Zaraki for the inspiration and help.

This merge differs from the previous Pyg-2-SuperCOT merges: the first iteration of the SuperCOT loras was used here, since it performed better than SuperCOT2.

The SuperCOT lora was merged with the following layer weights (roughly 50/50: 19 zeros, one 0.5, and 20 ones across 40 layers average out to 20.5/40 ≈ 0.51):
```
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0.5,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
```

Here is an image to help visualize this merge. The light blue is Pygmalion-2-13b and the light green is the SuperCOT lora:
![gradient-image](https://files.catbox.moe/ndbz7t.png)
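
For illustration, here is a minimal sketch of how a per-layer ("gradient") merge like this works: each layer's lora delta is scaled by that layer's weight before being added to the base weights. This is not the actual `apply-lora-weight-ltl.py`, and the helper below is hypothetical.

```
# Sketch of a per-layer lora merge; merge_lora_layer is a hypothetical
# helper, not part of zaraki-tools.
import torch

# The 40 per-layer weights listed above.
LAYER_WEIGHTS = [0.0] * 19 + [0.5] + [1.0] * 20

def merge_lora_layer(base_w: torch.Tensor,
                     lora_a: torch.Tensor,
                     lora_b: torch.Tensor,
                     scaling: float,
                     layer_weight: float) -> torch.Tensor:
    # Standard lora delta (B @ A, times the usual alpha/r scaling),
    # attenuated by this layer's merge weight.
    delta = (lora_b @ lora_a) * scaling
    return base_w + layer_weight * delta

# e.g. layer 19 receives the delta at half strength:
# merged_w = merge_lora_layer(w, A, B, alpha / r, LAYER_WEIGHTS[19])
```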

## Usage

Since this is an experimental weight merge between Pygmalion-2 and SuperCOT, either of the following instruction formats should work:

Metharme:

```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>
```

Alpaca:

```
### Instruction:
Your instruction or question here.
### Response:
```
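
As a quick start, the snippet below loads the merge with transformers and generates from the Metharme prompt above. The repo id is a placeholder; substitute the actual model path.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; point this at wherever the merged weights live.
repo = "your-namespace/Pygmalion-2-13b-SuperCOT-weighted"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto")

prompt = ("<|system|>This is a text adventure game. Describe the scenario "
          "to the user and give him three options to pick from on each "
          "turn.<|user|>Start!<|model|>")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True,
                        temperature=0.8)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```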

## Bias, Risks, and Limitations

The model will show biases similar to those observed in niche roleplaying forums on the Internet, in addition to those exhibited by the base model. It is not intended to supply factual information or advice in any form.

In addition, this merge is experimental, and our own testing has been limited; your results may vary.

## Training Details

This model is a merge and can be reproduced with the tools mentioned above. Please refer to the links provided for further model-specific details.
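
One way to sanity-check a reproduction follows from the layer weights above: layers 0-18 receive none of the SuperCOT lora, so their tensors should match Pygmalion-2-13b exactly, while layers 19-39 should differ. The check below is our own illustration, not from the original tooling, and the merged repo id is again a placeholder.

```
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "PygmalionAI/pygmalion-2-13b", torch_dtype=torch.float16)
merged = AutoModelForCausalLM.from_pretrained(
    "your-namespace/Pygmalion-2-13b-SuperCOT-weighted",  # placeholder id
    torch_dtype=torch.float16)

for i in (0, 18, 19, 39):
    a = base.model.layers[i].self_attn.q_proj.weight
    b = merged.model.layers[i].self_attn.q_proj.weight
    # Expect True for layers 0 and 18 (merge weight 0),
    # False for layers 19 and 39 (merge weights 0.5 and 1).
    print(i, torch.equal(a, b))
```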