Update README.md
README.md
CHANGED
@@ -15,6 +15,11 @@ This is an experimental weighted merge between:
 - [Pygmalion 2 13b](https://huggingface.co/PygmalionAI/pygmalion-2-13b)
 - [Ausboss's Llama2 SuperCOT loras](https://huggingface.co/ausboss/llama2-13b-supercot-loras)
 
+Quantizations provided by us and TheBloke:
+- [Exl2](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCOT-weighed-exl2)
+- [GPTQ](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GPTQ)
+- [GGUF](https://huggingface.co/TheBloke/Pygmalion-2-13B-SuperCOT-weighed-GGUF)
+
 The merge was performed by a gradient merge script (apply-lora-weight-ltl.py) from [zaraki-tools](https://github.com/zarakiquemparte/zaraki-tools) by Zaraki.
 
 Thanks to Zaraki for the inspiration and help.
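
As a note on the gradient merge mentioned in the diff: the sketch below illustrates the general idea of a layer-to-layer ("ltl") gradient LoRA merge, assuming the LoRA's influence is ramped linearly from the first to the last layer. The function name, parameters, and ramp shape here are illustrative assumptions, not the actual behavior of apply-lora-weight-ltl.py from zaraki-tools.

```python
# Minimal sketch of a layer-to-layer ("ltl") gradient LoRA merge.
# Assumption: the LoRA weight ramps linearly across layers; the real
# apply-lora-weight-ltl.py may use a different schedule.
import torch


def gradient_merge(base_layers, lora_deltas, start=0.0, end=1.0):
    """Blend per-layer LoRA deltas into base weights with a linear ramp.

    base_layers: per-layer base weight tensors
    lora_deltas: matching LoRA delta tensors (B @ A, already scaled)
    start, end:  LoRA weight applied at the first and last layer
    """
    n = len(base_layers)
    merged = []
    for i, (w, delta) in enumerate(zip(base_layers, lora_deltas)):
        # Linearly interpolate the merge weight from start to end.
        alpha = start + (end - start) * (i / max(n - 1, 1))
        merged.append(w + alpha * delta)
    return merged


# Toy usage: two layers, ramping the LoRA from 0% to 100% influence.
base = [torch.zeros(2, 2), torch.zeros(2, 2)]
deltas = [torch.ones(2, 2), torch.ones(2, 2)]
merged = gradient_merge(base, deltas)  # layer 0 untouched, layer 1 fully merged
```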