---
language:
- en
pipeline_tag: text-generation
---
# DaringMaid-20B

My initial goal was to make a Noromaid that's smarter and better at following instructions.

After trying a bunch of different recipes, I think this one turned out pretty good.

- I used [sequelbox/DynamicFactor](https://huggingface.co/sequelbox/DynamicFactor) as a base, since it's supposed to "improve overall knowledge, precise communication, conceptual understanding, and technical skill" over the base Llama 2.
- [NeverSleep/Noromaid](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1), of course.
- [Undi95/Utopia](https://huggingface.co/Undi95/Utopia-13B) has been recommended again recently, and it's still really good, so into the mixer it goes.
- I enjoyed [tavtav/Rose](https://huggingface.co/tavtav/Rose-20B), so I threw in a bit of [CalderaAI/Thorns](https://huggingface.co/CalderaAI/13B-Thorns-l2).
- There was recently a model that tried to pass itself off as [Gryphe/MythoMax](https://huggingface.co/Gryphe/MythoMax-L2-13b). I made a merge with that model before it was revealed to be just MythoMax, and the result turned out pretty good; I think it improved the ability to follow instructions a lot.

The .yml config files with the exact merges can be found in the "Recipe" folder in the [fp16 repo](https://huggingface.co/Kooten/DaringMaid).

There is no wisdom to be found there; I do not know what I am doing.
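If you just want to try the full-precision merge, a minimal Transformers sketch might look like this. The repo id is taken from the fp16 link above, and everything else (dtype, device placement) is an assumption you should adjust to your hardware; a 20B model in fp16 needs roughly 40 GB of memory.

```python
# Sketch: load the fp16 merge with Hugging Face Transformers.
# Repo id taken from the fp16 link above; dtype/device settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kooten/DaringMaid"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights as published
    device_map="auto",          # requires accelerate; spreads layers across GPU/CPU
)
```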
# Quants
[GGUF](https://huggingface.co/Kooten/DaringMaid-20B-GGUF), EXL2: [6bpw](https://huggingface.co/Kooten/DaringMaid-20B-6bpw-exl2), [3bpw](https://huggingface.co/Kooten/DaringMaid-20B-3bpw-exl2)
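If you'd rather run a quant, a rough llama-cpp-python sketch for the GGUF files could look like the following. The `.gguf` filename below is only a placeholder; pick a real file from the GGUF repo's file list.

```python
# Sketch: download one of the GGUF quants and load it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The filename here is a placeholder; use an actual file from the GGUF repo.
gguf_path = hf_hub_download(
    repo_id="Kooten/DaringMaid-20B-GGUF",
    filename="daringmaid-20b.Q4_K_M.gguf",
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)
```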
# Prompt Format: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```
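To put that template to use, a tiny helper like the one below fills in the instruction slot; the example instruction is just a placeholder, and the resulting string is what you pass to whichever backend you loaded above.

```python
# Sketch: build an Alpaca-formatted prompt from a single instruction.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Return a full Alpaca-formatted prompt for one instruction."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("Write a short scene where a maid outwits a dragon."))
```

When generating, adding `### Instruction:` as a stop string is a common way to keep the model from starting a new turn on its own.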