grimjim committed on
Commit
04d217e
1 Parent(s): 3a82ec2

Initial release.

.gitattributes CHANGED
@@ -4,6 +4,7 @@
  *.bz2 filter=lfs diff=lfs merge=lfs -text
  *.ckpt filter=lfs diff=lfs merge=lfs -text
  *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gguf filter=lfs diff=lfs merge=lfs -text
  *.gz filter=lfs diff=lfs merge=lfs -text
  *.h5 filter=lfs diff=lfs merge=lfs -text
  *.joblib filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,57 @@
1
  ---
 
 
 
 
 
 
 
2
  license: cc-by-nc-4.0
 
3
  ---
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
  ---
+ base_model:
+ - grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
+ - grimjim/kukulemon-7B
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
  license: cc-by-nc-4.0
+
  ---
+ # kukulemon-32K-7B-GGUF
+
+ These are GGUF quants of a proof-of-concept merge that achieves a functional 32K context length while being derived from [kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B).
+ The functioning 32K context window was folded in via a merge of Mistral 7B v0.2 models.
+ A SLERP merge appears to be viable, but a DARE-TIES merge risks producing a damaged model and is therefore not recommended.
+
+ Although the resulting model natively supports Alpaca prompts, I've also tested successfully with ChatML prompts. Medium temperature (around 1) with low min-p (e.g., 0.01) works with ChatML prompts in my most recent testing.
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the SLERP merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B](https://huggingface.co/grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B)
+ * [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+ - sources:
+   - model: grimjim/kukulemon-7B
+     layer_range: [0, 32]
+   - model: grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
+     layer_range: [0, 32]
+ # or, the equivalent models: syntax:
+ # models:
+ merge_method: slerp
+ base_model: grimjim/kukulemon-7B
+ parameters:
+   t:
+     - filter: self_attn
+       value: [0, 0.5, 0.3, 0.7, 1]
+     - filter: mlp
+       value: [1, 0.5, 0.7, 0.3, 0]
+     - value: 0.5 # fallback for rest of tensors
+ dtype: bfloat16
+
+ ```
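As a quick illustration of the card's reported settings (ChatML prompt, temperature around 1, min-p around 0.01) and the 32K context window, here is a minimal usage sketch with llama-cpp-python. It is not part of this commit; the package choice, prompt text, and filename are illustrative assumptions.

```python
# Minimal sketch (assumed setup): load one of the quants from this repo with
# llama-cpp-python (pip install llama-cpp-python) and sample with a ChatML
# prompt at the settings reported in the model card above.
from llama_cpp import Llama

llm = Llama(
    model_path="kukulemon-32K-7B.Q4_K_M.gguf",  # any quant from this repo
    n_ctx=32768,                                # the merge targets a 32K context window
)

# ChatML-formatted prompt, since ChatML tested successfully per the card.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nGive me a two-sentence summary of SLERP merging.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(
    prompt,
    max_tokens=256,
    temperature=1.0,   # "medium temperature (around 1)"
    min_p=0.01,        # "low min-p (e.g., 0.01)"
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```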
kukulemon-32K-7B.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b35cd33433796ed7a327e8cf95a82550f252e356910f5daf6d5176fa005e4c4
+ size 4368439040
kukulemon-32K-7B.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f2ef788258506e9db035dabd989cf9312090366329ec3d2fdd8dd3f150310e48
+ size 5131409152
kukulemon-32K-7B.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac0e5d7b2bd1b5dfb077c42be5b0da99ca3b6feb683fca4c26e26c9615406aeb
+ size 5942064896
kukulemon-32K-7B.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d03d9c2eca978bf240fe862140a1eae41662576571aa8899ba9e86efff029682
+ size 7695857376
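The four LFS pointers above correspond to the Q4_K_M, Q5_K_M, Q6_K, and Q8_0 quants. A minimal sketch for fetching one of them with huggingface_hub follows; the repository id is an assumption inferred from the card title, not stated in this commit.

```python
# Assumed-repo sketch: download one of the GGUF quants listed above with
# huggingface_hub (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="grimjim/kukulemon-32K-7B-GGUF",  # assumption: inferred from the card title
    filename="kukulemon-32K-7B.Q4_K_M.gguf",  # or any of the other quants listed above
)
print(local_path)  # path to the cached GGUF file, ready to pass to a GGUF loader
```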