wolfram committed on
Commit bdfc8ff · verified · 1 parent: 05e4cc0

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +106 -0
  3. miquliz-120b.IQ3_XXS.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+miquliz-120b.IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,106 @@
---
base_model:
- 152334H/miqu-1-70b-sf
- lizpreciatior/lzlv_70b_fp16_hf
language:
- en
- de
- fr
- es
- it
library_name: transformers
tags:
- mergekit
- merge
---

# miquliz-120b-GGUF

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6303ca537373aacccd85d8a7/RFEW_K0ABp3k_N3j02Ki4.jpeg)

- HF FP16: [wolfram/miquliz-120b](https://huggingface.co/wolfram/miquliz-120b)

This is a 120b frankenmerge created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with [lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf) using [mergekit](https://github.com/cg123/mergekit).

Inspired by [goliath-120b](https://huggingface.co/alpindale/goliath-120b).

Thanks for the support, [CopilotKit](https://github.com/CopilotKit/CopilotKit) - the open-source platform for building in-app AI Copilots into any product, with any LLM. Check out their GitHub.

## Prompt template: Mistral

```
<s>[INST] {prompt} [/INST]
```
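
As a minimal sketch of applying the template above in Python (`format_mistral_prompt` is a hypothetical helper, not part of this repo; note that many backends add the `<s>` BOS token during tokenization, in which case it should be omitted from the prompt string):

```python
def format_mistral_prompt(prompt: str) -> str:
    # Wrap a single user turn in the Mistral instruct template.
    return f"<s>[INST] {prompt} [/INST]"

print(format_mistral_prompt("Write a haiku about llamas."))
# <s>[INST] Write a haiku about llamas. [/INST]
```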

See also: [🐺🐦‍⬛ LLM Prompt Format Comparison/Test: Mixtral 8x7B Instruct with **17** different instruct templates : LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/)

## Model Details

- Max Context: 32768 tokens
- Layers: 137

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:

- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 16]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [8, 24]
    model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
  - layer_range: [17, 32]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [25, 40]
    model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
  - layer_range: [33, 48]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [41, 56]
    model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
  - layer_range: [49, 64]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [57, 72]
    model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
  - layer_range: [65, 80]
    model: 152334H/miqu-1-70b-sf
```
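
As a sanity check, the slice ranges in this config sum to the 137 layers listed under Model Details, assuming end-exclusive `layer_range` bounds (which is how mergekit interprets them, to my understanding):

```python
# layer_range values copied from the mergekit config above.
slices = [
    (0, 16), (8, 24), (17, 32), (25, 40), (33, 48),
    (41, 56), (49, 64), (57, 72), (65, 80),
]
# With end-exclusive bounds, each slice contributes end - start layers.
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 137
```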

## Credits & Special Thanks

- 1st model:
  - original (unreleased) model: [mistralai (Mistral AI_)](https://huggingface.co/mistralai)
  - leaked model: [miqudev/miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b)
  - f16 model: [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- 2nd model: [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
- mergekit: [arcee-ai/mergekit: Tools for merging pretrained large language models.](https://github.com/arcee-ai/mergekit)
- mergekit_config.yml: [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)
- gguf quantization: [ggerganov/llama.cpp: Port of Facebook's LLaMA model in C/C++](https://github.com/ggerganov/llama.cpp)

### Support

- [My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested or merged with priority. Also consider supporting your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!

#### DISCLAIMER: THIS IS [BASED ON A LEAKED ASSET](https://huggingface.co/miqudev/miqu-1-70b/discussions/10) AND HAS NO LICENSE ASSOCIATED WITH IT. USE AT YOUR OWN RISK.
miquliz-120b.IQ3_XXS.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7faa5c2b1051656ea03bf043e1e22e0faf2f0721eabf520d40d7137a6088adae
size 45883977568
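
As a rough sanity check (an illustration, not part of the upload): dividing the pointer's `size` field by a nominal 120 billion parameters gives about 3.06 bits per weight, in line with the IQ3_XXS quantization level. The model's exact parameter count differs somewhat from the nominal "120b", so treat the figure as approximate:

```python
size_bytes = 45_883_977_568   # `size` field from the LFS pointer above
nominal_params = 120e9        # nominal "120b"; exact count differs slightly
bits_per_weight = size_bytes * 8 / nominal_params
print(round(bits_per_weight, 2))  # 3.06
```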