---
base_model:
- Test157t/Pasta-Lake-7b
- Test157t/Prima-LelantaclesV4-7b-16k
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
pipeline_tag: text-generation
inference: false
---
**GGUF quantizations for [ChaoticNeutrals/Prima-LelantaclesV5-7b](https://huggingface.co/ChaoticNeutrals/Prima-LelantaclesV5-7b).**

*If you want any specific quantization to be added, feel free to ask.*
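
A minimal usage sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the model file name below is a hypothetical example and should be replaced with whichever quant you download:

```python
# Minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python). The model file name is a
# hypothetical example; use whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Prima-LelantaclesV5-7b.q4_k_m.gguf",  # assumed file name
    n_ctx=4096,  # context window size
)

out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```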

All credits belong to the respective creators.

`Base ⇢ GGUF(F16) ⇢ GGUF(Quants)`

Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp/) release b2222. **For quants produced with `--imatrix`, the included reference `imatrix.dat` was used.**
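
The pipeline above can be reproduced roughly as follows. This is an illustrative sketch, not the exact commands used: it assumes a local llama.cpp checkout built at b2222, and the file names and the `Q4_K_M` target are placeholders.

```python
# Illustrative sketch of the Base ⇢ GGUF(F16) ⇢ GGUF(Quants) pipeline,
# assuming a local llama.cpp checkout at b2222 built with `make`.
# File names and the Q4_K_M target are placeholders.
import subprocess

# 1) Convert the Hugging Face checkpoint to a full-precision F16 GGUF.
subprocess.run(
    ["python", "convert.py", "Prima-LelantaclesV5-7b",
     "--outtype", "f16",
     "--outfile", "Prima-LelantaclesV5-7b.f16.gguf"],
    check=True,
)

# 2) Quantize the F16 GGUF, applying the reference importance matrix.
subprocess.run(
    ["./quantize", "--imatrix", "imatrix.dat",
     "Prima-LelantaclesV5-7b.f16.gguf",
     "Prima-LelantaclesV5-7b.q4_k_m.gguf",
     "Q4_K_M"],
    check=True,
)
```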

# Original model information:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/iZWd2VINrrl-ToMoD9ZUp.png)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/_AugGaelWylUuIIDmYOXG.jpeg)

SillyTavern presets for this model are available in the original repository: https://huggingface.co/ChaoticNeutrals/Prima-LelantaclesV5-7b/tree/main/ST%20presets

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method.

The following models were included in the merge:
* [Test157t/Pasta-Lake-7b](https://huggingface.co/Test157t/Pasta-Lake-7b)
* [Test157t/Prima-LelantaclesV4-7b-16k](https://huggingface.co/Test157t/Prima-LelantaclesV4-7b-16k)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
base_model: Test157t/Prima-LelantaclesV4-7b-16k
parameters:
  normalize: true
models:
  - model: Test157t/Pasta-Lake-7b
    parameters:
      weight: 1
  - model: Test157t/Prima-LelantaclesV4-7b-16k
    parameters:
      weight: 1
dtype: float16
```
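
For reference, a config like this is typically run with [mergekit](https://github.com/arcee-ai/mergekit). A hypothetical invocation (config and output paths are assumptions) might look like:

```python
# Hypothetical reproduction of the merge, assuming mergekit is
# installed (pip install mergekit) and the YAML above is saved as
# dare_ties.yml. The output directory name is an assumption.
import subprocess

subprocess.run(
    ["mergekit-yaml", "dare_ties.yml", "./Prima-LelantaclesV5-7b"],
    check=True,
)
```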