---
base_model:
- akjindal53244/Llama-3.1-Storm-8B
- Sao10K/L3.1-8B-Niitama-v1.1
library_name: transformers
tags:
- merge
- llama
---

# Llama-3.1-Niitorm-8B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/dq5jKo1eCV8qapmfF4h3V.png)

An RP model: mostly abliterated Niitama 1.1 as the base, nearswapped with "Storm", one of the smartest Llama 3.1 models.

## Thank you mradermacher for the quants!
* [GGUF](https://huggingface.co/mradermacher/L3.1-Niitorm-8B-t0.0001-GGUF)
* [GGUF imatrix](https://huggingface.co/mradermacher/L3.1-Niitorm-8B-t0.0001-i1-GGUF)

## Thank you QuantFactory for the quants!
* [GGUF](https://huggingface.co/QuantFactory/L3.1-Niitorm-8B-t0.0001-GGUF)

-------------------------------------------------------------------------------

## Merge

This is a merge of pre-trained language models, produced with [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details
### Merge Method

This model was merged using the **NEARSWAP t0.0001** merge algorithm.

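NearSwap keeps the base weights wherever the two models disagree, and interpolates toward the secondary model wherever their parameters nearly agree; `t` sets the agreement threshold (here 0.0001, so only near-identical weights get swapped). A minimal numpy sketch of the interpolation, assuming the formulation Alchemonaut published (function names are illustrative):

```python
import numpy as np

def lerp(t, v0, v1):
    # Element-wise linear interpolation between base (v0) and secondary (v1).
    return (1 - t) * v0 + t * v1

def nearswap(t, v0, v1):
    # Interpolation strength is t / |v0 - v1|: parameters that differ by less
    # than t are swapped fully to v1; strongly differing ones stay at v0.
    with np.errstate(divide="ignore", invalid="ignore"):
        lweight = t / np.abs(v0 - v1)
    lweight = np.nan_to_num(lweight, nan=1.0, posinf=1.0, neginf=1.0)
    np.clip(lweight, a_min=0.0, a_max=1.0, out=lweight)
    return lerp(lweight, v0, v1)
```
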
### Models Merged

The following models were included in the merge:
* Base Model: [Sao10K/L3.1-8B-Niitama-v1.1](https://huggingface.co/Sao10K/L3.1-8B-Niitama-v1.1) + [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B)
* [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Sao10K/L3.1-8B-Niitama-v1.1+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
        layer_range: [0, 32]
      - model: akjindal53244/Llama-3.1-Storm-8B
        layer_range: [0, 32]
merge_method: nearswap
base_model: Sao10K/L3.1-8B-Niitama-v1.1+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
parameters:
  t:
    - value: 0.0001
dtype: bfloat16
out_type: float16 #oops
```
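
To reproduce the merge, the same YAML can be fed to mergekit (the CLI entry point is `mergekit-yaml config.yaml ./out`). Below is a hedged sketch using mergekit's Python API, assuming a recent install (`pip install git+https://github.com/arcee-ai/mergekit.git`); the file and output paths are illustrative:

```python
# Sketch: run the merge config above through mergekit's Python API.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml") as f:  # the YAML block above, saved to disk
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Llama-3.1-Niitorm-8B",  # illustrative output directory
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```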

# Prompt Template:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>

```
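This is the standard Llama 3 instruct format, so the tokenizer's chat template should produce it automatically. A small sketch with transformers (the repo id below is an assumption for illustration; substitute the actual repository or a local path):

```python
# Sketch: render the Llama-3 instruct prompt via the tokenizer's chat template.
from transformers import AutoTokenizer

# Assumed repo id; replace with the real repository or a local directory.
tokenizer = AutoTokenizer.from_pretrained("v000000/L3.1-Niitorm-8B-t0.0001")

messages = [
    {"role": "system", "content": "You are a helpful roleplay assistant."},
    {"role": "user", "content": "Hello!"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # <|begin_of_text|><|start_header_id|>system<|end_header_id|>...
```
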
Credit to Alchemonaut for the NearSwap algorithm, and to woofwolfy for the idea.