---
base_model: kromeurus/L3.1-Aglow-Vulca-v0.1-8B
library_name: transformers
license: cc-by-nc-4.0
tags:
- mergekit
- merge
- roleplay
- RP
- storytelling
- llama-cpp
- gguf-my-repo
---

# Triangle104/L3.1-Aglow-Vulca-v0.1-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`kromeurus/L3.1-Aglow-Vulca-v0.1-8B`](https://huggingface.co/kromeurus/L3.1-Aglow-Vulca-v0.1-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/kromeurus/L3.1-Aglow-Vulca-v0.1-8B) for more details on the model.

---

Model Details & Recommended Settings
-

This is a storytelling-first model that is proficient in narrative-driven RP. It does best with straightforward instructions; any wishy-washy language will confuse it. As per usual with any of my models that have Formax in them, it's pretty sensitive to instructs, so choose your words wisely.

Once it gets going, it's able to generate detailed and human-ish outputs with lots of personality depending on the information given. Has a habit of matching the style and format of the input: prose style, spacing, grammar, etc. Can interweave details from the character persona, chat history, and user persona (if there is one) to create unique interactions and plot points. Leans more or less positive naturally, but can be flipped if prompted correctly.

Being a Llama 3.1 model, it's still subject to the usual pros and cons of L3/L3.1, but I'd like to think I tamed some of them. Keep the temp on the lower end, since there is a small chance it might freak out. If it does, swipe/regen the chat or delete the afflicted output and try again.

Rec. Settings:

- Template: Llama 3
- Token Count: 128k max
- Temperature: 1.2
- Min P: 0.1
- Repeat Penalty: 1.05
- Repeat Penalty Tokens: 256
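
If you're running the GGUF locally, these settings map fairly directly onto llama.cpp's sampler flags. A minimal sketch (the context size and prompt here are placeholders, not part of the original recommendations):

```bash
# Recommended samplers mapped to llama.cpp flags; --repeat-last-n is the
# "Repeat Penalty Tokens" window. The context size (-c) is an arbitrary example.
llama-cli --hf-repo Triangle104/L3.1-Aglow-Vulca-v0.1-8B-Q4_K_M-GGUF \
  --hf-file l3.1-aglow-vulca-v0.1-8b-q4_k_m.gguf \
  --temp 1.2 --min-p 0.1 \
  --repeat-penalty 1.05 --repeat-last-n 256 \
  -c 8192 -p "Your prompt here"
```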

Merge Theory
-

Where to begin. The general thought process was still roughly the same as Ablaze's: make one very smart model and another, more creativity-focused model. This time, I merged Formax and RPMax in separately instead of doing one merge, since they have different focuses.

'Apollobulk' is the smarts, having the storytelling capabilities from badger writer, the instruct following from Formax (duh), and the smarts of SuperNova. Apollo 0.4 was used as an RP temper to keep the overall model aligned with RP. Apollo 2.0 wasn't used, as it skewed the merge too far towards inconsistent narratives.

'Reshape' is the creative end, taking some inspo from Ablaze's creative center. First, I created 'Darkened' as the main influence over the final writing style of Aglow. Poppy Moonfall C had the personality I was looking for but lacked the smarts (which, though not the priority, were still necessary), so the other three models were added to round out its overall capabilities while keeping it very creative. Plop that atop RPMax (for excellent unique RP interactions), BRAG (serious recall), and Natsumura (a great storytelling/RP base), model_stock it, and you get a really solid model on its own.

Slap the two components together in a simple gradient dare_linear merge and boom: this unit of an 8B model. The gradient weights ([0.1, 0.9] against [0.9, 0.1]) interpolate across the layer stack, so the early layers lean on apollobulk and the later layers shift towards reshape. As of writing and releasing this model, mergekit is fucked for me (one of its dependencies has broken L3 merging), so I can't test any other methods atm. If there is a better final merge method, I'll upload a v0.2 once the bug is fixed.

This time around, everything was done with DavidAU's High Quality method, merging with float32 at all steps. It made a significant difference in nuanced understanding of text.

Config
-

```yaml
models:
    - model: Locutusque/Apollo-0.4-Llama-3.1-8B
    - model: maldv/badger-writer-llama-3-8b
    - model: ArliAI/Llama-3.1-8B-ArliAI-Formax-v1.0
base_model: arcee-ai/Llama-3.1-SuperNova-Lite
parameters:
  int8_mask: true
merge_method: model_stock
dtype: float32
tokenizer_source: base
name: apollobulk


models:
    - model: v000000/L3-8B-Poppy-Moonfall-C
    - model: Casual-Autopsy/Jamet-L3-Stheno-BlackOasis-8B
    - model: SicariusSicariiStuff/Dusk_Rainbow
base_model: ResplendentAI/Rawr_Llama3_8B
parameters:
  int8_mask: true
merge_method: model_stock
dtype: float32
tokenizer_source: base
name: darkened


models:
    - model: darkened
    - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
    - model: maximalists/BRAG-Llama-3.1-8b-v0.1
base_model: tohur/natsumura-storytelling-rp-1.0-llama-3.1-8b
parameters:
  int8_mask: true
merge_method: model_stock
dtype: float32
tokenizer_source: base
name: reshape


models:
  - model: reshape
    parameters:
      weight: [0.1, 0.9]
  - model: apollobulk
    parameters:
      weight: [0.9, 0.1]
base_model: reshape
tokenizer_source: base
parameters:
  normalize: false
  int8_mask: true
merge_method: dare_linear
dtype: float32
name: vulca
```
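
For reference, a config like this is normally run with mergekit's CLI. A minimal sketch, assuming the YAML above is saved as `aglow-vulca.yml` (the filename and output directory are hypothetical, and older mergekit versions may need the four stages split into separate files and run in order):

```bash
pip install mergekit
# mergekit-yaml reads the config and writes the merged model to the output dir.
mergekit-yaml aglow-vulca.yml ./L3.1-Aglow-Vulca-v0.1-8B --cuda
```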

---

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/L3.1-Aglow-Vulca-v0.1-8B-Q4_K_M-GGUF --hf-file l3.1-aglow-vulca-v0.1-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/L3.1-Aglow-Vulca-v0.1-8B-Q4_K_M-GGUF --hf-file l3.1-aglow-vulca-v0.1-8b-q4_k_m.gguf -c 2048
```
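
Once the server is running, you can hit its native HTTP API. A minimal sketch using the recommended sampler settings from above (assumes the default port 8080; adjust if you pass `--port`):

```bash
# Query llama-server's native /completion endpoint with the recommended samplers.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "The meaning to life and the universe is",
    "n_predict": 128,
    "temperature": 1.2,
    "min_p": 0.1,
    "repeat_penalty": 1.05,
    "repeat_last_n": 256
  }'
```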

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/L3.1-Aglow-Vulca-v0.1-8B-Q4_K_M-GGUF --hf-file l3.1-aglow-vulca-v0.1-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or 
```bash
./llama-server --hf-repo Triangle104/L3.1-Aglow-Vulca-v0.1-8B-Q4_K_M-GGUF --hf-file l3.1-aglow-vulca-v0.1-8b-q4_k_m.gguf -c 2048
```