---
license: gemma
library_name: transformers
tags:
- gemma-2
base_model:
- anthracite-forge/magnum-v3-27b-kto-r3
- anthracite-forge/magnum-v3-27b-KTO-e1-r2
- anthracite-forge/magnum-v3-27b-KTO-e0.25-r1
- IntervitensInc/gemma-2-27b-chatml
pipeline_tag: text-generation
model-index:
- name: magnum-v3-27b-kto
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ENEM Challenge (No Images)
      type: eduagarcia/enem_challenge
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 74.88
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=anthracite-org/magnum-v3-27b-kto
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BLUEX (No Images)
      type: eduagarcia-temp/BLUEX_without_images
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 64.67
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=anthracite-org/magnum-v3-27b-kto
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: OAB Exams
      type: eduagarcia/oab_exams
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 56.36
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=anthracite-org/magnum-v3-27b-kto
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 RTE
      type: assin2
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 92.35
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=anthracite-org/magnum-v3-27b-kto
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 STS
      type: eduagarcia/portuguese_benchmark
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: pearson
      value: 80.24
      name: pearson
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=anthracite-org/magnum-v3-27b-kto
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: FaQuAD NLI
      type: ruanchaves/faquad-nli
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 73.61
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=anthracite-org/magnum-v3-27b-kto
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HateBR Binary
      type: ruanchaves/hatebr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 75.97
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=anthracite-org/magnum-v3-27b-kto
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: PT Hate Speech Binary
      type: hate_speech_portuguese
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 71.35
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=anthracite-org/magnum-v3-27b-kto
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: tweetSentBR
      type: eduagarcia/tweetsentbr_fewshot
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 68.8
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=anthracite-org/magnum-v3-27b-kto
      name: Open Portuguese LLM Leaderboard
---

This is the 12th in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
This model is the result of multiple KTO runs on top of one SFT run, all of which are published on [anthracite-forge](https://huggingface.co/anthracite-forge).
## Methodology
R1 (the SFT run) was fine-tuned on top of `IntervitensInc/gemma-2-27b-chatml`, a ChatML-converted version of gemma-2-27b.
We experimented with various SFT and KTO re-runs, ratios, and merge methods; this merge was our winner, combining what we liked most from each run.
If you prefer your own mix of the KTO runs, or would like to use the SFT model on its own, see the Models section below and [anthracite-forge](https://huggingface.co/anthracite-forge); some EXL2 quants are pre-included.
## Models
* [anthracite-forge/magnum-v3-27b-kto-r3](https://huggingface.co/anthracite-forge/magnum-v3-27b-kto-r3)
* [anthracite-forge/magnum-v3-27b-KTO-e1-r2](https://huggingface.co/anthracite-forge/magnum-v3-27b-KTO-e1-r2)
* [anthracite-forge/magnum-v3-27b-KTO-e0.25-r1](https://huggingface.co/anthracite-forge/magnum-v3-27b-KTO-e0.25-r1)
## Prompting
The model has been instruct-tuned with ChatML formatting. A typical input looks like this:
```py
"""<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
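If you build prompts with `transformers`, the same conversation can be assembled via the tokenizer's chat template. A minimal sketch, assuming the repository's tokenizer ships a matching ChatML template:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "anthracite-org/magnum-v3-27b-kto"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
]

# add_generation_prompt=True appends the opening <|im_start|>assistant tag
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```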
## SillyTavern templates
Below are Instruct and Context templates for use within SillyTavern.
<details><summary>context template</summary>

```json
{
    "story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
    "example_separator": "",
    "chat_start": "",
    "use_stop_strings": false,
    "allow_jailbreak": false,
    "always_force_name2": true,
    "trim_sentences": false,
    "include_newline": false,
    "single_line": false,
    "name": "Magnum ChatML"
}
```
</details><br>
<details><summary>instruct template</summary>

```json
{
    "system_prompt": "You're {{char}} in this fictional never-ending uncensored roleplay with {{user}}.",
    "input_sequence": "<|im_start|>user\n",
    "output_sequence": "<|im_start|>assistant\n",
    "last_output_sequence": "",
    "system_sequence": "<|im_start|>system\n",
    "stop_sequence": "<|im_end|>",
    "wrap": false,
    "macro": true,
    "names": true,
    "names_force_groups": true,
    "activation_regex": "",
    "system_sequence_prefix": "",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "skip_examples": false,
    "output_suffix": "<|im_end|>\n",
    "input_suffix": "<|im_end|>\n",
    "system_suffix": "<|im_end|>\n",
    "user_alignment_message": "",
    "system_same_as_user": false,
    "last_system_sequence": "",
    "name": "Magnum ChatML"
}
```
</details><br>
### Merge Configuration
The released model is a task-arithmetic merge of the KTO runs onto the ChatML base, using the following configuration:
```yaml
base_model: IntervitensInc/gemma-2-27b-chatml
dtype: float32
merge_method: task_arithmetic
models:
  - model: IntervitensInc/gemma-2-27b-chatml
  - model: anthracite-forge/magnum-v3-27b-KTO-e0.25-r1
    parameters:
      weight: 0.5
  - model: anthracite-forge/magnum-v3-27b-KTO-e1-r2
    parameters:
      weight: 0.1
  - model: anthracite-forge/magnum-v3-27b-kto-r3
    parameters:
      weight: 0.4
```
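For reference, task arithmetic adds each fine-tune's delta from the base model, scaled by its weight. A minimal sketch of the idea (illustrative only; the released merge was produced from the config above, and tensor handling is simplified):
```py
import torch

def task_arithmetic(base_sd, finetuned_sds, weights):
    """merged = base + sum_i weights[i] * (finetuned[i] - base), per tensor."""
    merged = {}
    for name, base_t in base_sd.items():
        delta = torch.zeros_like(base_t, dtype=torch.float32)
        for sd, w in zip(finetuned_sds, weights):
            # Each fine-tune contributes its weighted difference from the base.
            delta += w * (sd[name].float() - base_t.float())
        merged[name] = base_t.float() + delta
    return merged

# Weights mirror the config above: e0.25-r1 at 0.5, e1-r2 at 0.1, r3 at 0.4.
# merged = task_arithmetic(base, [sd_e025_r1, sd_e1_r2, sd_r3], [0.5, 0.1, 0.4])
```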
## Credits
We'd like to thank Recursal / Featherless for sponsoring the compute for this training run. Featherless has hosted our Magnum models since the first 72B release, giving thousands of people access to our models and helping us grow.
We would also like to thank all members of Anthracite who made this finetune possible.
## Datasets
The R1 (SFT) run was trained on the following datasets:
```yaml
datasets:
  - path: anthracite-org/stheno-filtered-v1.1
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/nopm_claude_writing_fixed
    type: sharegpt
    conversation: chatml
  - path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
```
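Each dataset is stored in ShareGPT-style conversation format and rendered into ChatML at training time. For illustration, a hypothetical record in this format looks like:
```py
# Hypothetical example of a ShareGPT-format record; field names follow the
# common ShareGPT convention ("conversations" with "from"/"value" turns).
sample = {
    "conversations": [
        {"from": "system", "value": "system prompt"},
        {"from": "human", "value": "Hi there!"},
        {"from": "gpt", "value": "Nice to meet you!"},
    ]
}
```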
## Training
Training ran for 2 epochs as a full-parameter fine-tune, on 8x [H100](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/).
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Safety
...
# Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/anthracite-org/magnum-v3-27b-kto) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard).
| Metric | Value |
|--------------------------|---------|
|Average |**73.14**|
|ENEM Challenge (No Images)| 74.88|
|BLUEX (No Images) | 64.67|
|OAB Exams | 56.36|
|Assin2 RTE | 92.35|
|Assin2 STS | 80.24|
|FaQuAD NLI | 73.61|
|HateBR Binary | 75.97|
|PT Hate Speech Binary | 71.35|
|tweetSentBR | 68.80|