---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
language:
- en
- zh
pipeline_tag: text-generation
tags:
- chat
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/A9n8EJBDQziJWnXhOYeEE.png)

# magnum-32b-v2 - EXL2 4.7bpw rpcal mk2

This is a 4.7bpw EXL2 quant of [anthracite-org/magnum-32b-v2](https://huggingface.co/anthracite-org/magnum-32b-v2).

This quant was made with exllamav2-0.1.8, using the [Bluemoon-Light dataset](https://huggingface.co/datasets/ParasiticRogue/Bluemoon-Light) as calibration data for RP use.
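
For reference, the conversion step looks roughly like the sketch below. This is hypothetical, not the exact command used: all paths and file names are placeholders, and it assumes exllamav2's repo-root `convert.py` as of 0.1.8 (its location and flags can differ between versions).

```py
# Hypothetical sketch of the EXL2 conversion step; all paths are placeholders.
import subprocess

subprocess.run([
    "python", "convert.py",
    "-i", "/models/magnum-32b-v2",                # source model (fp16)
    "-o", "/tmp/exl2-workdir",                    # scratch/working directory
    "-cf", "/models/magnum-32b-v2-exl2-4.7bpw",   # compiled quant output
    "-b", "4.7",                                  # target bits per weight
    "-c", "bluemoon-light.parquet",               # RP calibration dataset
], check=True)
```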

I briefly tested this quant in a few random RPs (including some with over 8k context) and it appears to work fine.
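
If you haven't used EXL2 quants before, here is a minimal loading sketch with the exllamav2 Python API; the model directory and sampler values are placeholders, not recommendations.

```py
# Minimal sketch: load the quant and generate with exllamav2.
# Model path and sampler settings below are placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/magnum-32b-v2-exl2-4.7bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocated while the model loads
model.load_autosplit(cache)               # split across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

prompt = "<|im_start|>user\nHi there!<|im_end|>\n<|im_start|>assistant\n"
print(generator.generate_simple(prompt, settings, num_tokens=200))
```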

## Prompt Templates

The model uses the ChatML format, as shown in the original readme below.

### Original readme below

---

This is the third in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [Qwen1.5 32B](https://huggingface.co/Qwen/Qwen1.5-32B).

## Prompting
The model has been instruct-tuned with ChatML formatting. A typical input looks like this:

```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
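
For convenience, here is a small hypothetical helper (not part of the model's tooling) that assembles this format from a message list:

```py
# Hypothetical helper: build a ChatML prompt string from role/content messages,
# ending with an open assistant turn for the model to complete.
def build_chatml_prompt(messages: list[dict]) -> str:
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    return prompt + "<|im_start|>assistant\n"

print(build_chatml_prompt([
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]))
```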

## Credits
- Stheno dataset (filtered)
- [NobodyExistsOnTheInternet/claude_3.5s_single_turn_unslop_filtered](https://huggingface.co/datasets/NobodyExistsOnTheInternet/claude_3.5s_single_turn_unslop_filtered)
- [NobodyExistsOnTheInternet/PhiloGlanSharegpt](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PhiloGlanSharegpt)
- [NobodyExistsOnTheInternet/Magpie-Reasoning-Medium-Subset](https://huggingface.co/datasets/NobodyExistsOnTheInternet/Magpie-Reasoning-Medium-Subset)
- [kalomaze/Opus_Instruct_25k](https://huggingface.co/datasets/kalomaze/Opus_Instruct_25k)
- [Nopm/Opus_WritingStruct](https://huggingface.co/datasets/Nopm/Opus_WritingStruct)
- [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned) (a ~16k-row subset)

This model has been a team effort, and the credit goes to all members of Anthracite.

## Training
Training was done for 2 epochs as a full-parameter fine-tune on 8x [NVIDIA H100 Tensor Core](https://www.nvidia.com/en-us/data-center/h100/) GPUs.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety
...