OwenArli committed on
Commit 58ea497 · verified · 1 Parent(s): 5d127b9

Update README.md

Files changed (1): README.md +62 -3
README.md CHANGED
---
license: apache-2.0
---

Qwen2.5-32B-ArliAI-RPMax-v1.3
=====================================

RPMax v1 Series Overview
------------------------

v1.1 = 2B | 3.8B | 8B | 9B | 12B | 20B | 22B | 70B

v1.2 = 8B | 12B | 70B

v1.3 = 32B

RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets, with a focus on variety and deduplication. The models are designed to be highly creative and non-repetitive: no two entries in the dataset share the same characters or situations, which keeps the model from latching onto a single personality and leaves it able to understand and act appropriately for any character or situation.

Many RPMax users have mentioned that these models do not feel like any other RP models; they have a different writing style and generally do not feel in-bred.

You can access the model at https://arliai.com, and we also have a models ranking page at https://www.arliai.com/models-ranking

Ask questions in our new Discord server https://discord.com/invite/t75KbPgwhk or on our subreddit https://www.reddit.com/r/ArliAI/

Model Description
-----------------

Qwen2.5-32B-ArliAI-RPMax-v1.3 is a variant made from the Qwen2.5-32B-Instruct model.

Let us know what you think of the model! The different parameter versions are based on different base models, so each might behave slightly differently in its own way.

The v1.3 models are trained with updated software and configs, such as the updated transformers library that fixes the gradient-checkpointing bug, which should help the model learn better. This version also uses RSLoRA+ for training, which helps the model learn even better.

Specs
-----

* Context Length: 128K
* Parameters: 32B

Training Details
----------------

* Sequence Length: 8192
* Training Duration: approximately 4 days on 2x3090Ti
* Epochs: 1 epoch, to minimize repetition sickness
* LoRA: 64-rank, 64-alpha, resulting in ~2% trainable weights (see the sketch below)
* Learning Rate: 0.00001
* Gradient accumulation: a very low 32, for better learning
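
The README does not say which training framework was used; as a rough, non-authoritative sketch, here is how the hyperparameters above could be expressed with Hugging Face `transformers` and `peft`. Note that `peft` exposes rank-stabilized LoRA via `use_rslora`; the exact "RSLoRA+" setup and the target modules are assumptions.

```python
# Illustrative only: maps the listed hyperparameters onto a peft LoRA config.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-32B-Instruct", torch_dtype=torch.bfloat16
)

lora_config = LoraConfig(
    r=64,             # 64-rank
    lora_alpha=64,    # 64-alpha
    use_rslora=True,  # rank-stabilized LoRA scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed, not stated
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # should report roughly ~2% trainable weights

# Pass these to a Trainer together with a dataset packed to 8192 tokens.
training_args = TrainingArguments(
    output_dir="rpmax-v1.3",
    learning_rate=1e-5,              # 0.00001
    gradient_accumulation_steps=32,  # kept low for better learning
    num_train_epochs=1,              # single epoch against repetition sickness
    gradient_checkpointing=True,     # relies on the fixed transformers behavior
    bf16=True,
)
```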

Quantization
------------

The model is available in the following formats:

* FP16: https://huggingface.co/ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
* GGUF: https://huggingface.co/ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3-GGUF
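
For local inference on the GGUF quants, a minimal sketch with llama-cpp-python might look like the following; the quant filename is a placeholder, so pick an actual file from the GGUF repository above.

```python
# Minimal local-inference sketch using llama-cpp-python; the filename below
# is hypothetical -- choose a real quant file from the GGUF repo linked above.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-32B-ArliAI-RPMax-v1.3-Q4_K_M.gguf",  # placeholder name
    n_ctx=8192,  # the model supports up to 128K context if you have the memory
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative roleplay partner."},
        {"role": "user", "content": "Introduce your character."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```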

Suggested Prompt Format
-----------------------

ChatML Chat Format:

    <|im_start|>system
    Provide some context and/or instructions to the model.
    <|im_end|>
    <|im_start|>user
    The user’s message goes here
    <|im_end|>
    <|im_start|>assistant
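
Since this is standard ChatML, the tokenizer's bundled chat template should produce the format above; here is a small sketch assuming the Hugging Face transformers library.

```python
# Builds the ChatML prompt shown above via the tokenizer's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3")

messages = [
    {"role": "system", "content": "Provide some context and/or instructions to the model."},
    {"role": "user", "content": "The user's message goes here"},
]

# add_generation_prompt=True appends the trailing <|im_start|>assistant turn.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```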