---
license: apache-2.0
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
tags:
- roleplay
- conversational
language:
- en
---
# Teleut 7b RP
![image/png](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/-yOYQdx9p3TjHLSq2RrRf.png)

A roleplay-focused LoRA finetune of Mistral Nemo Instruct. Methodology and hyperparams inspired by [SorcererLM](https://huggingface.co/rAIfle/SorcererLM-8x22b-bf16) and [Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush).

## Dataset
The worst mix of data you've ever seen. Like, seriously, you do not want to see the things that went into this model. It's bad.

## Recommended Settings
Chat template: Mistral v3-Tekken  
Recommended samplers (not the be-all and end-all; try some on your own, or see the sketch after this list for one way to apply the first preset):
- Temp 1.25 / MinP 0.1
- Temp 1.03 / TopK 200 / MinP 0.05 / TopA 0.2
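
If you run the model through stock `transformers`, the first preset maps onto `generate()` directly. A minimal sketch follows; the repo id is a hypothetical placeholder (substitute the actual path), `min_p` needs `transformers` >= 4.39, and TopA is a frontend-side sampler (e.g. SillyTavern) with no `transformers` equivalent, so the second preset can't be fully reproduced here.

```python
# Minimal sketch: generate with the Temp 1.25 / MinP 0.1 preset.
# Assumptions: the repo id below is hypothetical, and min_p requires
# transformers >= 4.39. TopA is not implemented in transformers'
# generate(), so the second preset is frontend-only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allura-org/Teleut-7b-RP"  # hypothetical id; use the actual repo/path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# apply_chat_template uses the template bundled with the tokenizer,
# which for this model should be Mistral v3-Tekken.
messages = [{"role": "user", "content": "You are a grumpy innkeeper. A stranger walks in."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    do_sample=True,
    temperature=1.25,
    min_p=0.1,
    max_new_tokens=256,
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

In frontends like SillyTavern, skip the code and just enter the preset values in the sampler panel.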

## Hyperparams
### General
- Epochs = 2
- LR = 6e-5
- LR Scheduler = Cosine
- Optimizer = Paged AdamW 8bit
- Effective batch size = 12
### LoRA
- Rank = 16
- Alpha = 32
- Dropout = 0.25 (Inspiration: [Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush)); a config sketch of these settings follows
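
As a reference point, here is roughly how the settings above map onto a `peft` + `transformers` training config. This is a sketch, not the actual training script: the per-device batch / gradient-accumulation split (only their product, 12, is stated) and the LoRA target modules are assumptions.

```python
# Sketch of the hyperparams above as a peft + transformers config.
# Assumptions: the batch split (2 x 6 = 12), the target modules, and the
# output path are guesses; only the values listed on the card are stated.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                          # Rank = 16
    lora_alpha=32,                 # Alpha = 32
    lora_dropout=0.25,             # Dropout = 0.25
    target_modules="all-linear",   # assumption; the card doesn't list modules
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="teleut-7b-rp-lora",   # hypothetical output path
    num_train_epochs=2,               # Epochs = 2
    learning_rate=6e-5,               # LR = 6e-5
    lr_scheduler_type="cosine",       # LR Scheduler = Cosine
    optim="paged_adamw_8bit",         # Optimizer = Paged AdamW 8bit
    per_device_train_batch_size=2,    # assumed split: 2 per device
    gradient_accumulation_steps=6,    # x 6 accumulation = effective 12
)
```

Note that 0.25 is unusually high for LoRA dropout (0.05 to 0.1 is more typical); that choice is the Slush-inspired part.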

## Credits
Humongous thanks to the people who created the data. I would credit you all, but that would be cheating ;)  
Big thanks to all Allura members, especially Toasty, for testing and emotional support ilya /platonic  
NO thanks to Infermatic. They suck at hosting models