Doctor-Shotgun committed
Commit 3c5a32a · 1 Parent(s): 9530d8e

Update README.md

Files changed (1): README.md (+21 -9)

README.md CHANGED
@@ -3,32 +3,44 @@ license: apache-2.0
 base_model: mistralai/Mistral-7B-v0.1
 tags:
 - generated_from_trainer
+- not-for-all-audiences
 model-index:
 - name: pippa-lora
   results: []
+datasets:
+- PygmalionAI/PIPPA
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
-# pippa-lora
+# mistral-v0.1-7b-pippa-metharme-lora
 
-This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
+This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the PIPPA dataset.
 It achieves the following results on the evaluation set:
 - Loss: 1.3494
 
 ## Model description
 
-More information needed
+An 8-bit LoRA trained on the [PygmalionAI/PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) dataset using axolotl.
 
 ## Intended uses & limitations
 
-More information needed
+PIPPA consists of just over 1 million lines of dialogue across roughly 26,000 conversations between users of the popular chatbot website Character.AI and its large language model, gathered through a community effort spanning several months. Over 1,000 unique personas simulating both real and fictional characters are represented in the dataset, allowing PIPPA, and LLMs fine-tuned on it, to adapt to many different roleplay domains.
+
+⚠️ CAUTION: PIPPA contains conversations, themes, and scenarios that can be considered "not safe for work" (NSFW) and/or heavily disturbing in nature. Models trained purely on PIPPA may tend to generate X-rated output. You have been warned.
 
 ## Training and evaluation data
 
-More information needed
+[PygmalionAI/PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA)
+```
+@misc{gosling2023pippa,
+      title={PIPPA: A Partially Synthetic Conversational Dataset},
+      author={Tear Gosling and Alpin Dale and Yinhe Zheng},
+      year={2023},
+      eprint={2308.05884},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
+```
 
 ## Training procedure
 
@@ -109,4 +121,4 @@ The following hyperparameters were used during training:
 - Transformers 4.34.0.dev0
 - Pytorch 2.0.1+cu118
 - Datasets 2.14.5
-- Tokenizers 0.14.0
+- Tokenizers 0.14.0
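
The updated card names an 8-bit LoRA trained with axolotl on Mistral-7B-v0.1. A minimal sketch of applying such an adapter at inference time with `transformers` and `peft` follows; the adapter repo id is an assumption based on the card's title, and the prompt template is inferred from the "metharme" in the name, so verify both against the actual repo before use.

```
# Minimal sketch: apply a LoRA adapter to Mistral-7B-v0.1 with peft.
# ADAPTER_ID is an assumption based on the card title; replace it with
# the actual Hugging Face repo id of this adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "mistralai/Mistral-7B-v0.1"
ADAPTER_ID = "Doctor-Shotgun/mistral-v0.1-7b-pippa-metharme-lora"  # assumed

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)

# The "metharme" in the name suggests the Metharme prompt template shown
# below; confirm the expected format against the training config.
prompt = "<|system|>Enter RP mode.<|user|>Hello!<|model|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The training data itself can be pulled with `load_dataset("PygmalionAI/PIPPA")` from the `datasets` library, subject to the NSFW caution in the card.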