N8Programs committed
Commit 7371a57
Parent(s): 3e87bcf
Create README.md

README.md ADDED
@@ -0,0 +1,46 @@
---
license: apache-2.0
datasets:
- N8Programs/CreativeGPT
language:
- en
pipeline_tag: text-generation
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/647e98971a1fcad2fdc55e61/Fv6NrdPI0U6AZmnafez3T.png)

# Model Card for Coxcomb

A creative writing model, using the superb [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) as a base, finetuned on GPT-4 outputs to a diverse variety of prompts. It in no way competes with GPT-4 - its quality of writing is below GPT-4's - and it is primarily meant to be run in offline, local environments.

On creative writing benchmarks, it consistently ranks higher than most other models - [it scores 72.37](https://eqbench.com/creative_writing.html), beating goliath-120b, yi chat, and mistral-large.

It is designed for **single-shot interactions**. You ask it to write a story, and it does. It is NOT designed for chat purposes, roleplay, or follow-up questions.

## Model Details

Trained with a 40M-parameter LoRA on [N8Programs/CreativeGPT](https://huggingface.co/datasets/N8Programs/CreativeGPT) for 3 epochs. Slightly overfit (for much better benchmark results).

### Model Description

- **Developed by:** N8Programs
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)

## Uses

Not trained on NSFW (sexual or violent) content, but it will generate it when asked - it has not been trained with refusals. If you wish to ADD refusal behavior, further tuning or filtering will be necessary.

### Direct Use

GGUFs available at [Coxcomb-GGUF](https://huggingface.co/N8Programs/Coxcomb-GGUF).
Should work with transformers (not officially tested).

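As a quick-start illustration (not part of the original card), the sketch below loads the model with transformers and runs a single-shot story prompt. The repo id `N8Programs/Coxcomb`, the plain-text prompt format, and the sampling settings are assumptions - check the model page for the actual values.

```python
# Minimal sketch, untested: single-shot story generation with transformers.
# "N8Programs/Coxcomb" is an assumed repo id; prompt format and sampling
# settings are illustrative, not taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "N8Programs/Coxcomb"  # assumption - confirm on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 7B model in fp16
    device_map="auto",
)

# Single-shot use: one prompt in, one story out (no chat, no follow-ups).
prompt = "Write a short story about a lighthouse keeper who befriends a storm."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
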
## Bias, Risks, and Limitations

Tends to generate stories with happy, trite endings. Most LLMs do this. It's very hard to get them not to.

## Training Details

Trained on a single M3 Max in roughly 12 hours.
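
The card does not state the training framework (a single M3 Max suggests an MLX-style workflow), so the sketch below is an illustration only: one PEFT LoRA configuration that lands near the stated ~40M trainable parameters on a Mistral-7B-class base. The rank, target modules, dropout, and framework are assumptions, not the author's settings.

```python
# Illustrative only - shows how a ~40M-parameter LoRA on a 7B Mistral-style
# model can be configured with PEFT. Not the configuration used for Coxcomb.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("senseable/WestLake-7B-v2")

lora = LoraConfig(
    r=16,                 # rank chosen so the adapter comes out near 40M params
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()
# -> roughly 42M trainable parameters out of ~7.2B total (about 0.6%).
# Training itself (3 epochs on N8Programs/CreativeGPT) would then use any
# standard causal-LM fine-tuning loop, e.g. transformers.Trainer or TRL.
```
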