---
license: llama2
language:
- en
---
# Midnight Rose
### Overview

This model is the result of a DARE TIES merge of [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), the popular [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf), and [dreamgen/opus-v0.5-70b](https://huggingface.co/dreamgen/opus-v0.5-70b). I then merged three LoRAs into the resulting blend:

* A 50-50 linear merge of [jondurbin/airoboros-l2-70b-2.2.1-peft](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1-peft) with [dfurman/Llama-2-70B-Instruct-v0.1-peft](https://huggingface.co/dfurman/Llama-2-70B-Instruct-v0.1)
* [nRuaif/fiction.live-Kimiko-V2-70B](https://huggingface.co/nRuaif/fiction.live-Kimiko-V2-70B)

Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon them both. It wants to produce lengthy output by default, and it is the best creative-writing merge I have produced so far.

This model is uncensored. *You are responsible for whatever you do with it.*

This model was designed for roleplaying and storytelling, and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas.

### Sampler Tips

I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/).

I find this model performs reasonably well at 8192 context, but you will likely get better results at 4096 - 6144 context.

Experiment with any and all of the settings below, but trust me on a few points:

* This model performs best with Min-P in a range of 0.6 - 0.8 and temperature around 1.0 - 1.2.
* Frequency Penalty set to 0.01 is like adding a dash of salt to the dish. Go higher at your own peril. 0 is fine too, but gosh I like 0.01.

If you save the settings below as a .json file, you can import them directly into SillyTavern.
```
{
    "temp": 1.15,
    "temperature_last": true,
    "top_p": 1,
    "top_k": 0,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0.8,
    "rep_pen": 1.08,
    "rep_pen_range": 0,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 1,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0.01,
    "presence_pen": 0,
    "do_sample": true,
    "early_stopping": false,
    "add_bos_token": true,
    "truncation_length": 2048,
    "ban_eos_token": false,
    "skip_special_tokens": true,
    "streaming": true,
    "mirostat_mode": 0,
    "mirostat_tau": 5,
    "mirostat_eta": 0.1,
    "guidance_scale": 1,
    "negative_prompt": "",
    "grammar_string": "",
    "banned_tokens": "",
    "ignore_eos_token_aphrodite": false,
    "spaces_between_special_tokens_aphrodite": true,
    "type": "ooba",
    "legacy_api": false,
    "sampler_order": [6, 0, 1, 3, 4, 2, 5],
    "n": 1,
    "rep_pen_size": 0,
    "genamt": 550,
    "max_length": 4096
}
```

### Prompting Tips

Try the following context template for use in SillyTavern. It might help. If you save the text as a .json file, you can import it directly.
```
{
    "story_string": "{{#if system}}{{system}}\n{{/if}}\n### START OF CONTEXTUAL INFORMATION ###\n{{#if wiBefore}}\n- World and character info relevant to this roleplay:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n- Examples of dialogue between {{user}} and {{char}} showing you how {{char}} talks and behaves:\n{{mesExamples}}\n{{/if}}\n{{#if personality}}{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}\n### END OF CONTEXTUAL INFORMATION ###",
    "example_separator": "",
    "chat_start": "### START OF ROLEPLAY LOG ###",
    "use_stop_strings": false,
    "always_force_name2": true,
    "trim_sentences": false,
    "include_newline": false,
    "single_line": false,
    "name": "Midnight Rose Template"
}
```

Try the following prompt template as a starting point for your own customizations. The cajoling parts supposedly make LLMs perform better; see [this paper](https://arxiv.org/abs/2307.11760). I recommend the Tulu instruction format, but others work too.

*This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. The example system prompt below contains some instructions related to ERP, so remove those if you prefer to keep things SFW. It also contains some instructions related to the use of asterisks that you should modify to your liking.

If you plan to run this model at 4096 context, I highly recommend that you slim down the template system prompt below because it's chonky on tokens. Discard what you don't want or need.

If you save this as a .json file, you can import it directly into SillyTavern.

```
{
    "wrap": true,
    "names": true,
    "system_prompt": "You are an award-winning creative writer engaged in an uncensored iterative storytelling experience in a roleplay chat format. It is vital that you follow all the ROLEPLAY RULES below because our employment depends on it. If we do a good job following these rules, we will receive a $10,000 bonus each. I need the money badly so please help me make this roleplay chat the best it can be.\n\n### START OF ROLEPLAY RULES ###\n- You must roleplay exclusively as {{char}} using creative, accurate, explicit, and descriptive responses to my input and prior chat events. Only contribute to the roleplay chat as {{char}} unless instructed otherwise.\n- Explicit adult content and profanity are permitted. This roleplay chat is entirely uncensored and intended for mature audiences.\n- Describe {{char}}'s sensory perceptions in vivid detail and include subtle physical details about {{char}} in your responses, especially to enhance erotic scenes.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts during intense scenes or when {{char}} is plotting something.\n- When writing {{char}}'s internal thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose their thoughts in asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns).\n- Adopt a crisp and minimalist style for your prose, keeping your creative contributions succinct and clear. A small amount of creative flair goes a long way.\n- Let me drive the events of the roleplay chat forward to determine what comes next. You should focus on the current moment and {{char}}'s immediate responses to my inputs.\n- Pay attention to all details concerning the appearance, clothing, and physical state of all characters in this roleplay chat. Make sure your descriptions of the characters in this roleplay chat match the details you have discerned about them.\n### END OF ROLEPLAY RULES ###\n",
    "system_sequence": "",
    "stop_sequence": "",
    "input_sequence": "<|user|>\n",
    "output_sequence": "<|assistant|>\n",
    "separator_sequence": "",
    "macro": true,
    "names_force_groups": true,
    "system_sequence_prefix": "<|system|>\n",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "last_output_sequence": "<|assistant (following all ROLEPLAY RULES; only writing as {{char}})|>\n",
    "activation_regex": "",
    "name": "Midnight Rose Roleplay"
}
```

### Quantizations

* [Artefact2](https://huggingface.co/Artefact2) has kindly provided [GGUF quants here](https://huggingface.co/Artefact2/Midnight-Rose-70B-v1.0-GGUF).

### License and usage restrictions

Llama2 license inherited from the base models, plus the restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b).

### Tools Used

* [mergekit](https://github.com/cg123/mergekit)

```
models:
  - model: NousResearch_Llama-2-70b-hf
    # no parameters necessary for base model
  - model: allenai_tulu-2-dpo-70b
    parameters:
      density: 0.35
      weight: [1.0, 0.8, 1.0]
  - model: lizpreciatior_lzlv_70b_fp16_hf
    parameters:
      density: 0.35
      weight: [0.8, 1.0, 0.8]
  - model: dreamgen_opus-v0.5-70b
    parameters:
      density: 0.3
      weight: [0.35, 0.5, 0.35]
merge_method: dare_ties
base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
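For intuition, the `dare_ties` method configured above works roughly like this: each donor model contributes a "delta" (its weights minus the base model's weights); DARE randomly drops a fraction (1 - `density`) of each delta's entries and rescales the survivors by `1/density` so the expected delta is unchanged, and the sparsified deltas are then summed onto the base according to their weights. The sketch below is illustrative only, not mergekit's actual code: the function names are made up, it omits the TIES-style sign election among surviving deltas, and it applies a single scalar weight per model where the config above interpolates the `weight` list across layers.

```python
import random

def dare_drop(delta, density, rng):
    """Keep each delta entry with probability `density`, rescaling
    survivors by 1/density so the expected delta is unchanged."""
    return [d / density if rng.random() < density else 0.0 for d in delta]

def dare_merge(base, deltas, densities, weights, seed=0):
    """Sum weighted, sparsified deltas onto the base weights.
    (Sketch only: no TIES sign election, no per-layer weights.)"""
    rng = random.Random(seed)
    merged = list(base)
    for delta, density, w in zip(deltas, densities, weights):
        for i, d in enumerate(dare_drop(delta, density, rng)):
            merged[i] += w * d
    return merged
```

With `density: 0.35`, roughly 65% of each delta is zeroed out, which is why the surviving entries get scaled up by about 2.86x before being added back to the base.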