Update README.md
README.md
CHANGED
@@ -109,17 +109,33 @@ This model also does not show an "GPTisms" (NO happy ever after, NO morality pol
 
 (see examples sections for different genres)
 
-Because of the nature of this merge, most attributes of each of the 3 models will be in this rebuilt
+Because of the nature of this merge, most attributes of each of the 3 models will be in this rebuilt 17.4B model, as opposed to the
 original 8B model, where one or more of each model's features and/or strengths may be reduced or overshadowed.
 
 Please report any issue(s) and/or feedback via the "Community tab".
 
-Please see the models used in this merge (links below in the "formula" section) for more information on
-what they "bring" to this merged 16.5B model.
-
 This is a LLAMA3 model and requires the Llama3 template, but it may work with other template(s); it has a maximum context of 8k / 8192.
 However, this can be extended using "rope" settings up to 32k.
 
+Here is the standard LLAMA3 template:
+
+<PRE>
+{
+  "name": "Llama 3",
+  "inference_params": {
+    "input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
+    "input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
+    "pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
+    "pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
+    "pre_prompt_suffix": "<|eot_id|>",
+    "antiprompt": [
+      "<|start_header_id|>",
+      "<|eot_id|>"
+    ]
+  }
+}
+</PRE>
+
 It is also known that the "Command-R" template will work too, and will result in radically different prose/output.
 
 <B>Settings / Known Issue(s) and Fix(es):</b>
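The template JSON added in this commit looks like an LM Studio-style preset, but the same prefix/suffix strings can be applied by hand when prompting the model directly. A minimal sketch, assuming a single system message plus one user turn (the example messages and the `build_prompt` helper are illustrative, not part of the preset):

```python
# Hand-assemble a Llama3-format prompt from the template's prefix/suffix strings.
# Assumption: one system message and one user turn, as the preset above implies.
PRE_PROMPT_PREFIX = "<|start_header_id|>system<|end_header_id|>\n\n"
PRE_PROMPT_SUFFIX = "<|eot_id|>"
INPUT_PREFIX = "<|start_header_id|>user<|end_header_id|>\n\n"
INPUT_SUFFIX = "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"

def build_prompt(system_msg: str, user_msg: str) -> str:
    """Wrap the messages in Llama3 chat markup, ending where the model should reply."""
    return (PRE_PROMPT_PREFIX + system_msg + PRE_PROMPT_SUFFIX
            + INPUT_PREFIX + user_msg + INPUT_SUFFIX)

prompt = build_prompt("You are a helpful AI assistant.", "Continue the scene.")
print(prompt.endswith("<|start_header_id|>assistant<|end_header_id|>\n\n"))  # True
```

The trailing assistant header is what cues the model to generate its reply; the `antiprompt` entries in the preset stop generation when the model emits the next header or end-of-turn token.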
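On the "rope" extension mentioned above: for plain linear RoPE scaling, the stretch factor is simply target context divided by native context, and some loaders (e.g. llama.cpp's `--rope-freq-scale`) take it in reciprocal form. A sketch of the arithmetic, assuming linear scaling (other rope variants such as YaRN use different parameters):

```python
# Linear RoPE scaling factor for stretching the native 8k context to 32k.
NATIVE_CTX = 8192
TARGET_CTX = 32768  # 32k

stretch = TARGET_CTX / NATIVE_CTX   # how far positions are stretched: 4.0
rope_freq_scale = 1.0 / stretch     # reciprocal form used by some loaders: 0.25

print(stretch, rope_freq_scale)  # 4.0 0.25
```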