DavidAU committed (verified)
Commit 62140ae · 1 Parent(s): 6957916

Update README.md

Files changed (1)
  1. README.md +27 -0
README.md CHANGED
@@ -193,6 +193,33 @@ Here is the standard LLAMA3 template:
 
 It is also known that the "Command-R" template will work too, and will result in radically different prose/output.
 
+ <B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
+
+ In "KoboldCpp", "oobabooga/text-generation-webui", or "Silly Tavern":
+
+ Set the "Smoothing_factor" to 1.5 to 2.5.
+
+ : in KoboldCpp -> Settings -> Samplers -> Advanced -> "Smooth_F"
+
+ : in text-generation-webui -> Parameters -> lower right.
+
+ : in Silly Tavern this is called "Smoothing"
+
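For reference, here is a minimal sketch of applying the same setting through KoboldCpp's API instead of the UI. It assumes a local instance on the default port (5001) and that your KoboldCpp build accepts a "smoothing_factor" field in the /api/v1/generate payload (check your version if unsure). The prompt text and values are placeholders.

```python
# Sketch: request a generation from a local KoboldCpp instance with
# "smoothing_factor" set in the suggested 1.5 - 2.5 range.
# Assumes KoboldCpp is running on its default port (5001) and that the
# build exposes "smoothing_factor" in the /api/v1/generate payload.
import requests

payload = {
    "prompt": "Continue the scene: the rain had not stopped for three days.",
    "max_length": 200,
    "temperature": 1.0,
    "smoothing_factor": 1.8,  # anywhere in the 1.5 to 2.5 range noted above
}

r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
r.raise_for_status()
print(r.json()["results"][0]["text"])
```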
+ NOTE: For "text-generation-webui"
+
+ -> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
+
+ Source versions (and config files) of my models are here:
+
+ https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
+
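As a rough sketch of the "download some config files" step for the "llama_HF" loader: the snippet below uses huggingface_hub to pull only the JSON config/tokenizer files from a source repo into the folder that holds your GGUF. The repo id and target folder are placeholders; substitute the source repo for your model from the collection linked above.

```python
# Sketch: fetch only the config/tokenizer files needed by the "llama_HF"
# loader in text-generation-webui, placing them next to the GGUF.
# SOURCE_REPO and TARGET_DIR are placeholders -- point them at the source
# repo for your model and at the model folder inside text-generation-webui.
from huggingface_hub import snapshot_download

SOURCE_REPO = "DavidAU/<source-model-name>"          # placeholder
TARGET_DIR = "text-generation-webui/models/<model>"  # placeholder

snapshot_download(
    repo_id=SOURCE_REPO,
    local_dir=TARGET_DIR,
    allow_patterns=["*.json", "tokenizer.model"],  # configs + tokenizer only, no weights
)
```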
+ OTHER OPTIONS:
+
+ - Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor").
+
+ - If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted.
+
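If you run the GGUF outside these front ends and have no smoothing control, the rep pen option above can be set directly; a minimal sketch with llama-cpp-python follows (the model path is a placeholder).

```python
# Sketch: the "increase rep pen" option when no smoothing control is
# available, using llama-cpp-python. The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="path/to/model.gguf", n_ctx=4096)

out = llm(
    "Write the opening paragraph of a storm at sea.",
    max_tokens=200,
    repeat_penalty=1.1,  # 1.1 to 1.15 as suggested above
)
print(out["choices"][0]["text"])
```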
 <B>Settings / Known Issue(s) and Fix(es):</b>
 
 Stable version fixed all known issues from V1.