Text Generation
GGUF
English
mixture of experts
Mixture of Experts
4x7B
mistral MOE
uncensored
creative
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
science fiction
romance
all genres
story
writing
vivid prosing
vivid writing
fiction
roleplaying
bfloat16
swearing
rp
horror
mergekit
Inference Endpoints
conversational
Update README.md
README.md
CHANGED
@@ -35,8 +35,6 @@ tags:
 pipeline_tag: text-generation
 ---
 
-(2 large examples below (1,2,3 and 4 experts output shown per example))
-
 <B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. HORROR. Swearing. UNCENSORED... humor, romance, fun. </B>
 
 <h2>Mistral-MOE-4X7B-Dark-MultiVerse-24B-GGUF</h2>
@@ -118,6 +116,9 @@ You can set the number of experts in LMStudio (https://lmstudio.ai) at the "load
 
 For Text-Generation-Webui (https://github.com/oobabooga/text-generation-webui) you set the number of experts at the loading screen page.
 
+For KoboldCPP (https://github.com/LostRuins/koboldcpp) Version 1.8+, on the load screen, click on "TOKENS";
+you can set experts on this page, then launch the model.
+
 For server.exe / Llama-server.exe (Llamacpp - https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md )
 add the following to the command line to start the "llamacpp server" (CLI):
 
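The diff is truncated before the command itself appears. As a sketch only (the exact line in this README is not shown here), a llama.cpp server launch that overrides the number of active experts would look something like the line below; it assumes llama.cpp's --override-kv option and the llama.expert_used_count GGUF key used by llama-architecture MoE models, and the model filename is a placeholder:

llama-server -m Mistral-MOE-4X7B-Dark-MultiVerse-24B-Q4_K_M.gguf --override-kv llama.expert_used_count=int:3

Here int:3 activates three of the four experts per token; change the number to 1-4 to trade speed against output quality.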