Tags: Text Generation · Transformers · Safetensors · mixtral · Mixture of Experts · frankenmoe · Merge · mergekit · lazymergekit · mlabonne/AlphaMonarch-7B · FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B · SanjiWatsuki/Kunoichi-DPO-v2-7B · OmnicromsBrain/NeuralStar-7b-Lazy · conversational · Eval Results · text-generation-inference · Inference Endpoints
OmnicromsBrain committed • Commit a76b177 • 1 parent: 7a7b92f

Update README.md
README.md CHANGED
@@ -140,9 +140,10 @@ NeuralStar_AlphaWriter_4x7b is a Mixture of Experts (MoE) made with the followin
 
 ## ⚡ Quantized Models
 
-
+Special thanks to MRadermacher for the static and iMatrix quantized models
 
 **.GGUF** https://huggingface.co/mradermacher/NeuralStar_AlphaWriter_4x7b-GGUF
+**iMatrix** https://huggingface.co/mradermacher/NeuralStar_AlphaWriter_4x7b-i1-GGUF
 
 Q4_K_M and Q5_K_M .gguf [**Here**](https://huggingface.co/OmnicromsBrain/NeuralStar_AlphaWriter_4x7b-GGUF) created with [mlabonne/Autogguf](https://colab.research.google.com/drive/1P646NEg33BZy4BfLDNpTz0V0lwIU3CHu)
 
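For readers who want to try the Q4_K_M/Q5_K_M quants linked above locally, here is a minimal sketch using llama-cpp-python; the `*Q4_K_M.gguf` filename glob, context size, and prompt are illustrative assumptions, not part of the original card.

```python
# Minimal sketch (not from the model card): load one of the Q4_K_M GGUF quants
# linked above with llama-cpp-python. The filename glob is an assumption about
# how the file is named inside the OmnicromsBrain GGUF repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="OmnicromsBrain/NeuralStar_AlphaWriter_4x7b-GGUF",
    filename="*Q4_K_M.gguf",  # assumed naming; check the repo listing for the exact file
    n_ctx=4096,               # illustrative context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a one-paragraph opening scene for a mystery novel."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

The same GGUF files should also work with any other llama.cpp-based runtime; the iMatrix quants from mradermacher follow the same loading pattern, only the repo_id differs.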