---
library_name: transformers
pipeline_tag: text-generation
tags:
- 24b
- 4x7b
- 8-bit
- Q8_0
- dark
- enhanced32
- gguf
- llama-cpp
- mistral
- moe
- multiverse
- text-generation
- uncensored
---

# roleplaiapp/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf-Q8_0-GGUF

**Repo:** `roleplaiapp/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf-Q8_0-GGUF`
**Original Model:** `Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf`
**Quantized File:** `M-MOE-4X7B-Dark-MultiVerse-UC-E32-24B-D_AU-Q8_0.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q8_0`

## Overview
This is a GGUF Q8_0 quantized version of Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf.

## Quantization By
I often have idle GPUs while building and testing the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)
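
## Usage
Because this is a Q8_0 GGUF file, it can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using the `llama-cpp-python` bindings together with `huggingface_hub` to fetch the quantized file named on this card; the context size, sampling settings, and prompt are illustrative assumptions, not recommendations from the quantizer.

```python
# Minimal sketch: load the Q8_0 GGUF with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`.
# Repo and file names come from this card; all other values
# (n_ctx, n_gpu_layers, sampling) are illustrative guesses.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="roleplaiapp/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf-Q8_0-GGUF",
    filename="M-MOE-4X7B-Dark-MultiVerse-UC-E32-24B-D_AU-Q8_0.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers if built with GPU support
)

output = llm(
    "Write a short scene set in a rain-soaked noir city.",
    max_tokens=256,
    temperature=0.8,
)
print(output["choices"][0]["text"])
```

Note that Q8_0 stores roughly 8 bits per weight, so a 24B-parameter model needs on the order of 25 GB of RAM or VRAM to load; plan hardware accordingly.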