# Llama-PLLuM-70B-chat GGUF Quantizations by Nondzu
DISCLAIMER: This is a quantized version of the existing model Llama-PLLuM-70B-chat. I am not the author of the original model; I only host the quantized versions and do not take any responsibility for them.
This repository contains GGUF quantized versions of the Llama-PLLuM-70B-chat model. All quantizations were performed with llama.cpp (release b4765). The quantized models can be run in LM Studio or any other llama.cpp-based project.
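For example, a downloaded file can be loaded directly with the `llama-cli` binary that ships with llama.cpp. A minimal sketch (the filename and runtime settings below are examples; adjust them to your hardware):

```bash
# Run the Q4_K_M quant interactively with llama.cpp
#   -m    path to the downloaded GGUF file
#   -ngl  number of layers to offload to the GPU (use 0 for CPU-only)
#   -c    context window size
#   -cnv  chat (conversation) mode using the model's built-in chat template
llama-cli -m ./Llama-PLLuM-70B-chat-Q4_K_M.gguf -ngl 40 -c 4096 -cnv
```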
## Prompt Format
The prompt template is not documented in this upload. Llama-PLLuM-70B-chat is built on Llama 3.1, so the standard Llama 3 chat format shown below is a reasonable starting point; verify it against the original model card.
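A sketch of the assumed Llama 3 chat template (not confirmed by the original authors):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```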
## Available Files
Below is a list of available quantized model files along with their quantization type, file size, whether the file is split, and a short description.
| Filename | Quant Type | File Size | Split | Description |
|---|---|---|---|---|
| Llama-PLLuM-70B-chat-Q2_K.gguf | Q2_K | 25 GB | No | Very low quality but surprisingly usable. |
| Llama-PLLuM-70B-chat-Q3_K.gguf | Q3_K | 32 GB | No | Low quality, suitable for setups with very limited RAM. |
| Llama-PLLuM-70B-chat-Q3_K_L.gguf | Q3_K_L | 35 GB | No | The largest Q3 variant; usable quality when RAM is limited. |
| Llama-PLLuM-70B-chat-Q3_K_M.gguf | Q3_K_M | 32 GB | No | Moderate Q3 quality; a good compromise for limited-RAM setups. |
| Llama-PLLuM-70B-chat-Q3_K_S.gguf | Q3_K_S | 29 GB | No | The smallest Q3 variant; lower quality with improved space efficiency. |
| Llama-PLLuM-70B-chat-Q4_K.gguf | Q4_K | 40 GB | No | Good quality for standard use. |
| Llama-PLLuM-70B-chat-Q4_K_M.gguf | Q4_K_M | 40 GB | No | Default choice for most use cases – recommended. |
| Llama-PLLuM-70B-chat-Q4_K_S.gguf | Q4_K_S | 38 GB | No | Slightly lower quality with enhanced space savings – recommended when size is a priority. |
| Llama-PLLuM-70B-chat-Q5_0.gguf | Q5_0 | 46 GB | No | Legacy 5-bit format; very high quality. |
| Llama-PLLuM-70B-chat-Q5_K.gguf | Q5_K | 47 GB | No | Very high quality – recommended for demanding use cases. |
| Llama-PLLuM-70B-chat-Q5_K_M.gguf | Q5_K_M | 47 GB | No | High quality – recommended. |
| Llama-PLLuM-70B-chat-Q5_K_S.gguf | Q5_K_S | 46 GB | No | High quality, offered as an alternative with minimal quality loss. |
| Llama-PLLuM-70B-chat-Q4_0.gguf | Q4_0 | 38 GB | No | Legacy format offering online repacking for ARM/AVX CPU inference. |
| Llama-PLLuM-70B-chat-Q6_K.gguf | Q6_K | 54 GB | Yes | Very high quality with quantized embed/output weights. Split into 2 parts due to file size.<br>• Part 1: Q6_K-00001-of-00002.gguf (37 GB)<br>• Part 2: Q6_K-00002-of-00002.gguf (18 GB) |
| Llama-PLLuM-70B-chat-Q8_0.gguf | Q8_0 | 70 GB | Yes | Maximum-quality quantization. Available either as a single file or split into 2 parts.<br>• Part 1: Q8_0.gguf-00001-of-00002.gguf (37 GB)<br>• Part 2: Q8_0.gguf-00002-of-00002.gguf (34 GB) |
*Files marked as "split" must be downloaded in full (all parts) to obtain the complete quantized model.
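Recent llama.cpp builds can load a split model directly when given the first shard, so merging is usually unnecessary; if you prefer a single file, the `llama-gguf-split` tool bundled with llama.cpp can merge the parts. A sketch for the Q6_K shards (the exact shard filenames are illustrative; check the repository file listing):

```bash
# Load a split model by pointing at the first shard; the remaining parts are picked up automatically
llama-cli -m ./Llama-PLLuM-70B-chat-Q6_K-00001-of-00002.gguf -cnv

# Or merge the shards into a single GGUF file
llama-gguf-split --merge ./Llama-PLLuM-70B-chat-Q6_K-00001-of-00002.gguf ./Llama-PLLuM-70B-chat-Q6_K.gguf
```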
## Downloading Using the Hugging Face CLI
First, ensure you have the Hugging Face CLI installed:
`pip install -U "huggingface_hub[cli]"`
Then, target a specific file to download:
`huggingface-cli download Nondzu/Llama-PLLuM-70B-chat-GGUF --include "Llama-PLLuM-70B-chat-Q4_K_M.gguf" --local-dir ./`
For files larger than 50 GB that are split into multiple parts, use a wildcard to download all parts at once:
`huggingface-cli download Nondzu/Llama-PLLuM-70B-chat-GGUF --include "Llama-PLLuM-70B-chat-Q8_0/*" --local-dir ./`
You can specify a new local directory (e.g., `Llama-PLLuM-70B-chat-Q8_0`) or download the files directly into the current directory (`./`).
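For example, to keep the split Q8_0 shards in their own folder (the directory name here is just an illustration):

```bash
huggingface-cli download Nondzu/Llama-PLLuM-70B-chat-GGUF --include "Llama-PLLuM-70B-chat-Q8_0/*" --local-dir ./Llama-PLLuM-70B-chat-Q8_0
```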
## Base model

CYFRAGOVPL/Llama-PLLuM-70B-chat