roleplaiapp/ReaderLM-v2-Q6_K-GGUF

Repo: roleplaiapp/ReaderLM-v2-Q6_K-GGUF
Original Model: ReaderLM-v2
Organization: jinaai
Quantized File: readerlm-v2-q6_k.gguf
Quantization: GGUF
Quantization Method: Q6_K
Use Imatrix: False
Split Model: False
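
To fetch only the quantized file listed above, it can be downloaded directly from the Hub. A minimal sketch with huggingface_hub follows; the repo ID and filename come from this card's metadata.

```python
# Minimal sketch: download only the Q6_K GGUF file from the Hub.
# The repo ID and filename are taken from this card's metadata.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="roleplaiapp/ReaderLM-v2-Q6_K-GGUF",
    filename="readerlm-v2-q6_k.gguf",
)
print(model_path)  # local cache path to the .gguf file
```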

Overview

This is a GGUF Q6_K quantized version of jinaai/ReaderLM-v2.
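
Any GGUF-compatible runtime (llama.cpp, LM Studio, etc.) can load the file. Below is a minimal sketch using llama-cpp-python; the HTML-to-Markdown prompt wording follows the upstream ReaderLM-v2 usage, and the context size is an illustrative assumption rather than a setting prescribed by this card.

```python
# Minimal sketch: run the Q6_K file with llama-cpp-python
# (pip install llama-cpp-python huggingface-hub).
# n_ctx and the prompt wording are illustrative assumptions; the chat
# template is read from the GGUF metadata (qwen2-based).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="roleplaiapp/ReaderLM-v2-Q6_K-GGUF",
    filename="readerlm-v2-q6_k.gguf",
    n_ctx=8192,  # size the context window to your hardware
)

html = "<html><body><h1>Title</h1><p>Hello world.</p></body></html>"
result = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "Extract the main content from the given HTML and "
                   "convert it to Markdown format.\n\n" + html,
    }],
    max_tokens=512,
)
print(result["choices"][0]["message"]["content"])
```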

Quantization By

I often have idle A100 GPUs while building, testing, and training the RolePlai app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai

Model Details

Model size: 1.78B params
Architecture: qwen2
Precision: 6-bit (Q6_K)

Inference Providers

This model is not currently available via any of the supported Inference Providers, and it cannot be deployed to the HF Inference API: the model authors have turned it off explicitly.

Model tree for roleplaiapp/ReaderLM-v2-Q6_K-GGUF

Base model: jinaai/ReaderLM-v2
This model is one of 28 quantized versions derived from the base model.