2Jyq/llm4decompile-9b-v2-GGUF

This model was converted to GGUF format from LLM4Binary/llm4decompile-9b-v2 using llama.cpp. Refer to the original model card for more details on the model.
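Once a quantized GGUF file from this repo has been downloaded locally, it can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using the llama-cpp-python bindings; the quant filename and the prompt string are placeholders for illustration, not taken from the original model card, which should be consulted for the exact prompt format llm4decompile expects.

```python
# Minimal sketch: run a GGUF quant of this model with llama-cpp-python.
# Assumptions: the filename below is hypothetical; check the repo's file list
# for the real name, and follow the original model card for the prompt format.
from llama_cpp import Llama

llm = Llama(
    model_path="llm4decompile-9b-v2-Q4_K_M.gguf",  # path to a downloaded quant (hypothetical name)
    n_ctx=4096,        # context window; decompilation inputs can be long
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# Illustrative prompt only; see the original model card for the expected input format.
prompt = "# This is the assembly code:\n<asm here>\n# What is the source code?\n"

out = llm(prompt, max_tokens=512, temperature=0.0)
print(out["choices"][0]["text"])
```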

Format: GGUF
Model size: 8.83B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
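To fetch one of these quantized files programmatically, the huggingface_hub client can be used. This is a sketch only: the filename passed to the download call is a placeholder, since the exact GGUF filenames depend on the repo's file listing.

```python
# Sketch: download a quantized GGUF file from this repo with huggingface_hub.
# The filename below is a placeholder; list the repo files first to find the
# exact name of the quantization level you want (2- through 8-bit variants).
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "2Jyq/llm4decompile-9b-v2-GGUF"

# Inspect the available files to pick a quantization level.
print(list_repo_files(repo_id))

# Download a chosen quant (hypothetical filename shown).
local_path = hf_hub_download(repo_id=repo_id, filename="llm4decompile-9b-v2-Q4_K_M.gguf")
print(local_path)
```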

