Dolphin-2.9-llama3-8b-256k-GGUF

This is a quantized version of cognitivecomputations/dolphin-2.9-llama3-8b-256k, created using llama.cpp.
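As a minimal sketch of how a GGUF file from this repository can be run locally, the snippet below uses the llama-cpp-python bindings. The model filename and parameter values are assumptions for illustration; substitute a file actually published in this repository.

```python
# Minimal sketch: load and prompt a GGUF quant with llama-cpp-python.
# The model_path filename is hypothetical -- pick a real file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-2.9-llama3-8b-256k.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,        # context window; the base model supports long contexts
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

output = llm(
    "Explain GGUF quantization in one sentence.",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```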

Format: GGUF
Model size: 8.03B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
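To fetch one of these quantized files programmatically, a hedged sketch using huggingface_hub is shown below. The per-quant filename is an assumption about the naming scheme and should be confirmed against the repository's file listing.

```python
# Sketch: download a specific quant level from this repo via huggingface_hub.
# The filename below is a guess at the naming scheme -- check the repo files.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="QuantFactory/dolphin-2.9-llama3-8b-256k-GGUF",
    filename="dolphin-2.9-llama3-8b-256k.Q4_K_M.gguf",  # hypothetical filename
)
print(local_path)
```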


Model tree: QuantFactory/dolphin-2.9-llama3-8b-256k-GGUF is one of 5 quantized versions of cognitivecomputations/dolphin-2.9-llama3-8b-256k.