---
license: unknown
library_name: transformers
pipeline_tag: text-generation
---

# Deepseek-V2-Chat-GGUF

Quantized from [https://huggingface.co/deepseek-ai/DeepSeek-V2-Chat](https://huggingface.co/deepseek-ai/DeepSeek-V2-Chat)

Using llama.cpp fork: [https://github.com/fairydreaming/llama.cpp/tree/deepseek-v2](https://github.com/fairydreaming/llama.cpp/tree/deepseek-v2)

# Warning: These quants will not work with mainline llama.cpp — you must build llama.cpp from the fork linked above!

# How to use:

- Find the directory for the quant you want (e.g. `bf16` or `q8_0`)
- Download all files in that directory
- Run `merge.py` to combine the split files
- The merged GGUF should appear in the same directory
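The steps above might look like the following shell session. This is a sketch: the repo id, quant directory, and merged filename are placeholders — check the actual file listing for the real names.

```shell
# Build the deepseek-v2 fork of llama.cpp (required; mainline will not load these quants)
git clone -b deepseek-v2 https://github.com/fairydreaming/llama.cpp
cd llama.cpp
make

# Download one quant directory from this repo
# (<user>/Deepseek-V2-Chat-GGUF and the q8_0/ path are placeholders)
huggingface-cli download <user>/Deepseek-V2-Chat-GGUF --include "q8_0/*" --local-dir .

# Combine the split files into a single GGUF
cd q8_0
python merge.py
```

After merging, load the resulting GGUF with the fork's binaries as usual (e.g. `./main -m <merged>.gguf`).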

# Quants:
- bf16
- q8_0