---
license: unknown
library_name: transformers
pipeline_tag: text-generation
---

# Deepseek-V2-Chat-GGUF

Quantized from [https://huggingface.co/deepseek-ai/DeepSeek-V2-Chat](https://huggingface.co/deepseek-ai/DeepSeek-V2-Chat)

Using llama.cpp fork: [https://github.com/fairydreaming/llama.cpp/tree/deepseek-v2](https://github.com/fairydreaming/llama.cpp/tree/deepseek-v2)

# Warning: This will not work unless you compile llama.cpp from the repo provided!
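
A minimal build sketch for the fork, assuming a default CPU-only `make` build of the `deepseek-v2` branch (see the fork's README for platform-specific or GPU backend options):

```
# default CPU build; consult the fork's README for GPU backend flags
git clone --branch deepseek-v2 https://github.com/fairydreaming/llama.cpp
cd llama.cpp
make
```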

# How to use:

- Open the directory for the quant you want
- Download all of the split files in it
- Run merge.py (see the sketch below)
- The merged GGUF will appear in the same directory
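
A sketch of the download-and-merge step using `huggingface-cli`; the repository id and quant directory name below are placeholders (adjust them to this repo and the quant you picked), and check merge.py itself for any arguments it may expect:

```
# <this-repo-id> and q4_k_m/ are placeholders, not the exact names used here
huggingface-cli download <this-repo-id> --include "q4_k_m/*" --local-dir .
cd q4_k_m
python merge.py
```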

# Quants:
| Quant | Status | Size |
|-------|--------|------|
| bf16 | finished, uploading | 439 GB |
| q8_0 | queued (after q2_k) | ~233.27 GB (estimated) |
| q4_k_m | uploading | 132 GB |
| q2_k | generating | ~65 GB |
| q3_k_s | low priority | ~96.05 GB (estimated) |

Note: the bf16 GGUF is missing some DeepSeek-V2-specific metadata parameters; I will look into adding them.

Please use commit 039896407afd40e54321d47c5063c46a52da3e01 of the fork; otherwise, apply these metadata KV overrides:
```
deepseek2.attention.q_lora_rank=int:1536
deepseek2.attention.kv_lora_rank=int:512
deepseek2.expert_shared_count=int:2
deepseek2.expert_feed_forward_length=int:1536
deepseek2.leading_dense_block_count=int:1
```
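
llama.cpp accepts such overrides on the command line via `--override-kv` (one flag per key, in `KEY=TYPE:VALUE` form). A sketch of how they might be passed; the model file name below is an assumption, not the actual file name in this repo:

```
# file name is an example only; point -m at your merged GGUF
./main -m deepseek-v2-chat-q4_k_m.gguf \
  --override-kv deepseek2.attention.q_lora_rank=int:1536 \
  --override-kv deepseek2.attention.kv_lora_rank=int:512 \
  --override-kv deepseek2.expert_shared_count=int:2 \
  --override-kv deepseek2.expert_feed_forward_length=int:1536 \
  --override-kv deepseek2.leading_dense_block_count=int:1 \
  -p "Hello"
```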