---
library_name: transformers
pipeline_tag: text-generation
tags:
- 70b
- IQ4_XS
- deepseek
- distill
- gguf
- iq4
- llama
- llama-cpp
- text-generation
- uncensored
---

# roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-IQ4_XS-GGUF

**Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-IQ4_XS-GGUF`
**Original Model:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2`
**Quantized File:** `DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.IQ4_XS.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `IQ4_XS`

## Overview
This is a GGUF IQ4_XS quantized version of DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.
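
The quantized file can be run with any llama.cpp-based runtime that supports GGUF. Below is a minimal sketch using the `llama-cpp-python` bindings together with `huggingface_hub` to fetch the file from this repo; the `n_ctx` and `n_gpu_layers` values are illustrative assumptions, not recommendations, and should be tuned to your hardware and build.

```python
# Minimal sketch: download the IQ4_XS GGUF from this repo and run it with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`; n_ctx / n_gpu_layers are placeholders.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch (and cache) the quantized file listed above.
model_path = hf_hub_download(
    repo_id="roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Uncensored-v2-IQ4_XS-GGUF",
    filename="DeepSeek-R1-Distill-Llama-70B-Uncensored-v2.IQ4_XS.gguf",
)

# Load the model; n_gpu_layers=-1 offloads all layers if llama.cpp was built with GPU support.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Simple chat-style generation.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```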
## Quantization By
I often have idle GPUs while building and testing the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).