roleplaiapp committed · verified · Commit b3ea8e5 · 1 Parent(s): 7ddaaa9

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +30 -0
README.md ADDED
@@ -0,0 +1,30 @@
+ ---
+ library_name: transformers
+ pipeline_tag: text-generation
+ tags:
+ - 8-bit
+ - Q8_0
+ - alpaca
+ - deepseek
+ - distill
+ - finetuned
+ - gguf
+ - llama-cpp
+ - text-generation
+ ---
+
+ # roleplaiapp/DeepSeek-R1-Distill-Alpaca-FineTuned-Q8_0-GGUF
+
+ **Repo:** `roleplaiapp/DeepSeek-R1-Distill-Alpaca-FineTuned-Q8_0-GGUF`
+ **Original Model:** `DeepSeek-R1-Distill-Alpaca-FineTuned`
+ **Quantized File:** `DeepSeek-R1-Distill-Alpaca-FineTuned.Q8_0.gguf`
+ **Quantization:** `GGUF`
+ **Quantization Method:** `Q8_0`
+
+ ## Overview
+ This is a GGUF Q8_0 quantized version of DeepSeek-R1-Distill-Alpaca-FineTuned.
+ ## Quantization By
+ I often have idle GPUs while building and testing for the RP app, so I put them to use quantizing models.
+ I hope the community finds these quantizations useful.
+
+ Andrew Webby @ [RolePlai](https://roleplai.app/).
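
A minimal usage sketch, assuming the `llama-cpp-python` bindings: the file name matches the **Quantized File** field above, while the context size, GPU offload setting, and Alpaca-style prompt template are illustrative assumptions rather than values stated in the card.

```python
# Minimal sketch: load the Q8_0 GGUF with llama-cpp-python (pip install llama-cpp-python).
# model_path matches the "Quantized File" field above; n_ctx, n_gpu_layers, and the
# prompt/sampling settings are assumptions -- adjust them for your hardware and use case.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-Distill-Alpaca-FineTuned.Q8_0.gguf",
    n_ctx=4096,       # context window (assumed; not specified in the card)
    n_gpu_layers=-1,  # offload all layers if built with GPU support; use 0 for CPU-only
)

# Alpaca-style prompt, guessed from the "alpaca" tag; the exact template may differ.
prompt = "### Instruction:\nExplain what GGUF Q8_0 quantization is.\n\n### Response:\n"
out = llm(prompt, max_tokens=256, temperature=0.7)
print(out["choices"][0]["text"])
```

The same GGUF file can also be loaded directly by llama.cpp's `llama-cli` (or any other GGUF-compatible runtime) by passing it to the `-m` flag.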