roleplaiapp committed
Commit 2b2e9e9 · verified · 1 Parent(s): eb7bee3

Upload README.md with huggingface_hub

Files changed (1)
1. README.md +28 -0
README.md ADDED
---
library_name: transformers
pipeline_tag: text-generation
tags:
- 2-bit
- Q2_K
- deepsauerhuatuoskywork
- gguf
- llama
- llama-cpp
- text-generation
---

# roleplaiapp/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-Q2_K-GGUF

- **Repo:** `roleplaiapp/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-Q2_K-GGUF`
- **Original Model:** `DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B`
- **Quantized File:** `DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B.Q2_K.gguf`
- **Quantization:** `GGUF`
- **Quantization Method:** `Q2_K`

## Overview
This is a GGUF Q2_K (2-bit) quantized version of DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B, intended for use with llama.cpp and other GGUF-compatible runtimes.
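As a rough usage sketch (not part of the original upload), the file can be loaded with llama-cpp-python. The repo and file names below come from this card; the context size, prompt, and token limit are illustrative assumptions, not recommendations.

```python
# Minimal sketch: load the Q2_K GGUF with llama-cpp-python
# (pip install llama-cpp-python huggingface_hub).
# Repo and file names are taken from this model card; other settings are assumed examples.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="roleplaiapp/DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B-Q2_K-GGUF",
    filename="DeepSauerHuatuoSkywork-R1-o1-Llama-3.1-8B.Q2_K.gguf",
    n_ctx=4096,  # context window; raise or lower to fit available memory
)

# Chat-style generation, using the chat template embedded in the GGUF if present.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In two sentences, what does Q2_K quantization trade away?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Q2_K is the smallest of the standard k-quants, so it minimizes memory use at a noticeable cost in output quality compared with higher-bit quantizations of the same model.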
## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).