---
library_name: transformers
pipeline_tag: text-generation
tags:
- 6-bit
- Q6_K
- arxivllama
- gguf
- llama-cpp
- text-generation
---

# roleplaiapp/ArxivLlama-3.1-8B-Q6_K-GGUF

**Repo:** `roleplaiapp/ArxivLlama-3.1-8B-Q6_K-GGUF`
**Original Model:** `ArxivLlama-3.1-8B`
**Quantized File:** `ArxivLlama-3.1-8B.Q6_K.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q6_K`

## Overview
This is a GGUF Q6_K quantized version of ArxivLlama-3.1-8B.
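The quantized file can be used with any llama.cpp-compatible runtime. Below is a minimal sketch using `llama-cpp-python` (assumes `pip install llama-cpp-python huggingface_hub`; the prompt and generation parameters are illustrative, not part of this repo):

```python
# Minimal sketch: load the Q6_K GGUF file from the Hub and run a chat completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="roleplaiapp/ArxivLlama-3.1-8B-Q6_K-GGUF",
    filename="ArxivLlama-3.1-8B.Q6_K.gguf",
    n_ctx=4096,  # context window; adjust to available memory
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the main idea of attention in transformers."}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```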

## Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/).