roleplaiapp committed on
Commit 4b88c81 · verified · 1 Parent(s): a33756f

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +47 -0
README.md ADDED
@@ -0,0 +1,47 @@
---
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- llama-cpp
- Dria-Agent-a-7B
- gguf
- Q3_K_M
- 7B
- 3-bit
- Dria-Agent
- driaforall
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
---

# roleplaiapp/Dria-Agent-a-7B-Q3_K_M-GGUF

**Repo:** `roleplaiapp/Dria-Agent-a-7B-Q3_K_M-GGUF`
**Original Model:** `Dria-Agent-a-7B`
**Organization:** `driaforall`
**Quantized File:** `dria-agent-a-7b-q3_k_m.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q3_K_M`
**Use Imatrix:** `False`
**Split Model:** `False`

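The repo id and filename above are all that is needed to pull the quant locally. A minimal sketch using `huggingface_hub` (assumed installed via `pip install huggingface_hub`; the variable name `gguf_path` is just illustrative):

```python
from huggingface_hub import hf_hub_download

# Fetch the single quantized file from this repo into the local HF cache,
# using the repo id and filename listed above.
gguf_path = hf_hub_download(
    repo_id="roleplaiapp/Dria-Agent-a-7B-Q3_K_M-GGUF",
    filename="dria-agent-a-7b-q3_k_m.gguf",
)
print(gguf_path)  # local path to the downloaded .gguf file
```
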
## Overview
This is a GGUF Q3_K_M quantized version of [Dria-Agent-a-7B](https://huggingface.co/driaforall/Dria-Agent-a-7B).

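Since the file is a llama.cpp-compatible GGUF, one way to run it is through `llama-cpp-python`; a minimal, illustrative sketch (the context size, GPU offload setting, prompt, and token limit are arbitrary example values, not settings recommended by this repo):

```python
from llama_cpp import Llama

# Load the quantized model straight from the Hub (downloads on first use).
llm = Llama.from_pretrained(
    repo_id="roleplaiapp/Dria-Agent-a-7B-Q3_K_M-GGUF",
    filename="dria-agent-a-7b-q3_k_m.gguf",
    n_ctx=4096,       # example context window
    n_gpu_layers=-1,  # offload all layers when a GPU build of llama.cpp is available
)

# Q3_K_M is a 3-bit K-quant, so expect some quality loss versus the full-precision original.
out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
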
## Quantization By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)