roleplaiapp committed
Commit 6ad021e · verified · 1 Parent(s): b208f83

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +48 -0
README.md ADDED
@@ -0,0 +1,48 @@
---
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- llama-cpp
- Dria-Agent-a-3B
- gguf
- Q4_K_M
- 3B
- 4-bit
- Dria-Agent
- driaforall
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
---

# roleplaiapp/Dria-Agent-a-3B-Q4_K_M-GGUF

**Repo:** `roleplaiapp/Dria-Agent-a-3B-Q4_K_M-GGUF`
**Original Model:** `Dria-Agent-a-3B`
**Organization:** `driaforall`
**Quantized File:** `dria-agent-a-3b-q4_k_m.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q4_K_M`
**Use Imatrix:** `False`
**Split Model:** `False`

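If you only need a local copy of the quantized file listed above, a minimal download sketch with `huggingface_hub` looks like this (assuming the package is installed; the repo id and filename are the ones in this card, the rest is illustrative):

```python
# Minimal sketch: fetch the quantized GGUF listed above via huggingface_hub.
# repo_id and filename come from this card; the print is only illustrative.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="roleplaiapp/Dria-Agent-a-3B-Q4_K_M-GGUF",
    filename="dria-agent-a-3b-q4_k_m.gguf",
)
print("Quantized model downloaded to:", gguf_path)
```
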
## Overview
This is a GGUF Q4_K_M quantized version of [Dria-Agent-a-3B](https://huggingface.co/driaforall/Dria-Agent-a-3B).

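One way to run this file is with `llama-cpp-python`; the sketch below assumes that package is installed, uses `Llama.from_pretrained` to pull the GGUF through `huggingface_hub`, and treats the context size, prompt, and token limit as examples only:

```python
# Minimal sketch: load the Q4_K_M GGUF with llama-cpp-python and run one chat turn.
# repo_id and filename match the quantized file above; other settings are examples.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="roleplaiapp/Dria-Agent-a-3B-Q4_K_M-GGUF",
    filename="dria-agent-a-3b-q4_k_m.gguf",
    n_ctx=4096,      # example context window
    verbose=False,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that checks whether a number is prime."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```
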
## Quantization By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)