Commit a34f304 (verified) by roleplaiapp · 1 Parent(s): a9225d9

Upload README.md with huggingface_hub

Files changed (1): README.md (+50 −0)
README.md ADDED
---
language:
- code
tags:
- llama-cpp
- Codestral-22B-v0.1
- gguf
- Q4_K_S
- 22B
- 4-bit
- Codestral
- mistralai
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
inference: false
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
base_model: mistralai/Codestral-22B-v0.1
library_name: transformers
pipeline_tag: text-generation
---

# roleplaiapp/Codestral-22B-v0.1-Q4_K_S-GGUF

**Repo:** `roleplaiapp/Codestral-22B-v0.1-Q4_K_S-GGUF`
**Original Model:** `Codestral-22B-v0.1`
**Organization:** `mistralai`
**Quantized File:** `codestral-22b-v0.1-q4_k_s.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q4_K_S`
**Use Imatrix:** `False`
**Split Model:** `False`
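
If you only need the quantized file itself, here is a minimal sketch of fetching it with the `huggingface_hub` Python client; the repo id and filename are the ones listed above, and the file is saved to your local Hugging Face cache:

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_S GGUF file from this repo and return its local path.
gguf_path = hf_hub_download(
    repo_id="roleplaiapp/Codestral-22B-v0.1-Q4_K_S-GGUF",
    filename="codestral-22b-v0.1-q4_k_s.gguf",
)
print(gguf_path)
```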

## Overview
This is a GGUF Q4_K_S quantized version of [Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1).
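
As a rough sketch of running the file locally (this assumes the `llama-cpp-python` bindings; any recent llama.cpp-compatible runtime should handle a Q4_K_S GGUF similarly, and the context size and GPU-offload values below are only illustrative):

```python
from llama_cpp import Llama

# Load the quantized model from the downloaded GGUF file.
llm = Llama(
    model_path="codestral-22b-v0.1-q4_k_s.gguf",
    n_ctx=4096,        # illustrative context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# Simple chat-style completion.
out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Write a Python function that reverses a string."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```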

## Quantization By
I often have idle A100 GPUs while building/testing and training the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)