---
library_name: transformers
language:
- en
- fr
- it
- pt
- hi
- es
- th
- de
base_model:
- meta-llama/Llama-3.1-70B
tags:
- llama-cpp
- gguf
- Llama-3.3-70B-Instruct
- Q4_0
- meta-llama
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
pipeline_tag: text-generation
---

# roleplaiapp/Llama-3.3-70B-Instruct-Q4_0-GGUF

**Repo:** `roleplaiapp/Llama-3.3-70B-Instruct-Q4_0-GGUF`
**Original Model:** `Llama-3.3-70B-Instruct`
**Organization:** `meta-llama`
**Quantized File:** `llama-3.3-70b-instruct-q4_0.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q4_0`
**Use Imatrix:** `False`
**Imatrix Quant Method:** `IQ4_NL`
**Split Model:** `False`

## Overview
This is a GGUF Q4_0 quantized version of [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).

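As a quick sketch of how a GGUF quant like this is typically used with llama.cpp (the file name below assumes the Q4_0 naming; check the repo's file listing for the exact name before downloading):

```shell
# Fetch just the quantized GGUF file from the Hub
huggingface-cli download roleplaiapp/Llama-3.3-70B-Instruct-Q4_0-GGUF \
  llama-3.3-70b-instruct-q4_0.gguf --local-dir .

# Start an interactive chat session with llama.cpp's CLI
./llama-cli -m llama-3.3-70b-instruct-q4_0.gguf -cnv \
  -p "You are a helpful assistant."
```

Note that a 70B model at Q4_0 still needs roughly 40 GB of RAM/VRAM; use `-ngl` to offload layers to the GPU if llama.cpp was built with GPU support.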
## Quantization By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)