maddes8cht committed
Commit 4cc9a44 • Parent: 92e0909

"Update README.md"

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -18,9 +18,12 @@ These will contain increasingly more content to help find the best models for a
 # falcon-40b - GGUF
 - Model creator: [tiiuae](https://huggingface.co/tiiuae)
 - Original model: [falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
+
 These are gguf quantized models of the original Falcon 40B model by tiiuae.
 Falcon is a foundational large language model coming in two sizes: 7b and 40b.
 
+
+
 # About GGUF format
 
 `gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
@@ -44,6 +47,7 @@ So, if possible, use K-quants.
 With a Q6_K you should find it really hard to see a quality difference from the original model - ask your model the same question twice and you may encounter bigger differences between the answers.
 
 
+
 # Original Model Card:
 # 🚀 Falcon-40B
 
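For reference, a `gguf` file like the ones this repository provides is typically loaded through a ggml-based runtime. Below is a minimal, illustrative sketch using the `llama-cpp-python` bindings; the file name is an assumed placeholder for whichever quant you download, and Falcon support depends on the runtime build you use.

```python
# Minimal sketch: loading a GGUF-quantized Falcon model with llama-cpp-python.
# The model_path below is a hypothetical local file name; substitute the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="falcon-40b.Q6_K.gguf",  # assumed placeholder path to a Q6_K quant
    n_ctx=2048,                          # context window size
    n_gpu_layers=0,                      # raise to offload layers to GPU if built with CUDA
)

out = llm("Falcon is a large language model that", max_tokens=64)
print(out["choices"][0]["text"])
```

With a different quant (for example a Q4_K file) the call is identical; only the file changes.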
 
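The Q6_K remark in the diff above points at ordinary sampling variance: with a non-zero temperature, two completions of the same prompt already differ, and that run-to-run spread is the baseline against which a Q6_K-versus-original comparison should be judged. A small sketch of that comparison, under the same assumptions and placeholder file name as the example above:

```python
# Sketch: sample the same prompt twice with a non-zero temperature to see
# ordinary run-to-run variation before attributing differences to quantization.
from llama_cpp import Llama

llm = Llama(model_path="falcon-40b.Q6_K.gguf", n_ctx=2048)  # hypothetical path

prompt = "Explain in one sentence what a foundational language model is."
for run in (1, 2):
    out = llm(prompt, max_tokens=48, temperature=0.8)
    print(f"run {run}: {out['choices'][0]['text'].strip()}")
```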