YokaiKoibito/falcon-40b-GGUF
Tags: GGUF · Transformers · falcon · text-generation-inference · Inference Endpoints
License: other
Branch: main · 2 contributors · History: 9 commits
Latest commit: "Add f16 as split file due to 50GB limit" by YokaiKoibito (92d70cb, about 1 year ago)
| File | Size | LFS | Last commit | Updated |
|---|---|---|---|---|
| .gitattributes | 1.61 kB | | Quantized files | about 1 year ago |
| README.md | 2.93 kB | | Update README.md | about 1 year ago |
| falcon-40b-Q2_K.gguf | 17.4 GB | LFS | Quantized files | about 1 year ago |
| falcon-40b-Q3_K_L.gguf | 21.6 GB | LFS | Quantized files | about 1 year ago |
| falcon-40b-Q3_K_M.gguf | 20.1 GB | LFS | Quantized files | about 1 year ago |
| falcon-40b-Q3_K_S.gguf | 18.3 GB | LFS | Quantized files | about 1 year ago |
| falcon-40b-Q4_K_M.gguf | 25.5 GB | LFS | Quantized files | about 1 year ago |
| falcon-40b-Q4_K_S.gguf | 23.8 GB | LFS | Quantized files | about 1 year ago |
| falcon-40b-Q5_K_M.gguf | 30.6 GB | LFS | Quantized files | about 1 year ago |
| falcon-40b-Q5_K_S.gguf | 29 GB | LFS | Quantized files | about 1 year ago |
| falcon-40b-Q6_K.gguf | 34.5 GB | LFS | Quantized files | about 1 year ago |
| falcon-40b-Q8_0.gguf | 44.5 GB | LFS | Quantized files | about 1 year ago |
| falcon-40b-f16.gguf-split-a | 49.4 GB | LFS | Add f16 as split file due to 50GB limit | about 1 year ago |
| falcon-40b-f16.gguf-split-b | 34.3 GB | LFS | Add f16 as split file due to 50GB limit | about 1 year ago |
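The full-precision f16 model exceeds the Hub's 50 GB per-file limit, so it is stored here as two split parts that must be rejoined before use. The sketch below is illustrative rather than an official recipe: it assumes the parts are a plain byte-level split of the original GGUF (which is what simple concatenation requires), uses huggingface_hub to fetch the files, and joins the two halves locally.

```python
# Minimal sketch: download a quantized file from this repo and rejoin the
# split f16 parts. Assumes the split files were produced by a simple
# byte-level split, so concatenating split-a followed by split-b restores
# the original falcon-40b-f16.gguf.
from huggingface_hub import hf_hub_download

repo_id = "YokaiKoibito/falcon-40b-GGUF"

# Download a single quantized variant (e.g. Q4_K_M, 25.5 GB).
q4_path = hf_hub_download(repo_id=repo_id, filename="falcon-40b-Q4_K_M.gguf")
print("Q4_K_M downloaded to:", q4_path)

# Download both halves of the f16 model and concatenate them.
part_a = hf_hub_download(repo_id=repo_id, filename="falcon-40b-f16.gguf-split-a")
part_b = hf_hub_download(repo_id=repo_id, filename="falcon-40b-f16.gguf-split-b")

with open("falcon-40b-f16.gguf", "wb") as out:
    for part in (part_a, part_b):
        with open(part, "rb") as src:
            # Copy in chunks to avoid holding tens of GB in memory at once.
            while chunk := src.read(1 << 24):
                out.write(chunk)
```

Note that the combined f16 file is roughly 84 GB, so check available disk space before rejoining; the quantized single-file variants above can be used directly without this step.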