MaziyarPanahi/Lamarck-14B-v0.7-Fusion-GGUF
Tags: Text Generation · GGUF · quantized · 2-bit · 3-bit · 4-bit precision · 5-bit · 6-bit · 8-bit precision · conversational
Lamarck-14B-v0.7-Fusion-GGUF (ref: refs/pr/1)
1 contributor · History: 7 commits
Latest commit by MaziyarPanahi: 7b1b1ea9c6ff7ef6efee15f86b42473d853a57ffa0da0a75ca3d0f169821932a (c0388b5, verified, 2 days ago)
.gitattributes · 1.95 kB · 7b1b1ea9c6ff7ef6efee15f86b42473d853a57ffa0da0a75ca3d0f169821932a · 2 days ago
Lamarck-14B-v0.7-Fusion-GGUF_imatrix.dat · 8.56 MB · LFS · 7b1b1ea9c6ff7ef6efee15f86b42473d853a57ffa0da0a75ca3d0f169821932a · 2 days ago
Lamarck-14B-v0.7-Fusion.Q5_K_M.gguf · 10.5 GB · LFS · 3bbcd7ad99f121d45423f0c3f1492c656c439ed5f4a8a15f9aa347e1db6cc417 · 2 days ago
Lamarck-14B-v0.7-Fusion.Q5_K_S.gguf · 10.3 GB · LFS · f09ad983f5476c60d368410b32433e5a0140571a80e41824dc6229053e76696f · 2 days ago
Lamarck-14B-v0.7-Fusion.Q6_K.gguf · 12.1 GB · LFS · de2a79aeea826aa2b0807cd5657ef26ea6b2b7d82ab512136ddce331e6290adc · 2 days ago
Lamarck-14B-v0.7-Fusion.Q8_0.gguf · 15.7 GB · LFS · 64d48c00eb678eb275dfe1455bdafa4b1d76b8369a8421a0fed196c4c516a922 · 2 days ago
Lamarck-14B-v0.7-Fusion.fp16.gguf · 29.5 GB · LFS · 23ec98259838d08513ffe564f038d8f48111bdc7e213d5413cf9fc8236e506c2 · 2 days ago
README.md · 3.06 kB · 7b1b1ea9c6ff7ef6efee15f86b42473d853a57ffa0da0a75ca3d0f169821932a · 2 days ago
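
The listing above contains several GGUF quantizations of the same model (Q5_K_S, Q5_K_M, Q6_K, Q8_0, and an fp16 reference). A minimal sketch of how one of these files might be downloaded and loaded locally, assuming huggingface_hub and llama-cpp-python are installed; the repo id and filename come from the listing, while n_ctx, n_gpu_layers, and the prompt are illustrative choices, not values specified by the repository.

# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant (Q5_K_M here, roughly 10.5 GB) into the local HF cache.
# If the files live only on the PR ref shown above, revision="refs/pr/1" can be passed.
model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Lamarck-14B-v0.7-Fusion-GGUF",
    filename="Lamarck-14B-v0.7-Fusion.Q5_K_M.gguf",
)

# Load the GGUF file with the llama.cpp Python bindings; -1 offloads all layers to GPU if one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# The repo is tagged "conversational", so the chat-completion API is used here.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the GGUF format in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])

Which quant to pick is a memory trade-off: the sizes in the table (10.3 GB for Q5_K_S up to 29.5 GB for fp16) are a reasonable proxy for the RAM or VRAM the loaded model will need.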