Joseph717171 committed on
Commit 0afecbd · verified · 1 Parent(s): 8ca6ee7

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -2,4 +2,4 @@ Custom GGUF quants of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.c
 
 Update: This repo now contains OF32.EF32 GGUF IQuants for even more accuracy. Enjoy! 😋
 
-UPDATE: This repo now contains updated O.E.IQuants, quantized with a new F32 imatrix using llama.cpp version 4067 (54ef9cfc). That version of llama.cpp made all K*Q mat_mul computations run in F32 instead of BF16 when using FA (Flash Attention). This change, together with the earlier, equally impactful change that computed all K*Q mat_muls in F32 (float32) precision on CUDA-enabled devices, compounded to enhance the O.E.IQuants and made this update worth pushing. Cheers!
+UPDATE: This repo now contains updated O.E.IQuants, quantized with a new F32 imatrix using llama.cpp version 4658 (855cd073). That version of llama.cpp added support for non-contiguous RMS norms, which has enhanced model coherence and, in testing, further increased model creativity.
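The removed line above argues that computing K*Q mat_muls in F32 rather than BF16 improves accuracy. A minimal NumPy sketch (not llama.cpp code) can illustrate why accumulation precision matters in a K*Q-style dot product; NumPy has no native bfloat16, so float16 stands in here for the lower-precision case.

```python
import numpy as np

def dot_in(dtype, a, b):
    """Accumulate a dot product entirely in the given floating-point dtype."""
    acc = dtype(0.0)
    for x, y in zip(a, b):
        acc = dtype(acc + dtype(x) * dtype(y))
    return float(acc)

rng = np.random.default_rng(0)
q = rng.standard_normal(4096)  # one "query" row (hypothetical sizes)
k = rng.standard_normal(4096)  # one "key" row

exact = float(np.dot(q, k))                      # float64 reference
err_f32 = abs(dot_in(np.float32, q, k) - exact)  # F32 accumulation
err_f16 = abs(dot_in(np.float16, q, k) - exact)  # reduced-precision accumulation

# The F32 accumulator tracks the reference far more closely.
assert err_f32 < err_f16
print(f"f32 error: {err_f32:.3e}, f16 error: {err_f16:.3e}")
```

Rounding each partial sum at reduced precision compounds error over thousands of terms, which is the effect the F32 K*Q change avoids.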