FantasiaFoundry committed
Commit 9c663aa · verified · 1 Parent(s): c737bca

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -22,6 +22,8 @@ tags:
 > Basically, make sure you're in the latest llama.cpp repo commit, then run the new `convert-hf-to-gguf-update.py` script inside the repo (you will need to provide a huggingface-read-token, and you need to have access to the Meta-Llama-3 repositories – [here](https://huggingface.co/meta-llama/Meta-Llama-3-8B) and [here](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) – so, to be safe, fill in the access request form right away to be able to fetch the necessary files); afterwards, manually copy the config files from `llama.cpp\models\tokenizers\llama-bpe` into your downloaded **model** folder, replacing the existing ones. <br>
 > Try again and the conversion process should work as expected.
 >
+ > **Experimental:**
+ >
 > There is a new experimental script, `gguf-imat-llama-3-lossless.py`, which performs the conversion directly from a BF16 GGUF in the hope of generating lossless Llama-3 model quantizations and avoiding the recently discussed issues on that topic. It is more resource intensive and will generate more writes to the drive, since there is a whole additional conversion step that isn't performed by the previous version. This should only be necessary until we have GPU support for BF16 so the model can be run directly without conversion.


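For reference, the manual copy step described in the updated instructions (overwriting the tokenizer/config files in the downloaded model folder with the ones generated under `llama.cpp\models\tokenizers\llama-bpe`) could be scripted roughly as below. This is a minimal sketch; the two directory paths are placeholder assumptions for your local layout, and the exact file list depends on what `convert-hf-to-gguf-update.py` produced.

```python
# Sketch of the manual step from the instructions above: copy the files produced
# by convert-hf-to-gguf-update.py over the existing ones in the model folder.
# Both paths below are placeholders for illustration.
import shutil
from pathlib import Path

llama_cpp_dir = Path("llama.cpp")                     # local llama.cpp checkout (assumed location)
model_dir = Path("models/Meta-Llama-3-8B-Instruct")   # downloaded HF model folder (assumed location)
tokenizer_dir = llama_cpp_dir / "models" / "tokenizers" / "llama-bpe"

for src in tokenizer_dir.iterdir():
    if src.is_file():
        dst = model_dir / src.name
        shutil.copy2(src, dst)  # replace the existing config file
        print(f"copied {src} -> {dst}")
```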
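The experimental lossless flow amounts to adding a BF16 intermediate: convert the HF checkpoint to a BF16 GGUF first, then quantize from that file. A minimal sketch of that two-step pipeline using llama.cpp's own tools follows; the flag `--outtype bf16`, the `quantize` binary path, and the Q4_K_M output choice are assumptions about the llama.cpp version in use, and the actual `gguf-imat-llama-3-lossless.py` script also handles imatrix generation, which is not shown here.

```python
# Hedged sketch of the two-step conversion the experimental script performs:
# HF model -> BF16 GGUF -> quantized GGUF. Tool flags and paths are assumptions
# and may differ depending on your llama.cpp version.
import subprocess

model_dir = "models/Meta-Llama-3-8B-Instruct"        # downloaded HF model (assumed path)
bf16_gguf = "Meta-Llama-3-8B-Instruct-BF16.gguf"     # intermediate BF16 GGUF (the extra writes to disk)
quant_gguf = "Meta-Llama-3-8B-Instruct-Q4_K_M.gguf"  # final quantized output (example type)

# Step 1: HF checkpoint -> BF16 GGUF (the additional conversion step)
subprocess.run(
    ["python", "llama.cpp/convert-hf-to-gguf.py", model_dir,
     "--outtype", "bf16", "--outfile", bf16_gguf],
    check=True,
)

# Step 2: BF16 GGUF -> quantized GGUF
subprocess.run(
    ["llama.cpp/quantize", bf16_gguf, quant_gguf, "Q4_K_M"],
    check=True,
)
```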