JosephusCheung committed
Commit · e6fac1b
1 Parent(s): 5d526af
Update README.md
README.md CHANGED
@@ -34,12 +34,10 @@ tags:
 
 *Image drawn by GPT-4 DALL·E 3* TL;DR: Perhaps better than all existing models < 70B, in most quantitative evaluations...
 
-# Please Stop Using WRONG unofficial quant models unless you know what you're doing
-
-GPTQ quants require a good dataset for calibration, and the default C4 dataset is not capable.
-
 **llama.cpp GGUF models**
 GPT2Tokenizer fixed by [Kerfuffle](https://github.com/KerfuffleV2) on [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743), new models to be reuploaded.
+Thanks TheBloke for GGUF versions: [https://huggingface.co/TheBloke/CausalLM-14B-GGUF](https://huggingface.co/TheBloke/CausalLM-14B-GGUF)
+
 
 # Read Me:
 
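The paragraph removed in this commit argues that GPTQ calibration quality depends on the calibration data, and that the default C4 samples are not good enough for this model. As a rough illustration only, and not the author's quantization recipe, the sketch below shows how a hand-picked calibration set could be passed to AutoGPTQ instead; the model id, calibration texts, and quantization settings are all assumptions.

```python
# Hypothetical sketch: GPTQ quantization with a hand-picked calibration set,
# instead of the generic C4 samples that many one-click quant scripts default to.
# The model id, calibration texts, and settings below are assumptions.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "CausalLM/14B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Calibration samples should resemble the text the model will actually see;
# a few hundred in-domain examples usually beat generic web text.
calibration_texts = [
    "A representative prompt/response pair in the model's chat format...",
    "Another in-domain example covering the languages the model targets...",
]
examples = [tokenizer(t) for t in calibration_texts]

quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=True)
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
model.quantize(examples)  # runs GPTQ calibration over the provided samples
model.save_quantized("causallm-14b-gptq-4bit")
```

The substance of the removed warning is the `calibration_texts` list: quants calibrated on data far from the model's real input distribution can degrade quality, which is presumably why the author discourages the unofficial GPTQ uploads.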
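For the official GGUF files linked above ([TheBloke/CausalLM-14B-GGUF](https://huggingface.co/TheBloke/CausalLM-14B-GGUF)), here is a minimal sketch of loading one quant with llama-cpp-python. The file name, context size, and sampling settings are assumptions, so match them to the quant you actually download.

```python
# Hypothetical usage sketch: running a GGUF quant of CausalLM-14B with llama-cpp-python.
# The model file name and generation settings are assumptions; pick the quant you
# downloaded from https://huggingface.co/TheBloke/CausalLM-14B-GGUF.
from llama_cpp import Llama

llm = Llama(
    model_path="causallm_14b.Q4_K_M.gguf",  # assumed file name
    n_ctx=4096,                             # assumed context window
)

output = llm(
    "Write a one-sentence summary of what GGUF is.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```

Since the README notes that the GPT2Tokenizer fix landed in llama.cpp PR #3743, an up-to-date llama.cpp or llama-cpp-python build is presumably needed for correct tokenization.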