JosephusCheung committed
Commit 34a4df0 · Parent: 83f7e5b

Update README.md

Files changed (1): README.md (+4 -0)
README.md CHANGED
```diff
@@ -37,6 +37,8 @@ tags:
 # CausalLM 14B - Fully Compatible with Meta LLaMA 2
 Use the transformers library, which requires no remote/external code, to load the model: AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM to load the LM and GPT2Tokenizer to load the tokenizer). Model quantization is fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.
 
+# Friendly reminder: If your VRAM is insufficient, you should use the 7B model instead of the quantized version.
+
 **llama.cpp GGUF models**
 GPT2Tokenizer fixed by [Kerfuffle](https://github.com/KerfuffleV2) in [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743); new models are now reuploaded.
@@ -110,6 +112,8 @@ We are currently unable to produce accurate benchmark templates for non-QA tasks
 # Causal Language Model 14B - Fully Compatible with Meta LLaMA 2
 Load the model with the transformers library, which requires no remote/external code: AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM to load the LM and GPT2Tokenizer to load the tokenizer). Model quantization is fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.
 
+# Friendly reminder: If your VRAM is insufficient, you should use the 7B model instead of the quantized version.
+
 **llama.cpp GGUF models**
 GPT2Tokenizer support fixed by [Kerfuffle](https://github.com/KerfuffleV2) in [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743); new models will be uploaded shortly.
```
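The loading instructions in the README (plain `AutoModelForCausalLM` / `AutoTokenizer`, no remote code) can be sketched in Python. This is a minimal sketch, not the author's canonical snippet: the repo id `"CausalLM/14B"` and the `torch_dtype="auto"` argument are assumptions for illustration; adjust them to your setup.

```python
def load_causallm(repo_id: str = "CausalLM/14B"):
    """Load the model and tokenizer with the plain transformers API,
    without trust_remote_code.

    Per the README, for this checkpoint AutoTokenizer resolves to
    GPT2Tokenizer and AutoModelForCausalLM resolves to LlamaForCausalLM,
    so you could also name those classes explicitly.
    The repo id default is an assumption for illustration.
    """
    # Imported lazily so merely defining this helper does not require
    # transformers to be installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")
    return model, tokenizer
```

Because the model is fully LLaMA-2-compatible, the same helper works whether you later export the weights to GGUF, GPTQ, or AWQ for quantized inference.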
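The VRAM reminder added in this commit can be made concrete with a back-of-the-envelope weight-memory estimate. This counts weights only (it ignores activations, the KV cache, and framework overhead, so real usage is higher), and the parameter counts are rough assumptions:

```python
def weight_memory_gib(n_params: float, bytes_per_param: float) -> float:
    """Rough GPU memory needed just to hold the weights, in GiB.

    Lower bound only: activations, KV cache, and framework overhead
    are not included.
    """
    return n_params * bytes_per_param / 2**30

# Rough parameter counts (assumptions): ~14e9 for the 14B, ~7e9 for the 7B.
fp16_14b = weight_memory_gib(14e9, 2)    # ~26 GiB: fp16 14B, beyond a 24 GiB card
q4_14b   = weight_memory_gib(14e9, 0.5)  # ~6.5 GiB: 4-bit quantized 14B
fp16_7b  = weight_memory_gib(7e9, 2)     # ~13 GiB: unquantized 7B
```

The numbers show why a quantized 14B is tempting on small GPUs, which is exactly the situation the README's reminder addresses: when VRAM is insufficient, the author recommends the 7B model rather than a quantized 14B.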