TheBloke committed on
Commit
c810947
1 Parent(s): 7f9315a

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
```diff
@@ -28,7 +28,7 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vigogne-Instruct-13B-GPTQ)
 * [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Vigogne-Instruct-13B-GGML)
-* [Original unquantised fp16 model in HF format](https://huggingface.co/bofenghuang/vigogne-instruct-13b)
+* [Unquantised fp16 model in HF format](https://huggingface.co/TheBloke/Vigogne-Instruct-13B-HF)
 
 ## THE FILES IN MAIN BRANCH REQUIRES LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
```