fuzzy-mittenz committed on
Commit 37b83e8 · verified · 1 parent: 4152e4f

Update README.md

Files changed (1):
  README.md (+2 -2)
README.md CHANGED

@@ -7,9 +7,9 @@ tags:
 - llama-cpp
 - gguf-my-repo
 ---
-
+Test
 # fuzzy-mittenz/Qwen2.5-Coder-7B-Instruct-abliterated-TIES-v2.0-IQ4_NL-GGUF
-This model was converted to GGUF format from [`BenevolenceMessiah/Qwen2.5-Coder-7B-Instruct-abliterated-TIES-v2.0`](https://huggingface.co/BenevolenceMessiah/Qwen2.5-Coder-7B-Instruct-abliterated-TIES-v2.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+This model was converted to GGUF format from [`BenevolenceMessiah/Qwen2.5-Coder-7B-Instruct-abliterated-TIES-v2.0`](https://huggingface.co/BenevolenceMessiah/Qwen2.5-Coder-7B-Instruct-abliterated-TIES-v2.0) using llama.cpp
 Refer to the [original model card](https://huggingface.co/BenevolenceMessiah/Qwen2.5-Coder-7B-Instruct-abliterated-TIES-v2.0) for more details on the model.
 
 ## Use with llama.cpp
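
Since the README describes a GGUF conversion intended for llama.cpp, a typical invocation looks like the sketch below. This is an assumption, not part of the commit: `llama-cli` with `--hf-repo`/`--hf-file` is llama.cpp's standard way to fetch and run a GGUF from the Hub, but the exact `.gguf` file name is hypothetical and should be checked against the repo's file list.

```shell
# Sketch: run the IQ4_NL quant with llama.cpp's CLI.
# The --hf-file name below is a guess; verify it in the repo before use.
llama-cli \
  --hf-repo fuzzy-mittenz/Qwen2.5-Coder-7B-Instruct-abliterated-TIES-v2.0-IQ4_NL-GGUF \
  --hf-file qwen2.5-coder-7b-instruct-abliterated-ties-v2.0-iq4_nl.gguf \
  -p "Write a Python function that reverses a string."
```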