fuzzy-mittenz committed
Commit aa9c464 · verified · 1 Parent(s): 35dde20

Update README.md

Files changed (1): README.md (+5 −4)
README.md CHANGED
@@ -5,13 +5,14 @@ tags:
 - mergekit
 - merge
 - llama-cpp
-- gguf-my-repo
 license: apache-2.0
 ---
 
 # fuzzy-mittenz/3Blarenegv3-ECE-PRYMMAL-Martial-Q4_K_M-GGUF
-This model was converted to GGUF format from [`brgx53/3Blarenegv3-ECE-PRYMMAL-Martial`](https://huggingface.co/brgx53/3Blarenegv3-ECE-PRYMMAL-Martial) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
-Refer to the [original model card](https://huggingface.co/brgx53/3Blarenegv3-ECE-PRYMMAL-Martial) for more details on the model.
+
+Q6 available at [From-the-Ashes](https://huggingface.co/IntelligentEstate/Prymmal-From_The_Ashes-Q6_k-GGUF)
+
+This model was converted to GGUF format from [`brgx53/3Blarenegv3-ECE-PRYMMAL-Martial`](https://huggingface.co/brgx53/3Blarenegv3-ECE-PRYMMAL-Martial) using llama.cpp. Refer to the [original model card](https://huggingface.co/brgx53/3Blarenegv3-ECE-PRYMMAL-Martial) for more details on the model.
 
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
@@ -51,4 +52,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo fuzzy-mittenz/3Blarenegv3-ECE-PRYMMAL-Martial-Q4_K_M-GGUF --hf-file 3blarenegv3-ece-prymmal-martial-q4_k_m.gguf -c 2048
-```
+```