fuzzy-mittenz committed · commit 2d037be · verified · 1 parent: 2afd869

Update README.md

Files changed (1): README.md (+3 −2)

README.md
@@ -5,6 +5,7 @@ license_link: https://huggingface.co/MadeAgents/Hammer2.1-3b/blob/main/LICENSE
 datasets:
 - Salesforce/xlam-function-calling-60k
 - MadeAgents/xlam-irrelevance-7.5k
+- IntelligentEstate/The_Key
 base_model: MadeAgents/Hammer2.1-3b
 tags:
 - llama-cpp
@@ -12,7 +13,7 @@ tags:
 ---
 
 # fuzzy-mittenz/Hammer2.1-3b-Q5_K_S-GGUF
-This model was converted to GGUF format from [`MadeAgents/Hammer2.1-3b`](https://huggingface.co/MadeAgents/Hammer2.1-3b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+This model was converted to GGUF format from [`MadeAgents/Hammer2.1-3b`](https://huggingface.co/MadeAgents/Hammer2.1-3b) using llama.cpp
 Refer to the [original model card](https://huggingface.co/MadeAgents/Hammer2.1-3b) for more details on the model.
 
 ## Use with llama.cpp
@@ -53,4 +54,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo fuzzy-mittenz/Hammer2.1-3b-Q5_K_S-GGUF --hf-file hammer2.1-3b-q5_k_s-imat.gguf -c 2048
-```
+```
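The `llama-server` invocation documented in the README pulls the quantized file straight from the Hub. A minimal sketch of composing that command from its parts (assuming a llama.cpp build with `--hf-repo`/`--hf-file` support, run from the build directory; the variable names here are illustrative):

```shell
# Repo and file names as documented in the README diff above.
HF_REPO="fuzzy-mittenz/Hammer2.1-3b-Q5_K_S-GGUF"
HF_FILE="hammer2.1-3b-q5_k_s-imat.gguf"
CTX=2048  # context size passed via -c

# Compose the command; execute it from your llama.cpp build directory.
CMD="./llama-server --hf-repo $HF_REPO --hf-file $HF_FILE -c $CTX"
echo "$CMD"
```

On first run llama.cpp downloads and caches the GGUF file, then serves an OpenAI-compatible endpoint on localhost.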