avemio-digital committed
Commit f6f5847 · verified · 1 Parent(s): fbf0267

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -6,16 +6,16 @@ tags:
 - sft
 - llama-cpp
 - gguf-my-repo
-base_model: avemio/GRAG-R1-Distill-QWEN-14B-SFT-DE
+base_model: avemio/German-RAG-R1-Distill-QWEN-14B-SFT-DE
 license: mit
 language:
 - de
 - en
 ---
 
-# avemio-digital/GRAG-QWEN-R1-DEEP-THINKING-REASONING-Q8_0-GGUF
-This model was converted to GGUF format from [`avemio/GRAG-R1-Distill-QWEN-14B-SFT-DE`](https://huggingface.co/avemio/GRAG-R1-Distill-QWEN-14B-SFT-DE) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
-Refer to the [original model card](https://huggingface.co/avemio/GRAG-R1-Distill-QWEN-14B-SFT-DE) for more details on the model.
+# avemio-digital/German-RAG-QWEN-R1-DEEP-THINKING-REASONING-Q8_0-GGUF
+This model was converted to GGUF format from [`avemio/German-RAG-R1-Distill-QWEN-14B-SFT-DE`](https://huggingface.co/avemio/German-RAG-R1-Distill-QWEN-14B-SFT-DE) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+Refer to the [original model card](https://huggingface.co/avemio/German-RAG-R1-Distill-QWEN-14B-SFT-DE) for more details on the model.
 
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
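
The install command itself sits just past the end of this hunk. Assuming the stock GGUF-my-repo README template that the surrounding text follows, it is the Homebrew one-liner:

```bash
# Install the llama.cpp CLI and server binaries via Homebrew (macOS and Linux)
brew install llama.cpp
```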
@@ -28,12 +28,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo avemio/GRAG-R1-Distill-QWEN-14B-SFT-DE_Q8_0-GGUF --hf-file grag-qwen-r1-deep-thinking-reasoning-q8_0.gguf -p "The meaning to life and the universe is"
+llama-cli --hf-repo avemio/German-RAG-R1-Distill-QWEN-14B-SFT-DE_Q8_0-GGUF --hf-file German-RAG-qwen-r1-deep-thinking-reasoning-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo avemio/GRAG-R1-Distill-QWEN-14B-SFT-DE_Q8_0-GGUF --hf-file grag-qwen-r1-deep-thinking-reasoning-q8_0.gguf -c 2048
+llama-server --hf-repo avemio/German-RAG-R1-Distill-QWEN-14B-SFT-DE_Q8_0-GGUF --hf-file German-RAG-qwen-r1-deep-thinking-reasoning-q8_0.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
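
Neither side of the diff shows how to talk to the server once it is running. As a minimal sketch, assuming llama-server's default bind address of 127.0.0.1:8080 and its native /completion endpoint:

```bash
# Send a completion request to a running llama-server instance;
# n_predict caps the number of tokens generated.
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 128}'
```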
@@ -50,9 +50,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo avemio/GRAG-QWEN-R1-DEEP-THINKING-REASONING-Q8_0-GGUF --hf-file grag-qwen-r1-deep-thinking-reasoning-q8_0.gguf -p "The meaning to life and the universe is"
+./llama-cli --hf-repo avemio/German-RAG-QWEN-R1-DEEP-THINKING-REASONING-Q8_0-GGUF --hf-file German-RAG-qwen-r1-deep-thinking-reasoning-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo avemio/GRAG-QWEN-R1-DEEP-THINKING-REASONING-Q8_0-GGUF --hf-file grag-qwen-r1-deep-thinking-reasoning-q8_0.gguf -c 2048
+./llama-server --hf-repo avemio/German-RAG-QWEN-R1-DEEP-THINKING-REASONING-Q8_0-GGUF --hf-file German-RAG-qwen-r1-deep-thinking-reasoning-q8_0.gguf -c 2048
 ```
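
Steps 1 and 2 of the build fall outside this hunk; only Step 2's command survives in the hunk header. Assuming the stock template, the preceding sequence is:

```bash
# Step 1: clone llama.cpp from GitHub
git clone https://github.com/ggerganov/llama.cpp
# Step 2: build with LLAMA_CURL=1 so the binaries can pull models
# from Hugging Face via --hf-repo
cd llama.cpp && LLAMA_CURL=1 make
```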
 