fuzzy-mittenz committed · Commit 0224772 · verified · 1 Parent(s): 6dcbc6a

Update README.md



Files changed (1)
  1. README.md +5 -35
README.md CHANGED
@@ -21,13 +21,14 @@ tags:
  - Relation Extraction
  - LLaMA
  - llama-cpp
- - gguf-my-repo
+
  base_model: THU-KEG/ADELIE-DPO-1.5B
  ---

- # fuzzy-mittenz/ADELIE-DPO-1.5B-Q8_0-GGUF
- This model was converted to GGUF format from [`THU-KEG/ADELIE-DPO-1.5B`](https://huggingface.co/THU-KEG/ADELIE-DPO-1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/THU-KEG/ADELIE-DPO-1.5B) for more details on the model.
+ # IntelligentEstate/Keg_Party-DPO-1.5B-Q8_0-GGUF
+ This model was converted to GGUF format from [`THU-KEG/ADELIE-DPO-1.5B`](https://huggingface.co/THU-KEG/ADELIE-DPO-1.5B) using llama.cpp.
+
+ ![kegger.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/5ACNGdhtREZeAoGQRjYH4.png)

  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
@@ -37,34 +38,3 @@ brew install llama.cpp

  ```
  Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama-cli --hf-repo fuzzy-mittenz/ADELIE-DPO-1.5B-Q8_0-GGUF --hf-file adelie-dpo-1.5b-q8_0.gguf -p "The meaning to life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo fuzzy-mittenz/ADELIE-DPO-1.5B-Q8_0-GGUF --hf-file adelie-dpo-1.5b-q8_0.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
-
- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo fuzzy-mittenz/ADELIE-DPO-1.5B-Q8_0-GGUF --hf-file adelie-dpo-1.5b-q8_0.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo fuzzy-mittenz/ADELIE-DPO-1.5B-Q8_0-GGUF --hf-file adelie-dpo-1.5b-q8_0.gguf -c 2048
- ```
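
The updated README drops these run instructions without a replacement, so here is a minimal sketch of the equivalent CLI call against the renamed repo. The `--hf-file` name is an assumption patterned on the old `adelie-dpo-1.5b-q8_0.gguf`; check the repo's file listing for the actual GGUF name.

```bash
# Hypothetical file name, patterned on the old adelie-dpo-1.5b-q8_0.gguf;
# verify the actual GGUF name in the repo's Files tab before running.
llama-cli --hf-repo IntelligentEstate/Keg_Party-DPO-1.5B-Q8_0-GGUF \
  --hf-file keg_party-dpo-1.5b-q8_0.gguf \
  -p "The meaning to life and the universe is"
```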
 
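The matching server invocation, under the same file-name assumption; `-c 2048` mirrors the context size from the removed instructions.

```bash
# Same hypothetical file name as above; -c sets the context window size.
llama-server --hf-repo IntelligentEstate/Keg_Party-DPO-1.5B-Q8_0-GGUF \
  --hf-file keg_party-dpo-1.5b-q8_0.gguf \
  -c 2048
```

Once running, llama-server exposes an OpenAI-compatible HTTP API (default port 8080), so any OpenAI-style client pointed at http://localhost:8080/v1 can query the model.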