MotherEarth committed
Commit b91abb7
1 Parent(s): 30437cc

Update README.md

Files changed (1):
  1. README.md +8 -8
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-base_model: MotherEarth/MotherEarth_1.2
+base_model: MotherEarth/MotherEarth-1.1-8B
 library_name: transformers
 tags:
 - mergekit
@@ -20,9 +20,9 @@ tags:
 - proverbs
 ---
 
-# MotherEarth/MotherEarth_1.2-Q4_K_M-GGUF
-This model was converted to GGUF format from [`MotherEarth/MotherEarth_1.2`](https://huggingface.co/MotherEarth/MotherEarth_1.2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
-Refer to the [original model card](https://huggingface.co/MotherEarth/MotherEarth_1.2) for more details on the model.
+# MotherEarth/MotherEarth-1.1-GGUF
+This model was converted to GGUF format from [`MotherEarth/MotherEarth_1.1`](https://huggingface.co/MotherEarth/MotherEarth_1.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+Refer to the [original model card](https://huggingface.co/MotherEarth/MotherEarth_1.1) for more details on the model.
 
 
 NEEDS still a system prompt
@@ -124,12 +124,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo MotherEarth/MotherEarth_1.2-Q4_K_M-GGUF --hf-file motherearth_1.2-q4_k_m.gguf -p "The meaning to life and the universe is"
+llama-cli --hf-repo MotherEarth/MotherEarth_1.1-Q4_K_M-GGUF --hf-file motherearth_1.1-q4_k_m.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo MotherEarth/MotherEarth_1.2-Q4_K_M-GGUF --hf-file motherearth_1.2-q4_k_m.gguf -c 2048
+llama-server --hf-repo MotherEarth/MotherEarth_1.1-Q4_K_M-GGUF --hf-file motherearth_1.1-q4_k_m.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
@@ -146,9 +146,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo MotherEarth/MotherEarth_1.2-Q4_K_M-GGUF --hf-file motherearth_1.2-q4_k_m.gguf -p "The meaning to life and the universe is"
+./llama-cli --hf-repo MotherEarth/MotherEarth_1.1-Q4_K_M-GGUF --hf-file motherearth_1.1-q4_k_m.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo MotherEarth/MotherEarth_1.2-Q4_K_M-GGUF --hf-file motherearth_1.2-q4_k_m.gguf -c 2048
+./llama-server --hf-repo MotherEarth/MotherEarth_1.1-Q4_K_M-GGUF --hf-file motherearth_1.1-q4_k_m.gguf -c 2048
 ```
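
The README changed above notes the model "NEEDS still a system prompt", and `llama-server` exposes an OpenAI-compatible `/v1/chat/completions` endpoint (port 8080 by default). The sketch below shows one way to build and send such a request with a system prompt from Python; the prompt wording and default settings are illustrative assumptions, not taken from the model card.

```python
import json
import urllib.request

# Placeholder system prompt -- an assumption, not wording from the model card.
SYSTEM_PROMPT = "You are MotherEarth, an assistant that answers with proverbs."


def build_chat_request(user_message: str, system_prompt: str = SYSTEM_PROMPT) -> dict:
    """Assemble an OpenAI-style chat completion payload with a system prompt."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }


def send(payload: dict, url: str = "http://localhost:8080/v1/chat/completions") -> dict:
    """POST the payload to a running llama-server instance and return the JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_chat_request("The meaning to life and the universe is")
# send(payload)  # requires llama-server running locally, as started in the diff above
```

The system message rides in front of every user turn, so the server applies it through the model's chat template without any extra flags.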