Tags: Text Generation · GGUF · Indonesian · English
Ichsan2895 committed
Commit 728e108
1 Parent(s): 2e54d9c

Update README.md

Files changed (1):
  1. README.md +4 -4
README.md CHANGED
@@ -82,7 +82,7 @@ pip3 install huggingface-hub
 Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
 ```shell
-huggingface-cli download Ichsan2895/Merak-7B-v3-GGUF Merak-7B-v3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+huggingface-cli download Ichsan2895/Merak-7B-v3-GGUF Merak-7B-v3-model-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 <details>
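
For reference (not part of this diff), the same download can be scripted from Python with `huggingface_hub`'s `hf_hub_download`; a minimal sketch, assuming the renamed `Merak-7B-v3-model-q5_k_m.gguf` filename from this commit:

```python
# Minimal sketch (not from the diff): Python equivalent of the
# huggingface-cli download command above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Ichsan2895/Merak-7B-v3-GGUF",
    filename="Merak-7B-v3-model-q5_k_m.gguf",  # filename as renamed in this commit
    local_dir=".",
)
print(path)  # local path of the downloaded GGUF file
```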
@@ -105,7 +105,7 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download Ichsan2895/Merak-7B-v3-GGUF Merak-7B-v3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download Ichsan2895/Merak-7B-v3-GGUF Merak-7B-v3-model-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
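
The same acceleration step can be scripted cross-platform; a minimal Python sketch, assuming `pip3 install hf_transfer` has already been run. The variable must be set before `huggingface_hub` is imported, since the library reads it at import time:

```python
# Minimal sketch (not from the diff): enabling hf_transfer from Python
# instead of the shell.
import os

# Set before importing huggingface_hub, which reads it at import time.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="Ichsan2895/Merak-7B-v3-GGUF",
    filename="Merak-7B-v3-model-q5_k_m.gguf",
    local_dir=".",
)
```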
@@ -118,7 +118,7 @@ Windows Command Line users: You can set the environment variable by running `set
 Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 32 -m Merak-7B-v3.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
+./main -ngl 32 -m Merak-7B-v3-model-q5_k_m.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
 ```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
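
The `{system_message}` and `{prompt}` placeholders in the `./main` command follow the ChatML template. A hypothetical sketch of filling them and launching the binary from Python; the system and user strings are illustrative assumptions, and real newlines replace the one-liner's literal `\n` escapes:

```python
# Hypothetical sketch (not from the diff): filling the ChatML placeholders
# of the ./main command above and invoking it via subprocess.
import subprocess

# Illustrative assumptions, not values from the source.
system_message = "You are a helpful assistant."
prompt = "Siapa presiden pertama Indonesia?"  # "Who was Indonesia's first president?"

chatml = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant"
)

subprocess.run([
    "./main", "-ngl", "32", "-m", "Merak-7B-v3-model-q5_k_m.gguf",
    "--color", "-c", "2048", "--temp", "0.7", "--repeat_penalty", "1.1",
    "-n", "-1", "-p", chatml,
])
```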
@@ -160,7 +160,7 @@ CT_METAL=1 pip install ctransformers --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("Ichsan2895/Merak-7B-v3-GGUF", model_file="Merak-7B-v3-model-q4_k_m.gguf", model_type="mistral", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("Ichsan2895/Merak-7B-v3-GGUF", model_file="Merak-7B-v3-model-q5_k_m.gguf", model_type="mistral", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```
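
A possible follow-up to the `ctransformers` snippet (not part of the diff): wrapping a question in the same ChatML template the llama.cpp example uses. The prompt strings and generation parameters below are assumptions:

```python
# Usage sketch (not from the diff): prompting the ctransformers model with
# the ChatML template; strings and parameters are illustrative assumptions.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "Ichsan2895/Merak-7B-v3-GGUF",
    model_file="Merak-7B-v3-model-q5_k_m.gguf",
    model_type="mistral",
    gpu_layers=50,  # set to 0 if no GPU acceleration is available
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSiapa ibu kota Indonesia?<|im_end|>\n"  # "What is Indonesia's capital?"
    "<|im_start|>assistant"
)
print(llm(prompt, max_new_tokens=256, temperature=0.7))
```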
 