TheBloke committed · Commit 0216ed5 · Parent: deb2320

Upload README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -171,7 +171,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardMath-7B-V1.0-GGML", model_file="wizardmath-7b-v1.0.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardMath-7B-V1.0-GGUF", model_file="wizardmath-7b-v1.0.q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```
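
For reference, here is the corrected snippet from the updated README restated as a self-contained example. The repo name, model file, model type, and layer count are taken directly from the diff above; adjust `gpu_layers` to suit your hardware (0 if you have no GPU acceleration).

```python
# Install ctransformers first, e.g.:
#   pip install ctransformers>=0.2.24
# (or CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers for Metal, per the README)
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU.
# Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/WizardMath-7B-V1.0-GGUF",
    model_file="wizardmath-7b-v1.0.q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,
)

print(llm("AI is going to"))
```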