apepkuss79 committed
Commit a4e2b7d · verified · Parent(s): 1ba2888

Update README.md

Files changed (1): README.md (+4 -4)
README.md CHANGED

````diff
@@ -34,7 +34,7 @@ tags:
 
 - LlamaEdge version: coming soon
 
-<!-- - LlamaEdge version: [v0.12.1](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.1) and above
+<!-- - LlamaEdge version: [v0.12.1](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.12.1) and above -->
 
 - Prompt template
 
@@ -47,11 +47,11 @@ tags:
 {system_message}<|user|>
 {user_message_1}<|assistant|>
 {assistant_message_1}
-``` -->
+```
 
 - Context size: `128000`
 
-<!-- - Run as LlamaEdge service
+- Run as LlamaEdge service
 
 ```bash
 wasmedge --dir .:. --nn-preload default:GGML:AUTO:glm-4-9b-chat-Q5_K_M.gguf \
@@ -69,7 +69,7 @@ tags:
 llama-chat.wasm \
 --prompt-template glm-4-chat \
 --ctx-size 128000
-``` -->
+```
 
 ## Quantized GGUF Models
 
````
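For context, the prompt-template lines that this change uncomments can be filled in with a small helper. This is only a sketch: it uses just the template fragment visible in this hunk (`{system_message}<|user|>` … `{assistant_message_1}`); any special tokens that appear earlier in the README's full template are not shown in the diff and are deliberately omitted here. The helper name `build_prompt` is hypothetical.

```python
# Sketch: fill the GLM-4 chat template fragment visible in this diff hunk.
# Only the portion shown in the hunk is reproduced; tokens from earlier
# README lines (not included in this diff) are intentionally left out.
TEMPLATE = "{system_message}<|user|>\n{user_message}<|assistant|>\n"

def build_prompt(system_message: str, user_message: str) -> str:
    """Format one user turn; the model's reply would follow <|assistant|>."""
    return TEMPLATE.format(
        system_message=system_message,
        user_message=user_message,
    )

prompt = build_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```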