Update README.md
README.md
@@ -9,7 +9,7 @@ Llama.cpp command-r pre-tokenizer gguf fixed
 main: build = 2789 (84250014)
 main: built with gcc (Ubuntu 13.2.0-4ubuntu3) 13.2.0 for x86_64-linux-gnu
 main: quantizing '/gguf/c4ai-commandr-v01_a.gguf' to '/gguf/c4ai-command-r-v01-Q5_K_M.gguf' as Q5_K_M
-llama_model_loader: loaded meta data with 26 key-value pairs and 322 tensors from
+llama_model_loader: loaded meta data with 26 key-value pairs and 322 tensors from c4ai-commandr-v01_a.gguf (version GGUF V3 (latest))
 llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
 llama_model_loader: - kv 0: general.architecture str = command-r
 llama_model_loader: - kv 1: command-r.block_count u32 = 40
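The log above comes from llama.cpp's GGUF quantization step. A minimal sketch of the invocation that produces output like this, assuming the `quantize` tool shipped with llama.cpp builds of that era and the paths shown in the log:

```bash
# Hypothetical invocation reproducing the log above (paths taken from it).
# Builds around b2789 ship the tool as `quantize`; newer llama.cpp releases
# rename it to `llama-quantize`.
./quantize /gguf/c4ai-commandr-v01_a.gguf /gguf/c4ai-command-r-v01-Q5_K_M.gguf Q5_K_M
```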