JustinLin610 committed on
Commit 9fca4ce
1 Parent(s): fe013ca

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -12,7 +12,7 @@ license: apache-2.0
 
 ## Introduction
 
-Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model (57B-A14B). This repo contains the instruction-tuned 72B Qwen2 model.
+Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model (57B-A14B).
 
 Compared with the state-of-the-art opensource language models, including the previous released Qwen1.5, Qwen2 has generally surpassed most opensource models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting for language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
 
@@ -37,7 +37,7 @@ In the following demonstration, we assume that you are running commands under th
 ## How to use
 Cloning the repo may be inefficient, and thus you can manually download the GGUF file that you need or use `huggingface-cli` (`pip install huggingface_hub`) as shown below:
 ```shell
-huggingface-cli download Qwen/Qwen2-72B-Instruct-GGUF qwen2-57b-a14b-instruct-q4_0.gguf --local-dir . --local-dir-use-symlinks False
+huggingface-cli download Qwen/Qwen2-57B-A14B-Instruct-GGUF qwen2-57b-a14b-instruct-q4_0.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 However, for large files, we split them into multiple segments due to the limitation of 50G for a single file to be uploaded.
@@ -48,7 +48,7 @@ qwen2-57b-a14b-instruct-q8_0-00001-of-00002.gguf
 qwen2-57b-a14b-instruct-q8_0-00002-of-00002.gguf
 ```
 
-They share the prefix of `qwen2-72b-instruct-q5_k_m`, but have their own suffix for indexing respectively, say `-00001-of-00002`.
+They share the prefix of `qwen2-57b-a14b-instruct-q5_k_m`, but have their own suffix for indexing respectively, say `-00001-of-00002`.
 To use the split GGUF files, you need to merge them first with the command `llama-gguf-split` as shown below:
 
 ```bash
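 # A minimal sketch of the merge step (not shown in this truncated diff),
 # assuming the q8_0 split names listed above; llama.cpp's llama-gguf-split
 # accepts --merge with the first split and an output path:
 llama-gguf-split --merge qwen2-57b-a14b-instruct-q8_0-00001-of-00002.gguf qwen2-57b-a14b-instruct-q8_0.gguf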