Original model in full weights achieves **73.8** HumanEval score. Here are EXL2 quants:

| 4.625 | **70.12** | 2.0401 | 6.7243 | 18.63 |
| 4.8 | **70.73** | 2.0361 | 6.7263 | 19.32 |

## Downloads

If you just do `git clone`, you will get the weights of all the quants, which is probably not what you want. You need to download (and put in the same dir) the following common files:

* [config.json](https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main/config.json)
* [generation_config.json](https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main/generation_config.json)
* [special_tokens_map.json](https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main/special_tokens_map.json)
* [tokenizer.model](https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main/tokenizer.model)
* [tokenizer_config.json](https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main/tokenizer_config.json)

And the weights of a particular quant: all safetensors files + the `model.safetensors.index.json` file from the quant directory.

Either download these files via the Web UI, or, e.g., with curl:
```
mkdir phind-2.55
cd phind-2.55
curl -LO https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main/config.json
curl -LO https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main/generation_config.json
curl -LO https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main/special_tokens_map.json
curl -LO https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main/tokenizer.model
curl -LO https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main/tokenizer_config.json
curl -LO https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main/2.55/model.safetensors.index.json
curl -LO https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main/2.55/output-00001-of-00002.safetensors
curl -LO https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main/2.55/output-00002-of-00002.safetensors
```
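
The per-file `curl` calls can also be generated with a small loop instead of being typed out by hand. A minimal sketch (not part of this repo) for the same 2.55-bpw quant, saved as a hypothetical `make-urls.sh`; pipe its output to `xargs -n1 curl -LO` to fetch everything:

```
# Sketch: build the download list for the 2.55-bpw quant and print one
# URL per line; fetch with:  sh make-urls.sh | xargs -n1 curl -LO
repo=https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2/raw/main

urls=""
# Common files shared by all quants
for f in config.json generation_config.json special_tokens_map.json \
         tokenizer.model tokenizer_config.json; do
  urls="$urls $repo/$f"
done
# Files of the chosen quant directory (2.55 here)
for f in model.safetensors.index.json \
         output-00001-of-00002.safetensors output-00002-of-00002.safetensors; do
  urls="$urls $repo/2.55/$f"
done

printf '%s\n' $urls
```

Swap `2.55` for another quant directory to fetch a different bitrate.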

## Datasets used for calibration and PPL measurement

* [Calibration](https://huggingface.co/datasets/rombodawg/2XUNCENSORED_MegaCodeTraining188k)