---
base_model: https://huggingface.co/Phind/Phind-CodeLlama-34B-v2
inference: false
license: llama2
model_creator: https://huggingface.co/Phind
model_name: Phind-CodeLlama-34B-v2
model_type: llama
quantized_by: latimar
---

# Phind-CodeLlama-34B-v2 EXL2

Weights of Phind-CodeLlama-34B-v2 converted to EXL2 format.

Converted with the ExLlamaV2 `convert.py` script (exllamav2 commit).

| BPW (hb=8) | Human-Eval | Evol-Ins PPL | Wiki PPL | File Size (GB) |
|------------|------------|--------------|----------|----------------|
| 2.55       | 0.402439   | 2.0944       | 18.9843  | 10.62          |
| 2.8        | 0.634146   | 2.0814       | 17.6326  | 11.58          |
| 3.0        | 0.664634   | 2.0600       | 11.2096  | 12.36          |
| 4.625      | 0.701219   | 2.0401       | 6.7243   | 18.63          |
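
A minimal sketch of loading one of these quants for inference with the exllamav2 Python API, based on the upstream examples (the exact calls may differ between exllamav2 versions, and the model path is a placeholder):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/Phind-CodeLlama-34B-v2-2.8bpw-exl2"  # placeholder path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)              # stream layers onto the available GPU(s)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.6
settings.top_p = 0.9

print(generator.generate_simple("def fibonacci(n: int) -> int:", settings, 200))
```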

## Datasets used for calibration and PPL measurement

## Conversion

Conversion arguments:

```sh
convert.py -i ${MODEL_DIR_FP16} -o ${WIP_DIR} -cf ${MODEL_DIR_EXL} -c ${CALIBRATION_DATASET} -r 200 -mr 32 -l 4096 -ml 4096 -hb 8 -b ${BPW}
```

The 2.55 quant was converted using even more rows: `-r 400 -mr 64`.
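
For illustration, a hypothetical driver that runs the same `convert.py` invocation for each target bitrate, including the larger row counts used for the 2.55 quant. All paths are placeholders, not values from this card:

```python
# Hypothetical helper, not part of exllamav2: runs convert.py once per target bitrate.
import subprocess

MODEL_DIR_FP16 = "/models/Phind-CodeLlama-34B-v2"   # original FP16 weights (placeholder)
CALIBRATION_DATASET = "/data/calibration.parquet"   # calibration set (placeholder)

for bpw in ("2.55", "2.8", "3.0", "4.625"):
    # The 2.55 quant used more calibration rows (-r 400 -mr 64).
    rows, mrows = ("400", "64") if bpw == "2.55" else ("200", "32")
    subprocess.run(
        [
            "python", "convert.py",
            "-i", MODEL_DIR_FP16,                                     # input FP16 model dir
            "-o", f"/tmp/exl2-wip-{bpw}",                             # working directory
            "-cf", f"/models/Phind-CodeLlama-34B-v2-{bpw}bpw-exl2",   # compiled output dir
            "-c", CALIBRATION_DATASET,                                # calibration dataset
            "-r", rows, "-mr", mrows,                                 # calibration / measurement rows
            "-l", "4096", "-ml", "4096",                              # calibration / measurement length
            "-hb", "8",                                               # 8-bit head
            "-b", bpw,                                                # target bits per weight
        ],
        check=True,
    )
```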

## Perplexity

Perplexity was measured with the `test_inference.py` script:

```sh
test_inference.py -m ${MODEL_DIR_EXL} -ed ${PPL_DATASET}
```
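
The numbers in the table follow the standard definition of perplexity: the exponential of the mean negative log-likelihood over the evaluation tokens. A small illustrative sketch of that definition (not the actual `test_inference.py` implementation):

```python
import torch
import torch.nn.functional as F

def perplexity(logits: torch.Tensor, target_ids: torch.Tensor) -> float:
    """Perplexity = exp(mean cross-entropy) over a token sequence.

    logits:     [seq_len, vocab_size] next-token logits
    target_ids: [seq_len] ground-truth token ids
    """
    nll = F.cross_entropy(logits, target_ids, reduction="mean")
    return torch.exp(nll).item()
```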

## Human-Eval

As a point of reference, Phind reports that the original model achieves a 73.8 Human-Eval score.

Samples for the Human-Eval scores of the EXL2 quants were generated with the `exl2.human-eval.py` script:

```sh
python exl2.human-eval.py -m ${MODEL_DIR_EXL2} -c 4096 -o ${BPW}-samples.jsonl
```

Unfortunately, the FP16/INT8 weights of this model do not fit on my RTX 4090, but FP16 quantized to NF4 does, so I generated samples with the `tf.human-eval.py` script:

```sh
python tf.human-eval.py -m ${MODEL_DIR_FP16} -o nf4-samples.jsonl
```

The NF4 variant yields a 0.70731707 Human-Eval score.
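
For context, the NF4 reference run corresponds to loading the FP16 weights with on-the-fly 4-bit NF4 quantization through bitsandbytes; a sketch of that kind of setup (an assumption about the setup, not the actual `tf.human-eval.py` code):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization config; compute is done in FP16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_dir = "Phind/Phind-CodeLlama-34B-v2"  # or a local FP16 checkout
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    quantization_config=bnb_config,
    device_map="auto",
)
```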

As another reference point, the EXL2 3.2-bpw quant of this model by firelzrd yields a 0.609756 Human-Eval score.
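
The Human-Eval scores quoted here can be computed from the generated `*-samples.jsonl` files with the reference harness; a sketch assuming OpenAI's `human-eval` package is installed:

```python
# Scores a samples file against the Human-Eval test cases and returns pass@k.
# The file name follows the -o ${BPW}-samples.jsonl pattern used above.
from human_eval.evaluation import evaluate_functional_correctness

results = evaluate_functional_correctness("2.8-samples.jsonl", k=[1])
print(results)  # e.g. {'pass@1': 0.634146}
```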