What calibration data do you use?
Thanks for your work! I think Phind-Codellama-34B-v2 is the best local model ever :)
Could you tell me what calibration data you use?
Yes, so far it's the best model for coding and general IT questions I've seen.
I'm using wikitext v2 raw, particularly this file: https://huggingface.co/datasets/wikitext/blob/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/test/0000.parquet
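In case anyone wants a quick look at what's actually in that file, here is a minimal sketch (assuming pandas and pyarrow are installed and the linked parquet has been downloaded locally as `0000.parquet`):

```python
import pandas as pd

# Peek at the wikitext-2-raw-v1 test split parquet used for calibration.
df = pd.read_parquet("0000.parquet")

print(len(df))                 # number of rows available for calibration
print(df.columns.tolist())     # the split has a single "text" column
print(df.head())               # preview of the first few rows
```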
Have you considered using a code dataset like The Stack (deduped)?
@acrastt I wouldn't expect huge differences from that. Only 100 rows of the dataset are used during conversion with the default settings of the exllamav2 convert script anyway. And I'd imagine the best calibration dataset for conversion would be the validation split of the dataset the model was fine-tuned on. Unfortunately, Phind's dataset is not open-sourced.
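To get a rough sense of how much text that is, here is a sketch that tokenizes the first 100 rows of the wikitext parquet with the model's tokenizer (assumes transformers and pandas are installed; this is only a ballpark figure, not a reimplementation of the convert script's own row sampling and truncation):

```python
import pandas as pd
from transformers import AutoTokenizer

# Rough estimate only: count tokens in the first 100 rows of the calibration parquet.
tok = AutoTokenizer.from_pretrained("Phind/Phind-CodeLlama-34B-v2")
rows = pd.read_parquet("0000.parquet")["text"].head(100)

total = sum(len(tok.encode(t)) for t in rows if t.strip())
print(f"~{total} tokens in the first 100 rows")
```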
Since it was fine-tuned for coding, I'd stick to a coding dataset to minimize the loss of coding abilities rather than general abilities.
@acrastt Yep, that does make sense, but initially I didn't think the calibration dataset would matter much. It seems, though, that the 5-bpw-evol quant is better than the one based on wikitext (https://huggingface.co/latimar/Phind-Codellama-34B-v2-exl2/discussions/4), so I'm going to make some more quants using different datasets...
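For the code-dataset idea, something along these lines could build a small calibration parquet from The Stack (deduped) without downloading the whole thing. The dataset id, the `data_dir` layout, and the `content` column name are assumptions based on how the bigcode datasets are usually laid out, so double-check them before use:

```python
from datasets import load_dataset
import pandas as pd

# Stream a small slice instead of downloading the full dataset.
stack = load_dataset(
    "bigcode/the-stack-dedup",   # assumed dataset id
    data_dir="data/python",      # assumed per-language subdirectory
    split="train",
    streaming=True,
)

rows = []
for sample in stack:
    rows.append({"text": sample["content"]})  # assumed text column name
    if len(rows) >= 100:  # matches the converter's default row count
        break

pd.DataFrame(rows).to_parquet("the-stack-dedup-python-calibration.parquet")
```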
@acrastt Take a look at the new quants I made using the megacode dataset; they seem to be much better: https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2
Can you share a calibration dataset based on megacode?
@Mandarin It's actually linked in the repo's README, and it has been there all along -- https://huggingface.co/latimar/Phind-Codellama-34B-v2-megacode-exl2#datasets-used-for-calibration-and-ppl-measurement