SeaLLM-7B-v2.5-vi-pubmed-GPTQ / quantize_config.json
AutoGPTQ quantization of SeaLLMs/SeaLLM-7B-v2.5: 4 bits, group size 128, desc_act=False
{
"bits": 4,
"group_size": 128,
"damp_percent": 0.01,
"desc_act": false,
"static_groups": false,
"sym": false,
"true_sequential": true,
"model_name_or_path": "SeaLLM-7B-v2.5-vi-pubmed-GPTQ",
"model_file_base_name": "gptq_model-4bit-128g",
"quant_method": "gptq",
"checkpoint_format": "gptq"
}
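
The settings above can be mirrored in code. Below is a minimal sketch using the auto_gptq package: the BaseQuantizeConfig reproduces this file's parameters (as would be passed when producing such a checkpoint), while from_quantized loads the already-quantized weights and reads quantize_config.json on its own. The local directory name and the "cuda:0" device are assumptions for illustration.

from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

# Mirror of quantize_config.json: 4-bit weights, group size 128, no act-order,
# asymmetric quantization, true-sequential layer processing.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    damp_percent=0.01,
    desc_act=False,
    static_groups=False,
    sym=False,
    true_sequential=True,
)

# Load the already-quantized checkpoint; from_quantized picks up
# quantize_config.json automatically. Directory name assumed local.
model_dir = "SeaLLM-7B-v2.5-vi-pubmed-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoGPTQForCausalLM.from_quantized(model_dir, device="cuda:0")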