The Quantized CohereForAI/c4ai-command-r-v01 Model

Original Base Model: CohereForAI/c4ai-command-r-v01.
Link: https://huggingface.co/CohereForAI/c4ai-command-r-v01

Quantization Configuration

"quantization_config": {
    "bits": 4,
    "checkpoint_format": "gptq",
    "desc_act": true,
    "dynamic": null,
    "group_size": 128,
    "lm_head": false,
    "meta": {
      "damp_auto_increment": 0.0025,
      "damp_percent": 0.01,
      "mse": 0.0,
      "quantizer": [
        "gptqmodel:1.4.5"
      ],
      "static_groups": false,
      "true_sequential": true,
      "uri": "https://github.com/modelcloud/gptqmodel"
    },
    "quant_method": "gptq",
    "sym": true
  },
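The configuration above describes 4-bit GPTQ quantization with a group size of 128 and symmetric quantization (`sym: true`), with the `lm_head` left unquantized. As a rough illustration of what that buys, the sketch below estimates the effective storage cost per weight. The per-group metadata sizes (one fp16 scale plus one packed 4-bit zero-point per group) are assumptions about the GPTQ checkpoint format, not figures stated in this card.

```python
# Back-of-the-envelope size estimate for the 4-bit GPTQ config above.
# Assumption (not stated in the card): each group of 128 weights stores
# one fp16 scale (16 bits) and one packed 4-bit zero-point.

def gptq_bits_per_weight(bits: int = 4, group_size: int = 128,
                         scale_bits: int = 16, zero_bits: int = 4) -> float:
    """Effective storage cost per weight, including per-group metadata."""
    return bits + (scale_bits + zero_bits) / group_size

bpw = gptq_bits_per_weight()
print(f"{bpw:.4f} bits/weight")  # ~4.16 bits vs. 16 bits for BF16

# Rough quantized-weight footprint for an 8.6B-parameter model.
# Real files differ somewhat: per the config, lm_head stays unquantized.
approx_gib = 8.6e9 * bpw / 8 / 2**30
print(f"~{approx_gib:.1f} GiB of quantized weights")
```

With these assumptions, 4-bit group-wise quantization costs about 4.16 bits per weight, roughly a 4x reduction from the BF16 base model.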

Source Code

The quantization code is available at https://github.com/vkola-lab/medpodgpt/tree/main/quantization.
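Checkpoints carrying a `quantization_config` like the one above can typically be loaded through Hugging Face transformers (with a GPTQ backend such as gptqmodel installed), which picks up the quantization settings automatically. The helper below is an illustrative sketch, not code from the repository above; the model ID in the usage comment is a placeholder for this quantized checkpoint.

```python
# Hypothetical loading sketch (illustrative, not from the linked repo).
# Requires: pip install transformers, plus a GPTQ backend (e.g. gptqmodel).

def load_quantized(model_id: str, device_map: str = "auto"):
    """Load a GPTQ-quantized causal LM and its tokenizer.

    transformers reads the checkpoint's `quantization_config` and
    dispatches to the installed GPTQ kernels automatically.
    """
    # Deferred import so the helper can be defined without the heavy deps.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device_map)
    return tokenizer, model

# Usage (downloads several GiB of weights; placeholder model ID):
#   tokenizer, model = load_quantized("your-org/your-quantized-model")
```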

Model size: 8.6B params (Safetensors)
Tensor types: BF16, I32, FP16