
Quantization

Quantized using the default exllamav2 quantization script/dataset, with the following changes:

  • Context lengths for the calibration/quantization phases were both forced to 8192, as the script does not respect CLI changes by default and simply uses 512/2048 as context lengths (see the sketch after this list).
  • Fewer rows were used, but, since each row is much longer, ultimately much more data went in.
  • A few rows from an "extra" dataset, containing examples of long, coherent text formatted with this model's chat tokens, were added to the calibration data.
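
For illustration, the changes above amount to something like the following sketch. The file names, row count, and example text are placeholders, and the convert.py flags shown (-c, -b, -l, -ml, -r) are my reading of exllamav2's converter and may differ by version; this is not the exact script or values used for this quant.

```python
# Hypothetical sketch only: file names, row counts, and chat-template contents are
# placeholders, and convert.py flag names may differ between exllamav2 versions.
import subprocess

import pandas as pd

# Default-style calibration rows (the stock exl2 dataset is a parquet with a
# "text" column); the path here is a placeholder.
base = pd.read_parquet("default_calibration.parquet")

# "Extra" rows: long, coherent passages wrapped in this model's chat tokens
# (ChatML-style for Qwen2.5-based models). Contents are placeholders.
extra = pd.DataFrame({"text": [
    "<|im_start|>user\nWrite a long chapter about ...<|im_end|>\n"
    "<|im_start|>assistant\n... a long, coherent passage ...<|im_end|>\n",
]})

pd.concat([base, extra], ignore_index=True).to_parquet("calibration.parquet")

# As noted above, the stock script ignored some of these CLI overrides, so the
# 8192 lengths were ultimately forced inside convert.py itself.
subprocess.run([
    "python", "convert.py",
    "-i", "EVA-Gutenberg3-Qwen2.5-32B",                # fp16 input model
    "-o", "work",                                      # working directory
    "-cf", "EVA-Gutenberg3-Qwen2.5-32B-exl2-4.5bpw",   # compiled output
    "-c", "calibration.parquet",                       # custom calibration data
    "-b", "4.5",                                       # target bits per weight
    "-l", "8192",                                      # calibration row length
    "-ml", "8192",                                     # measurement row length
    "-r", "20",                                        # fewer, longer rows (placeholder)
], check=True)
```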

The goal is less degradation from quantization at long context, but I tried to stay as close to the default exl2 quantization parameters as possible, as straying too far from them only seems to degrade performance.


EVA-Gutenberg3-Qwen2.5-32B

EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2 finetuned on jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, and nbeerbower/gutenberg-moderne-dpo.

Method

ORPO-tuned on 8x A100s for 2 epochs.
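
For illustration, a run of this shape could look roughly like the TRL-based sketch below. The model and dataset names are taken from this card; the hyperparameters, dataset handling, and launch details are my assumptions, not the actual training script.

```python
# Hypothetical TRL-based ORPO sketch; model and dataset names come from the card
# above, but hyperparameters, dataset handling, and trainer arguments are assumptions.
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# The three Gutenberg preference datasets (prompt / chosen / rejected columns);
# assumes their columns line up well enough to concatenate directly.
train_dataset = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train"),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train"),
    load_dataset("nbeerbower/gutenberg-moderne-dpo", split="train"),
])

args = ORPOConfig(
    output_dir="eva-gutenberg3-orpo",
    num_train_epochs=2,              # "2 epochs" per the card; the rest are placeholders
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    beta=0.1,                        # ORPO odds-ratio weight
    bf16=True,
)

# Newer TRL versions take processing_class= instead of tokenizer=.
trainer = ORPOTrainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```

ORPO folds the preference term into the supervised fine-tuning loss, so unlike DPO it needs no frozen reference model; spreading the run across the 8 GPUs would typically go through accelerate/FSDP or DeepSpeed rather than anything ORPO-specific.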

