alexmarques
committed
Commit: c1dce56
Parent(s): fb30d43
Update README.md
README.md CHANGED
@@ -46,7 +46,6 @@ Weight quantization also reduces disk size requirements by approximately 50%.
 Only weights and activations of the linear operators within transformer blocks are quantized.
 Weights are quantized with a symmetric static per-channel scheme, where a fixed linear scaling factor is applied between INT8 and floating point representations for each output channel dimension.
 Activations are quantized with a symmetric dynamic per-token scheme, computing a linear scaling factor at runtime for each token between INT8 and floating point representations.
-Linear scaling factors are computed by minimizing the mean squared error (MSE).
 The [SmoothQuant](https://arxiv.org/abs/2211.10438) algorithm is used to alleviate outliers in the activations, whereas the [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization.
 Both algorithms are implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library.
 GPTQ used a 1% damping factor and 512 sequences taken from Neural Magic's [LLM compression calibration dataset](https://huggingface.co/datasets/neuralmagic/LLM_compression_calibration).
@@ -124,7 +123,6 @@ recipe = [
         scheme="W8A8",
         ignore=["lm_head"],
         dampening_frac=0.01,
-        observer="mse",
     )
 ]
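
For reference, the retained description above (symmetric static per-channel weight scales, symmetric dynamic per-token activation scales) can be made concrete with a minimal PyTorch sketch. This is not the llm-compressor implementation: it uses plain max-abs scales (the MSE-based claim is exactly what this commit removes from the README), and the function names are illustrative only.

```python
import torch

def per_channel_weight_scales(weight: torch.Tensor) -> torch.Tensor:
    # Symmetric static per-channel: one fixed scale per output channel (row),
    # mapping that channel's largest absolute weight onto the INT8 range [-127, 127].
    return weight.abs().amax(dim=1, keepdim=True) / 127.0

def per_token_activation_scales(acts: torch.Tensor) -> torch.Tensor:
    # Symmetric dynamic per-token: one scale per token, computed at runtime
    # from that token's largest absolute activation value.
    return acts.abs().amax(dim=-1, keepdim=True) / 127.0

def fake_quantize(x: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Round to INT8, clamp, and dequantize to inspect the quantization error.
    return torch.clamp(torch.round(x / scale), -127, 127) * scale

weight = torch.randn(4096, 4096)   # [out_channels, in_channels]
acts = torch.randn(8, 512, 4096)   # [batch, tokens, hidden]

w_q = fake_quantize(weight, per_channel_weight_scales(weight))
a_q = fake_quantize(acts, per_token_activation_scales(acts))
print((weight - w_q).abs().max(), (acts - a_q).abs().max())
```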
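The second hunk drops `observer="mse"` from the quantization recipe. As a sketch of where that fragment sits, assuming the usual SmoothQuantModifier + GPTQModifier pattern and current llm-compressor import paths, the post-commit recipe would look roughly like this; `smoothing_strength` and `targets` are assumed values not shown in the diff.

```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier

# Sketch of the post-commit recipe: SmoothQuant smooths activation outliers,
# then GPTQ applies W8A8 quantization to the Linear layers, skipping lm_head.
# smoothing_strength and targets are assumed values, not taken from the diff.
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(
        targets="Linear",
        scheme="W8A8",
        ignore=["lm_head"],
        dampening_frac=0.01,
        # observer="mse" was removed by this commit, leaving the observer
        # at the library's default.
    ),
]
```

In a full run, a recipe like this would be passed to llm-compressor's oneshot entry point together with the model and the 512 calibration sequences mentioned above.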