INFO:nncf:Not adding activation input quantizer for operation: 7 BertForSequenceClassification/BertModel[bert]/BertEmbeddings[embeddings]/NNCFEmbedding[position_embeddings]/embedding_0
INFO:nncf:Not adding activation input quantizer for operation: 4 BertForSequenceClassification/BertModel[bert]/BertEmbeddings[embeddings]/NNCFEmbedding[word_embeddings]/embedding_0
INFO:nncf:Not adding activation input quantizer for operation: 5 BertForSequenceClassification/BertModel[bert]/BertEmbeddings[embeddings]/NNCFEmbedding[token_type_embeddings]/embedding_0
INFO:nncf:Not adding activation input quantizer for operation: 6 BertForSequenceClassification/BertModel[bert]/BertEmbeddings[embeddings]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 8 BertForSequenceClassification/BertModel[bert]/BertEmbeddings[embeddings]/__iadd___0
INFO:nncf:Not adding activation input quantizer for operation: 9 BertForSequenceClassification/BertModel[bert]/BertEmbeddings[embeddings]/NNCFLayerNorm[LayerNorm]/layer_norm_0
INFO:nncf:Not adding activation input quantizer for operation: 10 BertForSequenceClassification/BertModel[bert]/BertEmbeddings[embeddings]/Dropout[dropout]/dropout_0
INFO:nncf:Not adding activation input quantizer for operation: 23 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[0]/BertAttention[attention]/BertSelfAttention[self]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 26 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[0]/BertAttention[attention]/BertSelfAttention[self]/matmul_1
INFO:nncf:Not adding activation input quantizer for operation: 32 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[0]/BertAttention[attention]/BertSelfOutput[output]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 33 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[0]/BertAttention[attention]/BertSelfOutput[output]/NNCFLayerNorm[LayerNorm]/layer_norm_0
INFO:nncf:Not adding activation input quantizer for operation: 38 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[0]/BertOutput[output]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 39 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[0]/BertOutput[output]/NNCFLayerNorm[LayerNorm]/layer_norm_0
INFO:nncf:Not adding activation input quantizer for operation: 52 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[1]/BertAttention[attention]/BertSelfAttention[self]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 55 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[1]/BertAttention[attention]/BertSelfAttention[self]/matmul_1
INFO:nncf:Not adding activation input quantizer for operation: 61 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[1]/BertAttention[attention]/BertSelfOutput[output]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 62 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[1]/BertAttention[attention]/BertSelfOutput[output]/NNCFLayerNorm[LayerNorm]/layer_norm_0
INFO:nncf:Not adding activation input quantizer for operation: 67 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[1]/BertOutput[output]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 68 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[1]/BertOutput[output]/NNCFLayerNorm[LayerNorm]/layer_norm_0
INFO:nncf:Not adding activation input quantizer for operation: 81 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[2]/BertAttention[attention]/BertSelfAttention[self]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 84 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[2]/BertAttention[attention]/BertSelfAttention[self]/matmul_1
INFO:nncf:Not adding activation input quantizer for operation: 90 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[2]/BertAttention[attention]/BertSelfOutput[output]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 91 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[2]/BertAttention[attention]/BertSelfOutput[output]/NNCFLayerNorm[LayerNorm]/layer_norm_0
INFO:nncf:Not adding activation input quantizer for operation: 96 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[2]/BertOutput[output]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 97 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[2]/BertOutput[output]/NNCFLayerNorm[LayerNorm]/layer_norm_0
INFO:nncf:Not adding activation input quantizer for operation: 110 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[3]/BertAttention[attention]/BertSelfAttention[self]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 113 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[3]/BertAttention[attention]/BertSelfAttention[self]/matmul_1
INFO:nncf:Not adding activation input quantizer for operation: 119 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[3]/BertAttention[attention]/BertSelfOutput[output]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 120 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[3]/BertAttention[attention]/BertSelfOutput[output]/NNCFLayerNorm[LayerNorm]/layer_norm_0
INFO:nncf:Not adding activation input quantizer for operation: 125 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[3]/BertOutput[output]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 126 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[3]/BertOutput[output]/NNCFLayerNorm[LayerNorm]/layer_norm_0
INFO:nncf:Not adding activation input quantizer for operation: 139 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[4]/BertAttention[attention]/BertSelfAttention[self]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 142 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[4]/BertAttention[attention]/BertSelfAttention[self]/matmul_1
INFO:nncf:Not adding activation input quantizer for operation: 148 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[4]/BertAttention[attention]/BertSelfOutput[output]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 149 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[4]/BertAttention[attention]/BertSelfOutput[output]/NNCFLayerNorm[LayerNorm]/layer_norm_0
INFO:nncf:Not adding activation input quantizer for operation: 154 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[4]/BertOutput[output]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 155 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[4]/BertOutput[output]/NNCFLayerNorm[LayerNorm]/layer_norm_0
INFO:nncf:Not adding activation input quantizer for operation: 168 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[5]/BertAttention[attention]/BertSelfAttention[self]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 171 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[5]/BertAttention[attention]/BertSelfAttention[self]/matmul_1
INFO:nncf:Not adding activation input quantizer for operation: 177 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[5]/BertAttention[attention]/BertSelfOutput[output]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 178 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[5]/BertAttention[attention]/BertSelfOutput[output]/NNCFLayerNorm[LayerNorm]/layer_norm_0
INFO:nncf:Not adding activation input quantizer for operation: 183 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[5]/BertOutput[output]/__add___0
INFO:nncf:Not adding activation input quantizer for operation: 184 BertForSequenceClassification/BertModel[bert]/BertEncoder[encoder]/ModuleList[layer]/BertLayer[5]/BertOutput[output]/NNCFLayerNorm[LayerNorm]/layer_norm_0
|
INFO:nncf:Collecting tensor statistics |█               | 4 / 38
INFO:nncf:Collecting tensor statistics |███             | 8 / 38
INFO:nncf:Collecting tensor statistics |█████           | 12 / 38
INFO:nncf:Collecting tensor statistics |██████          | 16 / 38
INFO:nncf:Collecting tensor statistics |████████        | 20 / 38
INFO:nncf:Collecting tensor statistics |██████████      | 24 / 38
INFO:nncf:Collecting tensor statistics |███████████     | 28 / 38
INFO:nncf:Collecting tensor statistics |█████████████   | 32 / 38
INFO:nncf:Collecting tensor statistics |███████████████ | 36 / 38
INFO:nncf:Collecting tensor statistics |████████████████| 38 / 38
|
INFO:nncf:Compiling and loading torch extension: quantized_functions_cuda...
INFO:nncf:Finished loading torch extension: quantized_functions_cuda
|
WARNING:nncf:You are setting `forward` on an NNCF-processed model object.
NNCF relies on custom-wrapping the `forward` call in order to function properly.
Arbitrary adjustments to the forward function on an NNCFNetwork object have undefined behavior.
If you need to replace the underlying forward function of the original model so that NNCF should be using that instead of the original forward function that NNCF saved during the compressed model creation, you can do this by calling:
model.nncf.set_original_unbound_forward(fn)
if `fn` has an unbound 0-th `self` argument, or
with model.nncf.temporary_bound_original_forward(fn): ...
if `fn` already had 0-th `self` argument bound or never had it in the first place.
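The warning distinguishes a replacement `fn` with an *unbound* 0-th `self` argument from one already *bound* to an instance. That distinction is plain Python, so it can be illustrated without NNCF; `Model` and `replacement_forward` below are hypothetical stand-ins, not NNCF APIs:

```python
import types

class Model:
    """Hypothetical stand-in for the original (non-NNCF) model class."""
    def forward(self, x):
        return x + 1

# An *unbound* function: its 0-th argument is `self`, not yet tied to any
# instance.  This is the shape the warning says to pass to
# model.nncf.set_original_unbound_forward(fn).
def replacement_forward(self, x):
    return x * 2

m = Model()

# A *bound* method: `self` is already fixed to one instance, so the callable
# takes only the remaining arguments -- the shape expected by
# model.nncf.temporary_bound_original_forward(fn).
bound_forward = types.MethodType(replacement_forward, m)

print(replacement_forward(m, 10))  # unbound: instance passed explicitly -> 20
print(bound_forward(10))           # bound: instance already captured   -> 20
```

In short, the warning says not to assign `model.forward = fn` directly, because NNCF's wrapper around `forward` would be bypassed; the two `model.nncf` helpers are the supported paths for each calling convention.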
|
|
INFO:nncf:Statistics of the quantization algorithm:

Epoch 0 |+--------------------------------+-------+
Epoch 0 || Statistic's name               | Value |
Epoch 0 |+================================+=======+
Epoch 0 || Ratio of enabled quantizations | 100   |
Epoch 0 |+--------------------------------+-------+
Epoch 0 |
Epoch 0 |Statistics of the quantization share:
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 || Statistic's name                 | Value              |
Epoch 0 |+==================================+====================+
Epoch 0 || Symmetric WQs / All placed WQs   | 100.00 % (38 / 38) |
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 || Asymmetric WQs / All placed WQs  | 0.00 % (0 / 38)    |
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 || Signed WQs / All placed WQs      | 100.00 % (38 / 38) |
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 || Unsigned WQs / All placed WQs    | 0.00 % (0 / 38)    |
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 || Per-tensor WQs / All placed WQs  | 0.00 % (0 / 38)    |
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 || Per-channel WQs / All placed WQs | 100.00 % (38 / 38) |
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 || Placed WQs / Potential WQs       | 70.37 % (38 / 54)  |
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 || Symmetric AQs / All placed AQs   | 24.00 % (12 / 50)  |
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 || Asymmetric AQs / All placed AQs  | 76.00 % (38 / 50)  |
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 || Signed AQs / All placed AQs      | 100.00 % (50 / 50) |
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 || Unsigned AQs / All placed AQs    | 0.00 % (0 / 50)    |
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 || Per-tensor AQs / All placed AQs  | 100.00 % (50 / 50) |
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 || Per-channel AQs / All placed AQs | 0.00 % (0 / 50)    |
Epoch 0 |+----------------------------------+--------------------+
Epoch 0 |
Epoch 0 |Statistics of the bitwidth distribution:
Epoch 0 |+--------------+---------------------+--------------------+--------------------+
Epoch 0 || Num bits (N) | N-bits WQs / Placed | N-bits AQs /       | N-bits Qs / Placed |
Epoch 0 ||              | WQs                 | Placed AQs         | Qs                 |
Epoch 0 |+==============+=====================+====================+====================+
Epoch 0 || 8            | 100.00 % (38 / 38)  | 100.00 % (50 / 50) | 100.00 % (88 / 88) |
Epoch 0 |+--------------+---------------------+--------------------+--------------------+
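The share statistics above contrast symmetric and asymmetric quantizers: here all 38 weight quantizers (WQs) are symmetric and per-channel, while most of the 50 activation quantizers (AQs) are asymmetric and all are per-tensor. As a rough numeric sketch of that distinction (not NNCF's actual implementation), 8-bit symmetric quantization fixes the zero-point at 0, while asymmetric quantization adds a zero-point so an arbitrary [min, max] range maps onto [0, 255]:

```python
def quantize_symmetric(values, num_bits=8):
    """Symmetric signed quantization: the zero-point is fixed at 0 and the
    scale is taken from the maximum absolute value, so the integer range
    is centered on zero."""
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for 8 bits
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) for v in values], scale

def quantize_asymmetric(values, num_bits=8):
    """Asymmetric quantization: a zero-point shifts the range so that an
    arbitrary [min, max] interval maps onto [0, 2**num_bits - 1]."""
    qmax = 2 ** num_bits - 1                       # 255 for 8 bits
    lo, hi = min(values), max(values)
    scale = (hi - lo) / qmax
    zero_point = round(-lo / scale)
    return [round(v / scale) + zero_point for v in values], scale, zero_point

vals = [-1.0, -0.25, 0.0, 0.5]
print(quantize_symmetric(vals))   # real 0.0 maps to integer 0
print(quantize_asymmetric(vals))  # min maps to 0, max to 255
```

Per-channel vs per-tensor is orthogonal to this: per-channel means a separate `scale` (and zero-point) per output channel of a weight tensor, per-tensor means one `scale` for the whole tensor.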
|
|