Model release
- .gitattributes +1 -0
- README.md +27 -0
- all_results.json +3 -0
- config.json +3 -0
- eval_nbest_predictions.json +3 -0
- eval_predictions.json +3 -0
- eval_results.json +3 -0
- pytorch_model.bin +3 -0
- recipe.yaml +8 -0
- special_tokens_map.json +3 -0
- tokenizer.json +3 -0
- tokenizer_config.json +3 -0
- train_results.json +3 -0
- trainer_state.json +3 -0
- training_args.bin +3 -0
- vocab.txt +0 -0
.gitattributes
CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zstandard filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.json filter=lfs diff=lfs merge=lfs -text
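The new `*.json` pattern routes the JSON artifacts added in this commit through Git LFS; it is exactly the attribute line that `git lfs track "*.json"` appends to `.gitattributes`.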
README.md
ADDED
@@ -0,0 +1,27 @@
+# oBERT-12-downstream-dense-QAT-squadv1
+
+This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
+
+It corresponds to the model presented in `Table 3 - 12 Layers - 0% Sparsity - QAT`, and it represents an upper bound on the performance of the corresponding pruned and quantized models:
+- 80% unstructured QAT: `neuralmagic/oBERT-12-downstream-pruned-unstructured-80-QAT-squadv1`
+- 80% block-4 QAT: `neuralmagic/oBERT-12-downstream-pruned-block4-80-QAT-squadv1`
+- 90% unstructured QAT: `neuralmagic/oBERT-12-downstream-pruned-unstructured-90-QAT-squadv1`
+- 90% block-4 QAT: `neuralmagic/oBERT-12-downstream-pruned-block4-90-QAT-squadv1`
+
+SQuADv1 dev-set:
+```
+EM = 81.99
+F1 = 89.06
+```
+
+Code: _coming soon_
+
+## BibTeX entry and citation info
+```bibtex
+@article{kurtic2022optimal,
+  title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
+  author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
+  journal={arXiv preprint arXiv:2203.07259},
+  year={2022}
+}
+```
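The model card marks the official code as coming soon; in the meantime, here is a minimal sketch of querying the checkpoint through the standard `transformers` question-answering pipeline. The repo id comes from the model card itself; that the QAT checkpoint loads cleanly via `AutoModelForQuestionAnswering` is an assumption, and the question/context pair is an arbitrary example:

```python
# Minimal sketch: load the checkpoint and run extractive QA with transformers.
# Assumption: the QAT weights load through the standard BERT QA head classes.
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_id = "neuralmagic/oBERT-12-downstream-dense-QAT-squadv1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(
    question="What metrics does the model card report on the SQuADv1 dev set?",
    context="The dense QAT model reaches EM = 81.99 and F1 = 89.06 on the SQuADv1 dev set.",
)
print(result["answer"], round(result["score"], 3))
```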
all_results.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90ba9d693b2f077c294514770a613f1041cf37ae7dd562c2542351ab8ae83409
+size 253
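`all_results.json` and the other JSON/bin entries below are checked in as Git LFS pointers rather than the blobs themselves: three text lines recording the spec version, the SHA-256 of the real content, and its size in bytes. A small sketch of unpacking one, using the `all_results.json` pointer above (plain Python, no assumptions beyond the pointer text):

```python
# Parse a Git LFS pointer file (spec v1) into its fields.
# The pointer text below is the all_results.json entry from this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:90ba9d693b2f077c294514770a613f1041cf37ae7dd562c2542351ab8ae83409
size 253
"""

# Each line is "key value"; split once to build {key: value}.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
algo, digest = fields["oid"].split(":", 1)
print(algo, digest, int(fields["size"]))  # sha256 <digest> 253
```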
config.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:508ad7853161a0e7fa0271b7be180d6c96f16ec201e47e550943c330e44072f1
+size 667
eval_nbest_predictions.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a13b491f8696275dd7a336c4eeaf2e82b2114b445b4848f3e9fc26acbedb4ef
+size 49281900
eval_predictions.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:39db53bb5479dbf0f5b1bd8a4075451cce5d5eed3e5d3db4e3b7b76007928f05
+size 589777
eval_results.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cbb45064d33bf62b64fc6cf5da775d447ee1656af8e4bb1052eb4081cd464f5f
+size 115
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:91604788d029ee80969483a5617d3baf991df106b7b13534315afab1ea58206d
+size 436619715
recipe.yaml
ADDED
@@ -0,0 +1,8 @@
+!QuantizationModifier
+    disable_quantization_observer_epoch: 5
+    end_epoch: -1.0
+    freeze_bn_stats_epoch: 5
+    quantize_embeddings: 1
+    start_epoch: 0.0
+    submodules: ['bert.encoder', 'bert.embeddings', 'qa_outputs']
+
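For context, a minimal sketch of how a recipe like this is typically applied with SparseML's PyTorch integration. The use of `ScheduledModifierManager` is an assumption based on the standard SparseML workflow, not the exact training script behind this checkpoint; the base model, optimizer, and `steps_per_epoch` below are placeholders:

```python
# Sketch: attach recipe.yaml to a SQuAD fine-tuning run via SparseML.
# Placeholder setup; the real run fine-tunes the dense oBERT-12 checkpoint.
import torch
from transformers import AutoModelForQuestionAnswering
from sparseml.pytorch.optim import ScheduledModifierManager

model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
steps_per_epoch = 5533  # placeholder: len(train_dataloader)

manager = ScheduledModifierManager.from_yaml("recipe.yaml")
# Wrapping the optimizer lets the QuantizationModifier fire on schedule:
# fake-quant starts at epoch 0; observers are disabled and BN statistics
# frozen at epoch 5, per the recipe above.
optimizer = manager.modify(model, optimizer, steps_per_epoch=steps_per_epoch)

# ... standard SQuAD fine-tuning loop runs here ...

manager.finalize(model)
```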
special_tokens_map.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:303df45a03609e4ead04bc3dc1536d0ab19b5358db685b6f3da123d05ec200e3
+size 112
tokenizer.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5fd1c882abbd30517dced455a2c9768945ec726b96727927e4959348d9de550b
+size 466081
tokenizer_config.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17e621cd1e37d5c7ab5d441e3be42d20d21ca6a8f8b2d486cf241335da3b1545
+size 381
train_results.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6ef6563efce5c9f86ab57aed9cd205cad71bf1fe4da7ab727f2dbda424c2df7
+size 159
trainer_state.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e6297100047cab56d5a99b5908758b16a91b97f10972fc274a2a3d3787aa7967
+size 7343
training_args.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85f58dedc2bd6797cb5b495512647b5366c50c8de2baa59aa91ee32f5136b513
+size 2607
vocab.txt
ADDED
The diff for this file is too large to render.