Model release
Browse files
- .gitattributes +1 -0
- README.md +25 -0
- config.json +3 -0
- eval_results.txt +3 -0
- nbest_predictions.json +3 -0
- predictions.json +3 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +3 -0
- tokenizer_config.json +3 -0
- training_args.bin +3 -0
- vocab.txt +0 -0
.gitattributes
CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zstandard filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.json filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,25 @@
+# oBERT-6-downstream-dense-squadv1
+
+This model is obtained with [The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models](https://arxiv.org/abs/2203.07259).
+
+It corresponds to the model presented in `Table 3 - 6 Layers - 0% Sparsity`, and it represents an upper bound on the performance of the corresponding pruned models:
+- 80% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-80-squadv1`
+- 80% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-80-squadv1`
+- 90% unstructured: `neuralmagic/oBERT-6-downstream-pruned-unstructured-90-squadv1`
+- 90% block-4: `neuralmagic/oBERT-6-downstream-pruned-block4-90-squadv1`
+
+SQuADv1 dev-set:
+```
+EM = 81.17
+F1 = 88.32
+```
+
+## BibTeX entry and citation info
+```bibtex
+@article{kurtic2022optimal,
+  title={The Optimal BERT Surgeon: Scalable and Accurate Second-Order Pruning for Large Language Models},
+  author={Kurtic, Eldar and Campos, Daniel and Nguyen, Tuan and Frantar, Elias and Kurtz, Mark and Fineran, Benjamin and Goin, Michael and Alistarh, Dan},
+  journal={arXiv preprint arXiv:2203.07259},
+  year={2022}
+}
+```
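The card itself does not ship a usage snippet. A minimal sketch of how a checkpoint like this could be loaded for extractive question answering with the transformers question-answering pipeline; the repository id used below is inferred from the card title and is an assumption, not stated in this commit:

```python
# Sketch only: load the checkpoint for extractive QA on SQuAD-style inputs.
# The repo id is inferred from the card title and may need adjusting.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="neuralmagic/oBERT-6-downstream-dense-squadv1",
)

result = qa(
    question="What is the Optimal BERT Surgeon?",
    context="The Optimal BERT Surgeon (oBERT) is a scalable second-order "
            "pruning method for large language models.",
)
print(result["answer"], result["score"])  # predicted answer span and its confidence
```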
config.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8486603b13aa568cb30a3f12f76029a3365344a234ace7353010ea02ea15338
+size 659
eval_results.txt
ADDED
@@ -0,0 +1,3 @@
+exact_match = 81.17313150425733
+f1 = 88.31708304864826
+epoch = 30.0
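These figures are the unrounded EM/F1 quoted in the README. A hedged sketch of how such numbers could be recomputed offline with the `evaluate` library's squad metric, assuming `predictions.json` maps SQuAD question ids to predicted answer strings (the layout written by the standard transformers question-answering scripts):

```python
# Sketch only: recompute exact_match / f1 from a predictions file.
# Assumes predictions.json maps SQuAD question ids to predicted answer strings.
import json

import evaluate
from datasets import load_dataset

dev = load_dataset("squad", split="validation")
with open("predictions.json") as f:
    preds = json.load(f)

predictions = [
    {"id": ex["id"], "prediction_text": preds.get(ex["id"], "")} for ex in dev
]
references = [{"id": ex["id"], "answers": ex["answers"]} for ex in dev]

metric = evaluate.load("squad")
print(metric.compute(predictions=predictions, references=references))
# Should reproduce the exact_match / f1 values recorded above.
```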
nbest_predictions.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cdff7676a092524f6a7df2a08f36909e2ae75ac08fc8e803744487c9742f8024
+size 45797257
predictions.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:070486786fe3e7624e890e5fa88537286e54e5ccc6dff764b6579a6f7dbafd68
+size 587303
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2e282fe52af56a28a3934849e3b31de286bb164ba6e2e8eaab154c9b81250bdc
+size 265511639
special_tokens_map.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:303df45a03609e4ead04bc3dc1536d0ab19b5358db685b6f3da123d05ec200e3
+size 112
tokenizer_config.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:89412368298707de0256b84629f947c7a9821afc5e4d455bd52beac1549370ec
+size 362
training_args.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc36aedc0d0ee71ddd598d4b52ea47499aed772701a040c260828e8316a2abed
+size 2415
vocab.txt
ADDED
The diff for this file is too large to render.
See raw diff