Update README.md
README.md CHANGED
@@ -1,9 +1,6 @@
 ---
 tags:
 - cs
-- bert
-- Transformers
-- Tensorflow
 ---
 
 # CZERT
@@ -47,14 +44,14 @@ We evaluate our model on two sentence level tasks:
 
 
 <!-- tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)
-
+\tmodel = TFAlbertForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, num_labels=1)
 
 or
 
 self.tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)
 self.model_encoder = AutoModelForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, from_tf=True)
 -->
-
+\t
 ### Document Level Tasks
 We evaluate our model on one document level task
 * Multi-label Document Classification.
@@ -110,7 +107,7 @@ Comparison of F1 score achieved using pre-trained CZERT-A, CZERT-B, mBERT, Pavlo
 
 |        | mBERT | Pavlov | Albert-random | Czert-A | Czert-B | dep-based | gold-dep |
 |:------:|:----------:|:----------:|:-------------:|:----------:|:----------:|:---------:|:--------:|
-| span | 78.547 ± 0.110 | 79.333 ± 0.080 | 51.365 ± 0.423 | 72.254 ± 0.172 | **81.861 ± 0.102** |
+| span | 78.547 ± 0.110 | 79.333 ± 0.080 | 51.365 ± 0.423 | 72.254 ± 0.172 | **81.861 ± 0.102** | \- | \- |
 | syntax | 90.226 ± 0.224 | 90.492 ± 0.040 | 80.747 ± 0.131 | 80.319 ± 0.054 | **91.462 ± 0.062** | 85.19 | 89.52 |
 
 SRL results – the dep columns are evaluated with labelled F1 from the CoNLL 2009 evaluation script; the other columns are evaluated with the same span F1 score as used for the NER evaluation. For more information see [the paper](https://arxiv.org/abs/2103.13031).
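For reference, the commented-out snippet touched by this commit loads CZERT through the Hugging Face transformers API. A minimal runnable sketch of that usage is below; `CZERT_MODEL_PATH` is a placeholder for a local checkpoint or hub identifier, not a published name, and the classification head shown here is only illustrative:

```python
from transformers import AutoModelForSequenceClassification, BertTokenizerFast

# Placeholder: point this at a downloaded CZERT-B checkpoint.
CZERT_MODEL_PATH = "path/to/czert-b"

# strip_accents=False keeps Czech diacritics intact during tokenization.
tokenizer = BertTokenizerFast.from_pretrained(CZERT_MODEL_PATH, strip_accents=False)

# from_tf=True converts the TensorFlow checkpoint into a PyTorch model at load time.
model = AutoModelForSequenceClassification.from_pretrained(CZERT_MODEL_PATH, from_tf=True)

# Hypothetical usage: run a single Czech sentence through the model.
inputs = tokenizer("Ukázková česká věta.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)
```

The README's snippet shows two routes: staying in TensorFlow via `TFAlbertForSequenceClassification`, or loading the TF weights into PyTorch with `from_tf=True` as sketched above.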
|