asahi417 committed on
Commit 5384f78
1 Parent(s): fe09c26

model update

README.md ADDED
@@ -0,0 +1,140 @@
+ ---
+ datasets:
+ - mit_restaurant
+ metrics:
+ - f1
+ - precision
+ - recall
+ model-index:
+ - name: tner/deberta-v3-large-mit-restaurant
+   results:
+   - task:
+       name: Token Classification
+       type: token-classification
+     dataset:
+       name: mit_restaurant
+       type: mit_restaurant
+       args: mit_restaurant
+     metrics:
+     - name: F1
+       type: f1
+       value: 0.8158890290037831
+     - name: Precision
+       type: precision
+       value: 0.8105230191042906
+     - name: Recall
+       type: recall
+       value: 0.8213265629958744
+     - name: F1 (macro)
+       type: f1_macro
+       value: 0.8072607717138172
+     - name: Precision (macro)
+       type: precision_macro
+       value: 0.7973293573334044
+     - name: Recall (macro)
+       type: recall_macro
+       value: 0.8183493118743246
+     - name: F1 (entity span)
+       type: f1_entity_span
+       value: 0.8557510999371464
+     - name: Precision (entity span)
+       type: precision_entity_span
+       value: 0.8474945533769063
+     - name: Recall (entity span)
+       type: recall_entity_span
+       value: 0.8641701047286575
+
+ pipeline_tag: token-classification
+ widget:
+ - text: "Jacob Collier is a Grammy-awarded artist from England."
+   example_title: "NER Example 1"
+ ---
+ # tner/deberta-v3-large-mit-restaurant
+
+ This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the
+ [tner/mit_restaurant](https://huggingface.co/datasets/tner/mit_restaurant) dataset.
+ Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
+ for more details). It achieves the following results on the test set:
+ - F1 (micro): 0.8158890290037831
+ - Precision (micro): 0.8105230191042906
+ - Recall (micro): 0.8213265629958744
+ - F1 (macro): 0.8072607717138172
+ - Precision (macro): 0.7973293573334044
+ - Recall (macro): 0.8183493118743246
+
+ The per-entity breakdown of the F1 scores on the test set is below:
+ - amenity: 0.7226415094339623
+ - cuisine: 0.8288119738072967
+ - dish: 0.8283828382838284
+ - location: 0.8662969808995686
+ - money: 0.84
+ - rating: 0.7990430622009569
+ - restaurant: 0.8724489795918368
+ - time: 0.7004608294930875
+
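+ The macro F1 above is just the unweighted mean of these per-entity scores, which can be checked directly
+ (a minimal sketch; the values are copied from the list above):
+ ```python
+ # Per-entity F1 scores from the breakdown above.
+ per_entity_f1 = {
+     "amenity": 0.7226415094339623,
+     "cuisine": 0.8288119738072967,
+     "dish": 0.8283828382838284,
+     "location": 0.8662969808995686,
+     "money": 0.84,
+     "rating": 0.7990430622009569,
+     "restaurant": 0.8724489795918368,
+     "time": 0.7004608294930875,
+ }
+ # The unweighted mean reproduces the reported macro F1 (0.8072607717138172).
+ print(sum(per_entity_f1.values()) / len(per_entity_f1))
+ ```
+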
+ For the F1 scores, confidence intervals are obtained by bootstrap as below:
+ - F1 (micro):
+     - 90%: [0.8036180555961564, 0.8281173227233776]
+     - 95%: [0.8011397826491581, 0.8307029010155984]
+ - F1 (macro):
+     - 90%: [0.7937247965875296, 0.8198696966745797]
+     - 95%: [0.7907943777789816, 0.8235294293836553]
+
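+ The bootstrap resamples the test sentences with replacement, recomputes F1 on each resample, and takes
+ empirical percentiles. A minimal sketch of the idea (the scorer `compute_f1` is a hypothetical stand-in,
+ e.g. seqeval's `f1_score`; this is not tner's actual implementation):
+ ```python
+ import random
+
+ def bootstrap_f1_ci(gold, pred, compute_f1, n_resamples=1000, seed=42):
+     """Percentile-bootstrap confidence intervals for F1 over test sentences."""
+     rng = random.Random(seed)
+     n = len(gold)
+     scores = []
+     for _ in range(n_resamples):
+         # Resample sentence indices with replacement and rescore.
+         idx = [rng.randrange(n) for _ in range(n)]
+         scores.append(compute_f1([gold[i] for i in idx], [pred[i] for i in idx]))
+     scores.sort()
+     pct = lambda q: scores[min(int(q * n_resamples), n_resamples - 1)]
+     return {"90": [pct(0.05), pct(0.95)], "95": [pct(0.025), pct(0.975)]}
+ ```
+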
+ Full evaluation can be found at the [metric file of NER](https://huggingface.co/tner/deberta-v3-large-mit-restaurant/raw/main/eval/metric.json)
+ and the [metric file of entity span](https://huggingface.co/tner/deberta-v3-large-mit-restaurant/raw/main/eval/metric_span.json).
+
+ ### Usage
+ This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip:
+ ```shell
+ pip install tner
+ ```
+ and activate the model as below.
+ ```python
+ from tner import TransformersNER
+ model = TransformersNER("tner/deberta-v3-large-mit-restaurant")
+ model.predict(["Jacob Collier is a Grammy-awarded English artist from London"])
+ ```
+ The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
+
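+ For reference, a plain transformers call looks like the sketch below; since the CRF layer is bypassed,
+ the predicted tag sequences may differ from tner's output:
+ ```python
+ from transformers import pipeline
+
+ # Token-classification pipeline without the CRF layer (not the recommended path).
+ pipe = pipeline(
+     "token-classification",
+     model="tner/deberta-v3-large-mit-restaurant",
+     aggregation_strategy="simple",  # merge sub-word pieces into entity spans
+ )
+ print(pipe("any moderately priced italian restaurant near me"))
+ ```
+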
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - dataset: ['tner/mit_restaurant']
+ - dataset_split: train
+ - dataset_name: None
+ - local_dataset: None
+ - model: microsoft/deberta-v3-large
+ - crf: True
+ - max_length: 128
+ - epoch: 15
+ - batch_size: 16
+ - lr: 1e-05
+ - random_seed: 42
+ - gradient_accumulation_steps: 4
+ - weight_decay: 1e-07
+ - lr_warmup_step_ratio: 0.1
+ - max_grad_norm: None
+
+ The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-mit-restaurant/raw/main/trainer_config.json).
+
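+ The same configuration ships in this repository as trainer_config.json, so it can also be inspected
+ programmatically; note the effective batch size is batch_size x gradient_accumulation_steps = 16 x 4 = 64.
+ A minimal sketch using huggingface_hub:
+ ```python
+ import json
+
+ from huggingface_hub import hf_hub_download
+
+ # Fetch the training configuration stored alongside the model weights.
+ path = hf_hub_download("tner/deberta-v3-large-mit-restaurant", "trainer_config.json")
+ with open(path) as f:
+     config = json.load(f)
+
+ # Effective batch size: 16 * 4 = 64.
+ print(config["batch_size"] * config["gradient_accumulation_steps"])
+ ```
+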
+ ### Reference
+ If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
+
+ ```bibtex
+ @inproceedings{ushio-camacho-collados-2021-ner,
+     title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
+     author = "Ushio, Asahi and
+       Camacho-Collados, Jose",
+     booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
+     month = apr,
+     year = "2021",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.eacl-demos.7",
+     doi = "10.18653/v1/2021.eacl-demos.7",
+     pages = "53--62",
+     abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
+ }
+ ```
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "tner_ckpt/mit_restaurant_deberta_v3_large/best_model",
+ "_name_or_path": "tner_ckpt/mit_restaurant_deberta_v3_large/model_rgwuwr/epoch_5",
  "architectures": [
  "DebertaV2ForTokenClassification"
  ],
eval/metric.json ADDED
@@ -0,0 +1 @@
+ {"micro/f1": 0.8158890290037831, "micro/f1_ci": {"90": [0.8036180555961564, 0.8281173227233776], "95": [0.8011397826491581, 0.8307029010155984]}, "micro/recall": 0.8213265629958744, "micro/precision": 0.8105230191042906, "macro/f1": 0.8072607717138172, "macro/f1_ci": {"90": [0.7937247965875296, 0.8198696966745797], "95": [0.7907943777789816, 0.8235294293836553]}, "macro/recall": 0.8183493118743246, "macro/precision": 0.7973293573334044, "per_entity_metric": {"amenity": {"f1": 0.7226415094339623, "f1_ci": {"90": [0.6933268608414239, 0.7527957577082878], "95": [0.6885539660746712, 0.7575899667952976]}, "precision": 0.7267552182163188, "recall": 0.7185741088180112}, "cuisine": {"f1": 0.8288119738072967, "f1_ci": {"90": [0.8045715060269436, 0.8507049853075287], "95": [0.800355515041021, 0.8547355413126947]}, "precision": 0.8249534450651769, "recall": 0.8327067669172933}, "dish": {"f1": 0.8283828382838284, "f1_ci": {"90": [0.7977736928104575, 0.8585744093773348], "95": [0.7932086894586895, 0.8630655911732973]}, "precision": 0.789308176100629, "recall": 0.8715277777777778}, "location": {"f1": 0.8662969808995686, "f1_ci": {"90": [0.846563505906978, 0.8860332482724753], "95": [0.84335027383273, 0.8891798434189129]}, "precision": 0.8668310727496917, "recall": 0.8657635467980296}, "money": {"f1": 0.84, "f1_ci": {"90": [0.795573832245103, 0.8812646349862259], "95": [0.7838484630163304, 0.8919160231660233]}, "precision": 0.8212290502793296, "recall": 0.8596491228070176}, "rating": {"f1": 0.7990430622009569, "f1_ci": {"90": [0.7589409831260728, 0.83568415322855], "95": [0.7492980278849697, 0.8430649055847957]}, "precision": 0.7695852534562212, "recall": 0.8308457711442786}, "restaurant": {"f1": 0.8724489795918368, "f1_ci": {"90": [0.8464719582385224, 0.897852277205427], "95": [0.8424846340100577, 0.9029754204398447]}, "precision": 0.8952879581151832, "recall": 0.8507462686567164}, "time": {"f1": 0.7004608294930875, "f1_ci": {"90": [0.6505488597424081, 0.7458471061796478], "95": [0.6403466036873743, 0.7572871589049083]}, "precision": 0.6846846846846847, "recall": 0.7169811320754716}}}
eval/metric_span.json ADDED
@@ -0,0 +1 @@
+ {"micro/f1": 0.8557510999371464, "micro/f1_ci": {"90": [0.8457549886322284, 0.866532074562069], "95": [0.8439094859106241, 0.8689014756604962]}, "micro/recall": 0.8641701047286575, "micro/precision": 0.8474945533769063, "macro/f1": 0.8557510999371464, "macro/f1_ci": {"90": [0.8457549886322284, 0.866532074562069], "95": [0.8439094859106241, 0.8689014756604962]}, "macro/recall": 0.8641701047286575, "macro/precision": 0.8474945533769063}
eval/prediction.validation.json ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:66c995d18c8a759eacf6d17a8ca16d349899fa047816497bfe14c99950680dcd
- size 1736250351
+ oid sha256:b23b1c8747da48934080fc97c1c41b16f6ad5846bfcc347b5b67f519bce0f1b8
+ size 1736255855
tokenizer_config.json CHANGED
@@ -4,7 +4,7 @@
  "do_lower_case": false,
  "eos_token": "[SEP]",
  "mask_token": "[MASK]",
- "name_or_path": "tner_ckpt/mit_restaurant_deberta_v3_large/best_model",
+ "name_or_path": "tner_ckpt/mit_restaurant_deberta_v3_large/model_rgwuwr/epoch_5",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "sp_model_kwargs": {},
trainer_config.json ADDED
@@ -0,0 +1 @@
+ {"dataset": ["tner/mit_restaurant"], "dataset_split": "train", "dataset_name": null, "local_dataset": null, "model": "microsoft/deberta-v3-large", "crf": true, "max_length": 128, "epoch": 15, "batch_size": 16, "lr": 1e-05, "random_seed": 42, "gradient_accumulation_steps": 4, "weight_decay": 1e-07, "lr_warmup_step_ratio": 0.1, "max_grad_norm": null}