WadoodAbdul committed
Commit 28687f6 · 1 Parent(s): 88700dd

updated links

Files changed (1)
  1. src/about.py +4 -4
src/about.py CHANGED
@@ -110,10 +110,10 @@ $$ Recall = COR / (COR + INC + MIS)$$
  $$ f1score = 2 * (Prec * Rec) / (Prec + Rec)$$

  Note:
- 1. Span-based approach here is equivalent to the 'Span Based Evaluation with Partial Overlap' in (NER Metrics Showdown!)[https://huggingface.co/spaces/wadood/ner_evaluation_metrics] and is equivalent to Partial Match ("Type") in the nervaluate python package.
- 2. Token-based approach here is equivalent to the 'Token Based Evaluation With Macro Average' in (NER Metrics Showdown!)[https://huggingface.co/spaces/wadood/ner_evaluation_metrics]
+ 1. Span-based approach here is equivalent to the 'Span Based Evaluation with Partial Overlap' in [NER Metrics Showdown!](https://huggingface.co/spaces/wadood/ner_evaluation_metrics) and is equivalent to Partial Match ("Type") in the nervaluate python package.
+ 2. Token-based approach here is equivalent to the 'Token Based Evaluation With Macro Average' in [NER Metrics Showdown!](https://huggingface.co/spaces/wadood/ner_evaluation_metrics)

- Additional examples can be tested on the (NER Metrics Showdown!)[https://huggingface.co/spaces/wadood/ner_evaluation_metrics] huggingface space.
+ Additional examples can be tested on the [NER Metrics Showdown!](https://huggingface.co/spaces/wadood/ner_evaluation_metrics) huggingface space.

  ## Datasets
  The following datasets (test splits only) have been included in the evaluation.
@@ -229,7 +229,7 @@ Users are advised to approach the results with an understanding of the inherent
  EVALUATION_QUEUE_TEXT = """

  Currently, the benchmark supports evaluation for models hosted on the huggingface hub and of type encoder, decoder or gliner type models.
- If your model needs a custom implementation, follow the steps outlined in the [medics_ner](https://github.com/WadoodAbdul/medics_ner/blob/master/docs/custom_model_implementation.md) repo or reach out to our team!
+ If your model needs a custom implementation, follow the steps outlined in the [clinical_ner_benchmark](https://github.com/WadoodAbdul/clinical_ner_benchmark/blob/e66eb566f34e33c4b6c3e5258ac85aba42ec7894/docs/custom_model_implementation.md) repo or reach out to our team!


  ### Fields Explanation
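
For context on the formulas quoted in the first hunk, here is a minimal sketch of how MUC-style span counts combine into precision, recall, and F1. It is illustrative only and not taken from the benchmark code: the recall and F1 formulas follow the diffed documentation, while the precision formula and the COR/INC/MIS/SPU count names are assumptions inferred from that notation.

```python
# Illustrative sketch (not from the benchmark repo): combining MUC-style
# span counts into the metrics quoted in the diffed documentation.
# COR = correct spans, INC = incorrect type, MIS = missed, SPU = spurious
# (count names assumed from the notation above).

def span_metrics(cor: int, inc: int, mis: int, spu: int) -> dict:
    # Precision is assumed by symmetry with the recall formula shown in the
    # hunk header, with spurious predictions in place of missed gold spans.
    precision = cor / (cor + inc + spu) if (cor + inc + spu) else 0.0
    # Recall = COR / (COR + INC + MIS), as in the hunk header above.
    recall = cor / (cor + inc + mis) if (cor + inc + mis) else 0.0
    # f1score = 2 * (Prec * Rec) / (Prec + Rec), as in the diffed text.
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}


if __name__ == "__main__":
    # Example: 8 correct spans, 1 wrong-type, 1 missed, 2 spurious.
    print(span_metrics(cor=8, inc=1, mis=1, spu=2))
```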
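
The notes also mention a token-based approach with a macro average. The sketch below shows one common way such an evaluation is computed: labels are compared token by token, a per-type F1 is derived, and the unweighted mean is taken over entity types. The label set, the exclusion of the 'O' tag, and the function name are illustrative assumptions, not the benchmark's implementation.

```python
# Illustrative sketch (not from the benchmark repo): token-level evaluation
# with a macro average over entity types.
from collections import defaultdict


def token_macro_f1(gold: list[str], pred: list[str]) -> float:
    """Macro-averaged F1 over entity types for aligned token label sequences.

    Excluding the 'O' (outside) label from the average is one common
    convention; the benchmark's exact convention is not shown in the diff.
    """
    assert len(gold) == len(pred), "sequences must be token-aligned"
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
    for g, p in zip(gold, pred):
        if g == p and g != "O":
            counts[g]["tp"] += 1
        elif g != p:
            if p != "O":
                counts[p]["fp"] += 1
            if g != "O":
                counts[g]["fn"] += 1
    f1s = []
    for c in counts.values():
        prec = c["tp"] / (c["tp"] + c["fp"]) if (c["tp"] + c["fp"]) else 0.0
        rec = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if (prec + rec) else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0


if __name__ == "__main__":
    # Example with hypothetical labels: one mislabelled token out of five.
    print(token_macro_f1(
        ["O", "DRUG", "DRUG", "O", "DISEASE"],
        ["O", "DRUG", "O", "O", "DISEASE"],
    ))
```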