WadoodAbdul committed · 7d35d7a
1 Parent(s): b445977
updated basic info

src/about.py CHANGED (+18 -3)
@@ -7,6 +7,7 @@ class Task:
     benchmark: str
     metric: str
     col_name: str
+
 
 
 # Select your tasks here
@@ -28,16 +29,30 @@ NUM_FEWSHOT = 0 # Change with your few shot
 
 
 # Your leaderboard name
-TITLE = """<h1 align="center" id="space-title">
+TITLE = """<h1 align="center" id="space-title">MEDICS NER Leaderboard</h1>"""
 
 # What does your leaderboard evaluate?
 INTRODUCTION_TEXT = """
-Intro text
 """
 
 # Which evaluations are you running? how can people reproduce what you have?
 LLM_BENCHMARKS_TEXT = f"""
-##
+## About
+Named Entity Recognition is a significant task for information extraction. However, there is no open leaderboard ranking the NER capabilities of models in the biomedical domain.
+
+The MEDICS NER leaderboard aims to fill this gap by quantifying NER performance on open-source datasets.
+To keep the evaluation widely relevant, the entity types in the datasets are mapped to broader M2 types. More information on this mapping can be found here - M2-DATASETS-ARTICLE-LINK
+
+### Tasks
+📈 We evaluate models on X key datasets, encompassing Y entity types
+- NCBI - INFO
+- CHIA
+- BIORED
+- BC5CDR
+
+### Evaluation Metrics
+
+
 
 ## Reproducibility
 To reproduce our results, here are the commands you can run:
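The diff describes mapping dataset-specific entity types to broader M2 types before scoring. A minimal sketch of how such an evaluation could work, assuming a hypothetical mapping table and exact span matching (the real M2 mapping lives in the linked article, and `M2_MAP`, `to_m2`, and `entity_f1` are illustrative names, not part of this repository):

```python
# Hypothetical mapping from dataset-specific labels to broader types.
# The actual M2 mapping is defined elsewhere; these entries are examples only.
M2_MAP = {
    "SpecificDisease": "CONDITION",  # e.g. NCBI-Disease style labels
    "DiseaseClass": "CONDITION",
    "Chemical": "DRUG",              # e.g. BC5CDR style labels
}

def to_m2(entities):
    """Map (start, end, label) spans to broader types, dropping unmapped labels."""
    return {(s, e, M2_MAP[l]) for (s, e, l) in entities if l in M2_MAP}

def entity_f1(gold, pred):
    """Exact-match entity-level precision, recall, and F1 after mapping."""
    gold, pred = to_m2(gold), to_m2(pred)
    tp = len(gold & pred)  # spans that agree on offsets and broad type
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [(0, 8, "SpecificDisease"), (20, 27, "Chemical")]
pred = [(0, 8, "DiseaseClass"), (30, 35, "Chemical")]
p, r, f1 = entity_f1(gold, pred)  # (0, 8) matches once both map to CONDITION
```

Mapping before matching is what lets differently-labeled datasets share one leaderboard: `SpecificDisease` and `DiseaseClass` disagree at the dataset level but count as the same broad entity after mapping.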