Tasks: Token Classification
Modalities: Text
Formats: json
Sub-tasks: named-entity-recognition
Size: 10K - 100K
License: CC-BY-4.0
joelniklaus committed on
Commit 796f4bd • 1 Parent(s): 299c7c7
added first version of mapa dataset
Browse files
- .gitattributes +3 -0
- README.md +273 -0
- convert_to_hf_dataset.py +190 -0
- test.jsonl +3 -0
- train.jsonl +3 -0
- validation.jsonl +3 -0
.gitattributes
CHANGED
@@ -39,3 +39,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.mp3 filter=lfs diff=lfs merge=lfs -text
 *.ogg filter=lfs diff=lfs merge=lfs -text
 *.wav filter=lfs diff=lfs merge=lfs -text
+test.jsonl filter=lfs diff=lfs merge=lfs -text
+train.jsonl filter=lfs diff=lfs merge=lfs -text
+validation.jsonl filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,273 @@
---
annotations_creators:
- other
language_creators:
- found
languages:
- bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pt, ro, sk, sv
license:
- CC-BY-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: Spanish Datasets for Sensitive Entity Detection in the Legal Domain
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- named entity recognition and classification (NERC)
---

# Dataset Card for Spanish Datasets for Sensitive Entity Detection in the Legal Domain

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** [Spanish](https://elrc-share.eu/repository/browse/mapa-anonymization-package-spanish/b550e1a88a8311ec9c1a00155d026706687917f92f64482587c6382175dffd76/), [Most](https://elrc-share.eu/repository/search/?q=mfsp:3222a6048a8811ec9c1a00155d0267067eb521077db54d6684fb14ce8491a391), [German, Portuguese, Slovak, Slovenian, Swedish](https://elrc-share.eu/repository/search/?q=mfsp:833df1248a8811ec9c1a00155d0267067685dcdb77064822b51cc16ab7b81a36)
- **Paper:** de Gibert Bonet, O., García Pablos, A., Cuadros, M., & Melero, M. (2022). Spanish Datasets for Sensitive Entity Detection in the Legal Domain. Proceedings of the Language Resources and Evaluation Conference, June, 3751–3760. http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.400.pdf
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus]([email protected])

### Dataset Summary

The dataset consists of 12 documents (9 for Spanish due to parsing errors) taken from EUR-Lex, a multilingual corpus of court decisions and legal dispositions in the 24 official languages of the European Union. The documents have been annotated for named entities following the guidelines of the [MAPA project](https://mapa-project.eu/), which foresee two annotation levels, a general and a more fine-grained one. The annotated corpus can be used for named entity recognition/classification.

### Supported Tasks and Leaderboards

The dataset supports the task of Named Entity Recognition and Classification (NERC).

### Languages

The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pt, ro, sk, sv

## Dataset Structure

### Data Instances

The file format is jsonl and three data splits are present (train, validation and test). Named entity annotations are non-overlapping.

### Data Fields

For the annotation, the documents have been split into sentences. The annotation has been done on the token level. The files contain the following data fields:

- `language`: language of the sentence
- `type`: The document type of the sentence. Currently, only EUR-LEX is supported.
- `file_name`: The document file name the sentence belongs to.
- `sentence_number`: The number of the sentence inside its document.
- `tokens`: The list of tokens in the sentence.
- `coarse_grained`: The coarse-grained annotations for each token
- `fine_grained`: The fine-grained annotations for each token

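The following sketch shows how the jsonl splits in this repository could be loaded with the `datasets` library and how one record exposes the fields listed above; the printed values are illustrative only and not taken from the data.

```python
# Minimal sketch (not part of the dataset): load the jsonl splits from this repository
# with the `datasets` library and inspect one record.
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files={"train": "train.jsonl", "validation": "validation.jsonl", "test": "test.jsonl"},
)

record = dataset["train"][0]
print(record["language"], record["type"], record["file_name"], record["sentence_number"])

# `tokens`, `coarse_grained` and `fine_grained` are aligned, equally long lists,
# so the per-token labels can be read off position by position.
for token, coarse, fine in zip(record["tokens"], record["coarse_grained"], record["fine_grained"]):
    print(f"{token}\t{coarse}\t{fine}")
```
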
As previously stated, the annotation has been conducted on a global and a more fine-grained level.

The tagset used for the global and the fine-grained named entities is the following:

- Address
  - Building
  - City
  - Country
  - Place
  - Postcode
  - Street
  - Territory
- Amount
  - Unit
  - Value
- Date
  - Year
  - Standard Abbreviation
  - Month
  - Day of the Week
  - Day
  - Calender Event
- Person
  - Age
  - Email
  - Ethnic Category
  - Family Name
  - Financial
  - Given Name – Female
  - Given Name – Male
  - Health Insurance Number
  - ID Document Number
  - Initial Name
  - Marital Status
  - Medical Record Number
  - Nationality
  - Profession
  - Role
  - Social Security Number
  - Title
  - Url
- Organisation
- Time
- Vehicle
  - Build Year
  - Colour
  - License Plate Number
  - Model
  - Type

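For reference, the conversion script in this repository encodes this hierarchy as a dictionary mapping each coarse-grained tag to its fine-grained labels. A condensed sketch of how a fine-grained label can be mapped back to its coarse-grained category (only part of the Person labels is listed here; the full mapping lives in `convert_to_hf_dataset.py`):

```python
# Condensed sketch of the coarse-to-fine hierarchy above; only a subset of the
# PERSON labels is shown. The complete mapping is defined in convert_to_hf_dataset.py.
annotation_labels = {
    "ADDRESS": ["building", "city", "country", "place", "postcode", "street", "territory"],
    "AMOUNT": ["unit", "value"],
    "DATE": ["year", "standard abbreviation", "month", "day of the week", "day", "calender event"],
    "PERSON": ["age", "email", "family name", "given name – female", "given name – male", "title", "url"],
    "ORGANISATION": [],
    "TIME": [],
    "VEHICLE": ["build year", "colour", "license plate number", "model", "type"],
}


def coarse_for_fine(label):
    """Return the coarse-grained category a fine-grained label belongs to, or None."""
    for coarse, fine_labels in annotation_labels.items():
        if label.lower() in fine_labels:
            return coarse
    return None


print(coarse_for_fine("City"))   # ADDRESS
print(coarse_for_fine("Model"))  # VEHICLE
```
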
### Data Splits

Splits created by Joel Niklaus.

| language | # train files | # validation files | # test files | # train sentences | # validation sentences | # test sentences |
|:---------|--------------:|-------------------:|-------------:|------------------:|-----------------------:|-----------------:|
| bg | 9 | 1 | 2 | 1411 | 166 | 560 |
| cs | 9 | 1 | 2 | 1464 | 176 | 563 |
| da | 9 | 1 | 2 | 1455 | 164 | 550 |
| de | 9 | 1 | 2 | 1457 | 166 | 558 |
| el | 9 | 1 | 2 | 1529 | 174 | 584 |
| en | 9 | 1 | 2 | 893 | 98 | 408 |
| es | 7 | 1 | 1 | 806 | 248 | 155 |
| et | 9 | 1 | 2 | 1391 | 163 | 516 |
| fi | 9 | 1 | 2 | 1398 | 187 | 531 |
| fr | 9 | 1 | 2 | 1297 | 97 | 490 |
| ga | 9 | 1 | 2 | 1383 | 165 | 515 |
| hu | 9 | 1 | 2 | 1390 | 171 | 525 |
| it | 9 | 1 | 2 | 1411 | 162 | 550 |
| lt | 9 | 1 | 2 | 1413 | 173 | 548 |
| lv | 9 | 1 | 2 | 1383 | 167 | 553 |
| mt | 9 | 1 | 2 | 937 | 93 | 442 |
| nl | 9 | 1 | 2 | 1391 | 164 | 530 |
| pt | 9 | 1 | 2 | 1086 | 105 | 390 |
| ro | 9 | 1 | 2 | 1480 | 175 | 557 |
| sk | 9 | 1 | 2 | 1395 | 165 | 526 |
| sv | 9 | 1 | 2 | 1453 | 175 | 539 |

## Dataset Creation

### Curation Rationale

*„[…] to our knowledge, there exist no open resources annotated for NERC [Named Entity Recognition and Classification] in Spanish in the legal domain. With the present contribution, we intend to fill this gap. With the release of the created resources for fine-tuning and evaluation of sensitive entities detection in the legal domain, we expect to encourage the development of domain-adapted anonymisation tools for Spanish in this field“* (de Gibert Bonet et al., 2022)

### Source Data

#### Initial Data Collection and Normalization

The dataset consists of documents taken from the EUR-Lex corpus, which is publicly available. No further information on the data collection process is given in de Gibert Bonet et al. (2022).

#### Who are the source language producers?

The source language producers are presumably lawyers.

### Annotations

#### Annotation process

*"The annotation scheme consists of a complex two level hierarchy adapted to the legal domain, it follows the scheme described in (Gianola et al., 2020) […] Level 1 entities refer to general categories (PERSON, DATE, TIME, ADDRESS...) and level 2 entities refer to more fine-grained subcategories (given name, personal name, day, year, month...). Eur-Lex, CPP and DE have been annotated following this annotation scheme […] The manual annotation was performed using INCePTION (Klie et al., 2018) by a sole annotator following the guidelines provided by the MAPA consortium."* (de Gibert Bonet et al., 2022)

#### Who are the annotators?

Only one annotator conducted the annotation. More information is not provided in de Gibert Bonet et al. (2022).

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Note that the dataset at hand presents only a small portion of a bigger corpus as described in de Gibert Bonet et al. (2022). At the time of writing, only the annotated documents from the EUR-Lex corpus were available.

Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script `convert_to_hf_dataset.py` in order to retrace the steps for converting the original dataset into the present jsonl format. For further information on the original dataset structure, we refer to the bibliographical references and the original GitHub repositories and/or web pages provided in this dataset card.

## Additional Information

### Dataset Curators

The names of the original dataset curators and creators can be found in the references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus ([Email]([email protected]); [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email]([email protected]); [Github](https://github.com/kapllan)).

### Licensing Information

[Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)

### Citation Information

```
@article{DeGibertBonet2022,
  author  = {{de Gibert Bonet}, Ona and {Garc{\'{i}}a Pablos}, Aitor and Cuadros, Montse and Melero, Maite},
  journal = {Proceedings of the Language Resources and Evaluation Conference},
  number  = {June},
  pages   = {3751--3760},
  title   = {{Spanish Datasets for Sensitive Entity Detection in the Legal Domain}},
  url     = {https://aclanthology.org/2022.lrec-1.400},
  year    = {2022}
}
```

### Contributions

Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset.

convert_to_hf_dataset.py
ADDED
@@ -0,0 +1,190 @@
import os
from glob import glob
from pathlib import Path

import numpy as np
import pandas as pd

from web_anno_tsv import open_web_anno_tsv
from web_anno_tsv.web_anno_tsv import ReadException, Annotation

pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_columns', None)

annotation_labels = {'ADDRESS': ['building', 'city', 'country', 'place', 'postcode', 'street', 'territory'],
                     'AMOUNT': ['unit', 'value'],
                     'DATE': ['year', 'standard abbreviation', 'month', 'day of the week', 'day', 'calender event'],
                     'PERSON': ['age', 'email', 'ethnic category', 'family name', 'financial', 'given name – female',
                                'given name – male',
                                'health insurance number', 'id document number', 'initial name', 'marital status',
                                'medical record number',
                                'nationality', 'profession', 'role', 'social security number', 'title', 'url'],
                     'ORGANISATION': [],
                     'TIME': [],
                     'VEHICLE': ['build year', 'colour', 'license plate number', 'model', 'type']}

# make all coarse_grained upper case and all fine_grained lower case
annotation_labels = {key.upper(): [label.lower() for label in labels] for key, labels in annotation_labels.items()}
print(annotation_labels)

base_path = Path("extracted")

# TODO future work can add these datasets too to make it larger
special_paths = {
    "EL": ["EL/ANNOTATED_DATA/LEGAL/AREIOSPAGOS1/annotated/full_dataset"],
    "EN": ["EN/ANNOTATED_DATA/ADMINISTRATIVE-LEGAL/annotated/full_dataset"],
    "FR": ["FR/ANNOTATED_DATA/LEGAL/COUR_CASSATION1/annotated/full_dataset/Civil",
           "FR/ANNOTATED_DATA/LEGAL/COUR_CASSATION1/annotated/full_dataset/Commercial",
           "FR/ANNOTATED_DATA/LEGAL/COUR_CASSATION1/annotated/full_dataset/Criminal",
           "FR/ANNOTATED_DATA/LEGAL/COUR_CASSATION2/annotated/full_dataset",
           "FR/ANNOTATED_DATA/MEDICAL/CAS1/annotated/full_dataset"],
    "IT": ["IT/ANNOTATED_DATA/Corte_Suprema_di_Cassazione/annotated"],
    "MT": ["MT/ANNOTATED_DATA/ADMINISTRATIVE/annotated/full_dataset",
           "MT/ANNOTATED_DATA/GENERAL_NEWS/News_1/annotated/full_dataset",
           "MT/ANNOTATED_DATA/LEGAL/Jurisprudence_1/annotated/full_dataset"],
}


def get_path(language):
    return base_path / language / "ANNOTATED_DATA/EUR_LEX/annotated/full_dataset"


def get_coarse_grained_for_fine_grained(label):
    for coarse_grained, fine_grained_set in annotation_labels.items():
        if label in fine_grained_set:
            return coarse_grained
    return None  # raise ValueError(f"Did not find fine_grained label {label}")


def is_fine_grained(label):
    for coarse_grained, fine_grained_set in annotation_labels.items():
        if label.lower() in fine_grained_set:
            return True
    return False


def is_coarse_grained(label):
    return label.upper() in annotation_labels.keys()


class HashableAnnotation(Annotation):
    def __init__(self, annotation):
        super()
        self.label = annotation.label
        self.start = annotation.start
        self.stop = annotation.stop
        self.text = annotation.text

    def __eq__(self, other):
        return self.label == other.label and self.start == other.start and self.stop == other.stop and self.text == other.text

    def __hash__(self):
        return hash(('label', self.label, 'start', self.start, 'stop', self.stop, 'text', self.text))


def get_token_annotations(token, annotations):
    annotations = list(dict.fromkeys([HashableAnnotation(ann) for ann in annotations]))  # remove duplicate annotations
    coarse_grained = "O"
    fine_grained = "o"
    for annotation in annotations:
        label = annotation.label
        # if token.start == annotation.start and token.stop == annotation.stop:  # fine_grained annotation
        if token.start >= annotation.start and token.stop <= annotation.stop:  # coarse_grained annotation
            # we don't support multilabel annotations for each token for simplicity.
            # So when a token already has an annotation for either coarse or fine grained, we don't assign new ones.
            if coarse_grained == "O" and is_coarse_grained(label):
                coarse_grained = label
            elif fine_grained == "o" and is_fine_grained(label):
                # some DATE are mislabeled as day but it is hard to correct this. So we ignore it
                fine_grained = label

    return coarse_grained.upper(), fine_grained.lower()


def get_annotated_sentence(result_sentence, sentence):
    result_sentence["tokens"] = []
    result_sentence["coarse_grained"] = []
    result_sentence["fine_grained"] = []
    for k, token in enumerate(sentence.tokens):
        coarse_grained, fine_grained = get_token_annotations(token, sentence.annotations)
        token = token.text.replace(u'\xa0', u' ').strip()  # replace non-breaking spaces
        if token:  # remove empty tokens (which consisted only of whitespace before stripping)
            result_sentence["tokens"].append(token)
            result_sentence["coarse_grained"].append(coarse_grained)
            result_sentence["fine_grained"].append(fine_grained)
    return result_sentence


languages = sorted([Path(file).stem for file in glob(str(base_path / "*"))])


def parse_files(language):
    data_path = get_path(language.upper())
    result_sentences = []
    not_parsable_files = 0
    file_names = sorted(list(glob(str(data_path / "*.tsv"))))
    for file in file_names:
        try:
            with open_web_anno_tsv(file) as f:
                for i, sentence in enumerate(f):
                    result_sentence = {"language": language, "type": "EUR-LEX",
                                       "file_name": Path(file).stem, "sentence_number": i}
                    result_sentence = get_annotated_sentence(result_sentence, sentence)
                    result_sentences.append(result_sentence)
            print(f"Successfully parsed file {file}")
        except ReadException as e:
            print(f"Could not parse file {file}")
            not_parsable_files += 1
    print("Not parsable files: ", not_parsable_files)
    return pd.DataFrame(result_sentences), not_parsable_files


stats = []
train_dfs, validation_dfs, test_dfs = [], [], []
for language in languages:
    language = language.lower()
    print(f"Parsing language {language}")
    df, not_parsable_files = parse_files(language)
    file_names = df.file_name.unique()

    # split by file_name
    num_fn = len(file_names)
    train_fn, validation_fn, test_fn = np.split(np.array(file_names), [int(.8 * num_fn), int(.9 * num_fn)])

    lang_train = df[df.file_name.isin(train_fn)]
    lang_validation = df[df.file_name.isin(validation_fn)]
    lang_test = df[df.file_name.isin(test_fn)]

    train_dfs.append(lang_train)
    validation_dfs.append(lang_validation)
    test_dfs.append(lang_test)

    lang_stats = {"language": language}

    lang_stats["# train files"] = len(train_fn)
    lang_stats["# validation files"] = len(validation_fn)
    lang_stats["# test files"] = len(test_fn)

    lang_stats["# train sentences"] = len(lang_train.index)
    lang_stats["# validation sentences"] = len(lang_validation.index)
    lang_stats["# test sentences"] = len(lang_test.index)

    stats.append(lang_stats)

stat_df = pd.DataFrame(stats)
print(stat_df.to_markdown(index=False))

train = pd.concat(train_dfs)
validation = pd.concat(validation_dfs)
test = pd.concat(test_dfs)


# save splits
def save_splits_to_jsonl(config_name):
    # save to jsonl files for huggingface
    if config_name: os.makedirs(config_name, exist_ok=True)
    train.to_json(os.path.join(config_name, "train.jsonl"), lines=True, orient="records", force_ascii=False)
    validation.to_json(os.path.join(config_name, "validation.jsonl"), lines=True, orient="records", force_ascii=False)
    test.to_json(os.path.join(config_name, "test.jsonl"), lines=True, orient="records", force_ascii=False)


save_splits_to_jsonl("")
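The jsonl files written by the script above can be sanity-checked independently, for example with pandas; a small sketch, assuming the produced `train.jsonl` sits in the working directory:

```python
# Sketch: read a produced split back and check that the token and label columns
# stay aligned for every sentence.
import pandas as pd

train = pd.read_json("train.jsonl", lines=True)
print(train.columns.tolist())
print(train.language.value_counts())

assert (train.tokens.str.len() == train.coarse_grained.str.len()).all()
assert (train.tokens.str.len() == train.fine_grained.str.len()).all()
```
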
test.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:55e08959d49e88b52f29395b7515e8c456eddbd84592b0ea71a49326c322348f
size 7559023
train.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ea732c32b177da86de291522d0a4236ff53f92939ad7b2762d3d8d8e4449ba6
size 21505824
validation.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1e8d04f631b1e15806bdc54fd01eabc35017c4645bfd524ccadfb980937b592d
size 2797014