Datasets: predicting-brazilian-court-decisions
Tasks: Text Classification
Modalities: Text
Formats: json
Sub-tasks: multi-class-classification
Languages: Portuguese
Size: 1K - 10K

joelniklaus committed · Commit 0363a0b · Parent(s): b4bdedb

added jsonl files
Browse files
- .gitattributes +6 -0
- README.md +251 -0
- convert_to_hf_dataset.py +132 -0
- judgment/test.jsonl +3 -0
- judgment/train.jsonl +3 -0
- judgment/validation.jsonl +3 -0
- unanimity/test.jsonl +3 -0
- unanimity/train.jsonl +3 -0
- unanimity/validation.jsonl +3 -0
.gitattributes CHANGED
@@ -35,3 +35,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.mp3 filter=lfs diff=lfs merge=lfs -text
 *.ogg filter=lfs diff=lfs merge=lfs -text
 *.wav filter=lfs diff=lfs merge=lfs -text
+judgment/test.jsonl filter=lfs diff=lfs merge=lfs -text
+judgment/train.jsonl filter=lfs diff=lfs merge=lfs -text
+judgment/validation.jsonl filter=lfs diff=lfs merge=lfs -text
+unanimity/test.jsonl filter=lfs diff=lfs merge=lfs -text
+unanimity/train.jsonl filter=lfs diff=lfs merge=lfs -text
+unanimity/validation.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,251 @@
+---
+annotations_creators:
+- found
+language_creators:
+- found
+languages:
+- pt
+licenses:
+- 'other-This data set should be used according to Brazilian law.'
+multilinguality:
+- monolingual
+pretty_name: predicting-brazilian-court-decisions
+size_categories:
+- 1K<n<10K
+source_datasets:
+- original
+task_categories:
+- text-classification
+task_ids:
+- multi-class-classification
+---
+
+# Dataset Card for predicting-brazilian-court-decisions
+
+## Table of Contents
+
+- [Table of Contents](#table-of-contents)
+- [Dataset Description](#dataset-description)
+  - [Dataset Summary](#dataset-summary)
+  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+  - [Languages](#languages)
+- [Dataset Structure](#dataset-structure)
+  - [Data Instances](#data-instances)
+  - [Data Fields](#data-fields)
+  - [Data Splits](#data-splits)
+- [Dataset Creation](#dataset-creation)
+  - [Curation Rationale](#curation-rationale)
+  - [Source Data](#source-data)
+  - [Annotations](#annotations)
+  - [Personal and Sensitive Information](#personal-and-sensitive-information)
+- [Considerations for Using the Data](#considerations-for-using-the-data)
+  - [Social Impact of Dataset](#social-impact-of-dataset)
+  - [Discussion of Biases](#discussion-of-biases)
+  - [Other Known Limitations](#other-known-limitations)
+- [Additional Information](#additional-information)
+  - [Dataset Curators](#dataset-curators)
+  - [Licensing Information](#licensing-information)
+  - [Citation Information](#citation-information)
+  - [Contributions](#contributions)
+
+## Dataset Description
+
+- **Homepage:**
+- **Repository:** https://github.com/lagefreitas/predicting-brazilian-court-decisions
+- **Paper:** Lage-Freitas, A., Allende-Cid, H., Santana, O., & Oliveira-Lage, L. (2022). Predicting Brazilian Court
+  Decisions. PeerJ Computer Science, 8, e904. https://doi.org/10.7717/peerj-cs.904
+- **Leaderboard:**
+- **Point of Contact:** [Joel Niklaus]([email protected])
+
+### Dataset Summary
+
+The dataset is a collection of 4043 *Ementa* (summary) court decisions and their metadata from
+the *Tribunal de Justiça de Alagoas* (TJAL), the State Supreme Court of Alagoas (Brazil). The court decisions are
+labeled according to 7 categories and according to whether the judges' decision was unanimous. The dataset
+supports the task of Legal Judgment Prediction.
+
+### Supported Tasks and Leaderboards
+
+Legal Judgment Prediction
+
+### Languages
+
+Brazilian Portuguese
+
+## Dataset Structure
+
+### Data Instances
+
+The file format is jsonl, and three data splits (train, validation and test) are provided for each configuration.
+
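+A minimal sketch for inspecting a split locally (assuming the repository has been cloned and the LFS files pulled,
+so that e.g. `judgment/train.jsonl` is present):
+
+```python
+import json
+
+# Read the training split of the "judgment" configuration line by line.
+with open("judgment/train.jsonl", encoding="utf-8") as f:
+    examples = [json.loads(line) for line in f]
+
+print(len(examples))                  # expected: 3234
+print(examples[0]["judgment_label"])  # one of "no", "partial", "yes"
+```
+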
+### Data Fields
+
+The dataset contains the following fields:
+
+- `process_number`: A number assigned to the decision by the court
+- `orgao_julgador`: Judging body: one of '1ª Câmara Cível', '2ª Câmara Cível', '3ª Câmara Cível', 'Câmara Criminal',
+  'Tribunal Pleno', 'Seção Especializada Cível'
+- `publish_date`: The date when the decision was published (14/12/2018 - 03/04/2019). At that time (in 2018-2019),
+  the scraping script was limited and not configurable to fetch data by date range; therefore, only data from the
+  most recent months was scraped.
+- `judge_relator`: Judicial panel
+- `ementa_text`: Summary of the court decision
+- `decision_description`: **Suggested input**. Corresponds to `ementa_text` minus `judgment_text` minus
+  `unanimity_text`. Basic statistics (number of words): mean: 119, median: 88, min: 12, max: 1400
+- `judgment_text`: The text used for determining the judgment label
+- `judgment_label`: **Primary suggested label**. Labels that can be used to train a model for judgment prediction:
+  - `no`: The appeal was denied
+  - `partial`: For partially favourable decisions
+  - `yes`: For fully favourable decisions
+  - removed labels (present in the original dataset):
+    - `conflito-competencia`: Meta-decision, e.g. a decision only establishing that Court A, not Court B, should
+      rule on the case
+    - `not-cognized`: The appeal was not accepted to be judged by the court
+    - `prejudicada`: The case could not be judged due to an impediment, for instance because the appellant died or
+      gave up on the case
+- `unanimity_text`: Portuguese text describing whether the decision was unanimous or not
+- `unanimity_label`: **Secondary suggested label**. Unified labels describing whether the decision was unanimous or
+  not (in some cases contains `not_determined`); they can be used for model training as well (Lage-Freitas et al.,
+  2019).
+
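+A sketch for spot-checking the word-count statistics quoted above for `decision_description` (computed here on the
+train split of the judgment configuration only, so the numbers may deviate slightly from the full-dataset figures):
+
+```python
+import pandas as pd
+
+# Load one jsonl split and count whitespace-separated words per example.
+df = pd.read_json("judgment/train.jsonl", lines=True)
+words = df["decision_description"].str.split().str.len()
+print(words.mean(), words.median(), words.min(), words.max())
+```
+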
+### Data Splits
+
+The data was split randomly into 80% train (3234 examples), 10% validation (404 examples) and 10% test (405 examples).
+
+There are two configurations: judgment and unanimity.
+
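+A minimal loading sketch with the `datasets` library; the repository id below is an assumption, so substitute the
+actual Hub id of this dataset:
+
+```python
+from datasets import load_dataset
+
+# Hypothetical repository id; each configuration is loaded separately.
+judgment = load_dataset("joelniklaus/predicting-brazilian-court-decisions", "judgment")
+unanimity = load_dataset("joelniklaus/predicting-brazilian-court-decisions", "unanimity")
+
+print(judgment["train"].num_rows)  # expected: 3234
+```
+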
+#### Judgment
+
+Label distribution:
+
+| judgment  |    train | validation |    test |
+|:----------|---------:|-----------:|--------:|
+| no        |     1960 |        221 |     234 |
+| partial   |      677 |         96 |      93 |
+| yes       |      597 |         87 |      78 |
+| **total** | **3234** |    **404** | **405** |
+
+#### Unanimity
+
+In this configuration, all cases whose `unanimity_label` is `not_determined` are removed; the splits are otherwise
+unchanged.
+
+Label distribution:
+
+| unanimity_label |    train | validation |    test |
+|:----------------|---------:|-----------:|--------:|
+| unanimity       |     1681 |        205 |     200 |
+| not-unanimity   |       34 |          6 |       4 |
+| **total**       | **1715** |    **211** | **204** |
+
+## Dataset Creation
+
+### Curation Rationale
+
+This dataset was created to further research on models that predict Brazilian court decisions and that can also
+predict whether a decision will be unanimous.
+
+### Source Data
+
+The data was scraped from the *Tribunal de Justiça de Alagoas* (TJAL), the State Supreme Court of Alagoas (Brazil).
+
+#### Initial Data Collection and Normalization
+
+*“We developed a Web scraper for collecting data from Brazilian courts. The scraper first searched for the URL that
+contains the list of court cases […]. Then, the scraper extracted from these HTML files the specific case URLs and
+downloaded their data […]. Next, it extracted the metadata and the contents of legal cases and stored them in a CSV
+file format […].”* (Lage-Freitas et al., 2022)
+
+#### Who are the source language producers?
+
+The source language producers are presumably attorneys, judges, and other legal professionals.
+
+### Annotations
+
+#### Annotation process
+
+The dataset was not annotated.
+
+#### Who are the annotators?
+
+[More Information Needed]
+
+### Personal and Sensitive Information
+
+The court decisions might contain sensitive information about individuals.
+
+## Considerations for Using the Data
+
+### Social Impact of Dataset
+
+[More Information Needed]
+
+### Discussion of Biases
+
+[More Information Needed]
+
+### Other Known Limitations
+
+Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and
+Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset
+consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the
+dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition,
+differences with regard to dataset statistics as given in the respective papers can be expected. The reader is
+advised to have a look at the conversion script `convert_to_hf_dataset.py` in order to retrace the steps for
+converting the original dataset into the present jsonl format. For further information on the original dataset
+structure, we refer to the bibliographical references and the original GitHub repositories and/or web pages provided
+in this dataset card.
+
+## Additional Information
+
+Lage-Freitas, A., Allende-Cid, H., Santana Jr, O., & Oliveira-Lage, L. (2019). Predicting Brazilian court decisions:
+
+- "In Brazil [...] lower court judges' decisions might be appealed to Brazilian courts (*Tribunais de Justiça*) to be
+  reviewed by second instance court judges. In an appellate court, judges decide together upon a case and their
+  decisions are compiled in agreement reports named *Acórdãos*."
+
+### Dataset Curators
+
+The names of the original dataset curators and creators can be found in the references given below in the section
+*Citation Information*. Additional changes were made by Joel Niklaus ([Email]([email protected]);
+[GitHub](https://github.com/joelniklaus)) and Veton Matoshi ([Email]([email protected]);
+[GitHub](https://github.com/kapllan)).
+
+### Licensing Information
+
+No licensing information was provided for this dataset. However, please make sure that you use the dataset according
+to Brazilian law.
+
+### Citation Information
+
+```
+@misc{https://doi.org/10.48550/arxiv.1905.10348,
+    author = {Lage-Freitas, Andr{\'{e}} and Allende-Cid, H{\'{e}}ctor and Santana, Orivaldo and de Oliveira-Lage, L{\'{i}}via},
+    doi = {10.48550/ARXIV.1905.10348},
+    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, Social and Information Networks (cs.SI)},
+    publisher = {arXiv},
+    title = {{Predicting Brazilian court decisions}},
+    url = {https://arxiv.org/abs/1905.10348},
+    year = {2019}
+}
+```
+
+```
+@article{Lage-Freitas2022,
+    author = {Lage-Freitas, Andr{\'{e}} and Allende-Cid, H{\'{e}}ctor and Santana, Orivaldo and Oliveira-Lage, L{\'{i}}via},
+    doi = {10.7717/peerj-cs.904},
+    issn = {2376-5992},
+    journal = {PeerJ Computer Science},
+    keywords = {Artificial intelligence, Jurimetrics, Law, Legal, Legal NLP, Legal informatics, Legal outcome forecast, Litigation prediction, Machine learning, NLP, Portuguese, Predictive algorithms, judgement prediction},
+    language = {eng},
+    month = {mar},
+    pages = {e904--e904},
+    publisher = {PeerJ Inc.},
+    title = {{Predicting Brazilian Court Decisions}},
+    url = {https://pubmed.ncbi.nlm.nih.gov/35494851 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9044329/},
+    volume = {8},
+    year = {2022}
+}
+```
+
+### Contributions
+
+Thanks to [@kapllan](https://github.com/kapllan) and [@joelniklaus](https://github.com/joelniklaus) for adding this
+dataset.
convert_to_hf_dataset.py ADDED
@@ -0,0 +1,132 @@
+import os
+
+import numpy as np
+import pandas as pd
+
+"""
+Dataset url: https://github.com/lagefreitas/predicting-brazilian-court-decisions/blob/main/dataset.zip
+Paper url: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9044329/
+
+There are no splits available ==> make a random split ourselves
+"""
+
+pd.set_option('display.max_colwidth', None)
+pd.set_option('display.max_columns', None)
+
+
+def perform_original_preprocessing():
+    # Original preprocessing from: https://github.com/lagefreitas/predicting-brazilian-court-decisions/blob/main/predicting-brazilian-court-decisions.py#L81
+    # Loading the labeled decisions; the multi-character separator forces the python parsing engine
+    data = pd.read_csv("dataset.csv", sep='<=>', header=0, engine='python')
+    print('data.shape=' + str(data.shape) + ' full data set')
+    # Removing NA values
+    data = data.dropna(subset=[data.columns[9]])  # decision_description
+    data = data.dropna(subset=[data.columns[11]])  # decision_label
+    print('data.shape=' + str(data.shape) + ' dropna')
+    # Removing duplicated samples
+    data = data.drop_duplicates(subset=[data.columns[1]])  # process_number
+    print('data.shape=' + str(data.shape) + ' removed duplicated samples by process_number')
+    data = data.drop_duplicates(subset=[data.columns[9]])  # decision_description
+    print('data.shape=' + str(data.shape) + ' removed duplicated samples by decision_description')
+    # Removing irrelevant decision labels and decisions not properly labeled
+    data = data.query('decision_label != "conflito-competencia"')
+    print('data.shape=' + str(data.shape) + ' removed decisions labeled as conflito-competencia')
+    data = data.query('decision_label != "prejudicada"')
+    print('data.shape=' + str(data.shape) + ' removed decisions labeled as prejudicada')
+    data = data.query('decision_label != "not-cognized"')
+    print('data.shape=' + str(data.shape) + ' removed decisions labeled as not-cognized')
+    data_no = data.query('decision_label == "no"')
+    print('data_no.shape=' + str(data_no.shape))
+    data_yes = data.query('decision_label == "yes"')
+    print('data_yes.shape=' + str(data_yes.shape))
+    data_partial = data.query('decision_label == "partial"')
+    print('data_partial.shape=' + str(data_partial.shape))
+    # Merging decisions whose labels are yes, no, and partial to build the final data set
+    data_merged = data_no.merge(data_yes, how='outer')
+    data = data_merged.merge(data_partial, how='outer')
+    print('data.shape=' + str(data.shape) + ' merged decisions whose labels are yes, no, and partial')
+    # Removing decision_description and decision_label entries whose values are -1 or -2
+    indexNames = data[(data['decision_description'] == str(-1)) | (data['decision_description'] == str(-2)) | (
+            data['decision_label'] == str(-1)) | (data['decision_label'] == str(-2))].index
+    data.drop(indexNames, inplace=True)
+    print('data.shape=' + str(data.shape) + ' removed -1 and -2 decision descriptions and labels')
+
+    data.to_csv("dataset_processed_original.csv", index=False)
+
+
+def perform_additional_processing():
+    df = pd.read_csv("dataset_processed_original.csv")
+
+    # remove stray " characters sometimes occurring at the beginning and at the end of a line
+    # (the patterns are regexes; regex=True keeps the behavior stable across pandas versions)
+    df.ementa_filepath = df.ementa_filepath.str.replace('^"', '', regex=True)
+    df.decision_unanimity = df.decision_unanimity.str.replace('"$', '', regex=True)
+
+    # removing process_type and judgment_date, since they are the same everywhere (-)
+    # decisions only contains 'None', nan and '-2'
+    # ementa_filepath refers to the name of a file in the filesystem that was created when the data was scraped from the court; it is temporary data and can be removed
+    # decision_description = ementa_text - decision_text - decision_unanimity_text
+    df = df.drop(['process_type', 'judgment_date', 'decisions', 'ementa_filepath'], axis=1)
+
+    # some rows are somehow not read correctly; with this, we can filter them
+    df = df[df.decision_text.str.len() > 1]
+
+    # rename "-2" to a more descriptive name ==> -2 means that the value could not be determined
+    df.decision_unanimity = df.decision_unanimity.replace('-2', 'not_determined')
+
+    # rename cols for more clarity
+    df = df.rename(columns={"decision_unanimity": "unanimity_label"})
+    df = df.rename(columns={"decision_unanimity_text": "unanimity_text"})
+    df = df.rename(columns={"decision_text": "judgment_text"})
+    df = df.rename(columns={"decision_label": "judgment_label"})
+
+    df.to_csv("dataset_processed_additional.csv", index=False)
+
+    return df
+
+
+perform_original_preprocessing()
+df = perform_additional_processing()
+
+# perform random split: 80% train (3234), 10% validation (404), 10% test (405)
+train, validation, test = np.split(df.sample(frac=1, random_state=42), [int(.8 * len(df)), int(.9 * len(df))])
+
+
+def save_splits_to_jsonl(config_name):
+    # save to jsonl files for huggingface
+    if config_name:
+        os.makedirs(config_name, exist_ok=True)
+    train.to_json(os.path.join(config_name, "train.jsonl"), lines=True, orient="records", force_ascii=False)
+    validation.to_json(os.path.join(config_name, "validation.jsonl"), lines=True, orient="records", force_ascii=False)
+    test.to_json(os.path.join(config_name, "test.jsonl"), lines=True, orient="records", force_ascii=False)
+
+
+def print_split_table_single_label(train, validation, test, label_name):
+    train_counts = train[label_name].value_counts().to_frame().rename(columns={label_name: "train"})
+    validation_counts = validation[label_name].value_counts().to_frame().rename(columns={label_name: "validation"})
+    test_counts = test[label_name].value_counts().to_frame().rename(columns={label_name: "test"})
+
+    table = train_counts.join(validation_counts)
+    table = table.join(test_counts)
+    table[label_name] = table.index
+    total_row = {label_name: "total",
+                 "train": len(train.index),
+                 "validation": len(validation.index),
+                 "test": len(test.index)}
+    # DataFrame.append was removed in pandas 2.0; pd.concat is the portable equivalent
+    table = pd.concat([table, pd.DataFrame([total_row])], ignore_index=True)
+    table = table[[label_name, "train", "validation", "test"]]  # reorder columns
+    print(table.to_markdown(index=False))
+
+
+save_splits_to_jsonl("judgment")
+
+print_split_table_single_label(train, validation, test, "judgment_label")
+
+# create the second config by filtering out rows with unanimity_label == not_determined, while keeping the same splits
+train = train[train.unanimity_label != "not_determined"]
+validation = validation[validation.unanimity_label != "not_determined"]
+test = test[test.unanimity_label != "not_determined"]
+
+print_split_table_single_label(train, validation, test, "unanimity_label")
+
+# it is a very small dataset and very imbalanced (only very few not-unanimity labels)
+save_splits_to_jsonl("unanimity")
judgment/test.jsonl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6c0146d6e7548c509863241dc6fc95da4ca7ebd25581d10fbe3ec556f7357ad
+size 841329

judgment/train.jsonl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3578449d16081bf91b9dcaf8a4f08dec069386ac035e8572d24199789a9313db
+size 6750572

judgment/validation.jsonl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:804396f74bb430d57554310679c019ac6e9bbfcc56066e1ff9a4608c4d94a4bb
+size 852159

unanimity/test.jsonl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd5e8cdc60b59f652a72866c1fa7e3a162e30f52ff6b7a90144999d69ba679c4
+size 465080

unanimity/train.jsonl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:23ed9d12ccfc215087919230ea43cbf17e267a726f00ea4e6b040b69797c4368
+size 3781643

unanimity/validation.jsonl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b7bdb27ac581ded9e0d8841254937b514428435d4e6820ae045a6ed9b9d2fc6
+size 475936