Tasks: Text Classification
Modalities: Text
Formats: json
Sub-tasks: multi-class-classification
Languages: Portuguese
Size: 1K - 10K
parquet-converter committed on
Commit 14f390b
Parent(s): e937c2d
Update parquet files
Browse files
- .gitattributes +0 -40
- README.md +0 -252
- convert_to_hf_dataset.py +0 -132
- joelito--brazilian_court_decisions/json-test.parquet +3 -0
- joelito--brazilian_court_decisions/json-train.parquet +3 -0
- joelito--brazilian_court_decisions/json-validation.parquet +3 -0
- test.jsonl +0 -0
- train.jsonl +0 -0
- validation.jsonl +0 -0
.gitattributes
DELETED
@@ -1,40 +0,0 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
test.jsonl filter=lfs diff=lfs merge=lfs -text
train.jsonl filter=lfs diff=lfs merge=lfs -text
validation.jsonl filter=lfs diff=lfs merge=lfs -text
README.md
DELETED
@@ -1,252 +0,0 @@
---
annotations_creators:
- found
language_creators:
- found
language:
- pt
license:
- 'other'
multilinguality:
- monolingual
pretty_name: predicting-brazilian-court-decisions
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---

# Dataset Card for predicting-brazilian-court-decisions

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** https://github.com/lagefreitas/predicting-brazilian-court-decisions
- **Paper:** Lage-Freitas, A., Allende-Cid, H., Santana, O., & Oliveira-Lage, L. (2022). Predicting Brazilian Court Decisions. PeerJ Computer Science, 8, e904–e904. https://doi.org/10.7717/peerj-cs.904
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:[email protected])

### Dataset Summary

The dataset is a collection of 4043 *Ementa* (summary) court decisions and their metadata from the *Tribunal de Justiça de Alagoas* (TJAL, the State Supreme Court of Alagoas, Brazil). The court decisions are labeled according to 7 categories and according to whether or not the judges' decision was unanimous. The dataset supports the task of Legal Judgment Prediction.

### Supported Tasks and Leaderboards

Legal Judgment Prediction

### Languages

Brazilian Portuguese

## Dataset Structure

### Data Instances

The file format is jsonl, and three data splits are present (train, validation and test) for each configuration.

### Data Fields

The dataset contains the following fields:

- `process_number`: A number assigned to the decision by the court
- `orgao_julgador`: Judging body: one of '1ª Câmara Cível', '2ª Câmara Cível', '3ª Câmara Cível', 'Câmara Criminal', 'Tribunal Pleno', 'Seção Especializada Cível'
- `publish_date`: The date when the decision was published (14/12/2018 - 03/04/2019). At that time (in 2018-2019), the scraping script was limited and could not be configured to retrieve data by date range; therefore, only data from the most recent months was scraped.
- `judge_relator`: Judicial panel
- `ementa_text`: Summary of the court decision
- `decision_description`: **Suggested input**. Corresponds to ementa_text - judgment_text - unanimity_text. Basic statistics (number of words): mean: 119, median: 88, min: 12, max: 1400
- `judgment_text`: The text used for determining the judgment label
- `judgment_label`: **Primary suggested label**. Labels that can be used to train a model for judgment prediction:
  - `no`: The appeal was denied
  - `partial`: For partially favourable decisions
  - `yes`: For fully favourable decisions
  - removed labels (present in the original dataset):
    - `conflito-competencia`: Meta-decision, for example a decision that only determines that Court A, and not Court B, should rule on the case
    - `not-cognized`: The appeal was not accepted to be judged by the court
    - `prejudicada`: The case could not be judged because of an impediment, for instance because the appellant died or gave up on the case
- `unanimity_text`: Portuguese text describing whether or not the decision was unanimous.
- `unanimity_label`: **Secondary suggested label**. Unified labels describing whether or not the decision was unanimous (in some cases `not_determined`); they can be used for model training as well (Lage-Freitas et al., 2019).
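For illustration, a minimal sketch of loading the splits and inspecting the suggested input and label columns with the `datasets` library, assuming the Hub repository id `joelito/brazilian_court_decisions` (inferred from the parquet folder name added in this commit):

```python
# Minimal loading sketch; the repository id is an assumption and may differ.
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("joelito/brazilian_court_decisions")

example = dataset["train"][0]
print(example["decision_description"][:200])  # suggested input
print(example["judgment_label"])              # primary label: no / partial / yes

# label distribution of the training split
print(Counter(dataset["train"]["judgment_label"]))
```
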
### Data Splits

The data has been split randomly into 80% train (3234), 10% validation (404), 10% test (405).

There are two tasks possible for this dataset.

#### Judgment

Label Distribution

| judgment  |    train | validation |    test |
|:----------|---------:|-----------:|--------:|
| no        |     1960 |        221 |     234 |
| partial   |      677 |         96 |      93 |
| yes       |      597 |         87 |      78 |
| **total** | **3234** |    **404** | **405** |

#### Unanimity

In this configuration, all cases that have `not_determined` as `unanimity_label` can be removed.

Label Distribution

| unanimity_label |    train | validation |    test |
|:----------------|---------:|-----------:|--------:|
| not_determined  |     1519 |        193 |     201 |
| unanimity       |     1681 |        205 |     200 |
| not-unanimity   |       34 |          6 |       4 |
| **total**       | **3234** |    **404** | **405** |
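A minimal sketch of that filtering step for an unanimity configuration, under the same repository-id assumption as above:

```python
# Sketch: drop decisions whose unanimity could not be determined,
# keeping the original train/validation/test splits intact.
from datasets import load_dataset

dataset = load_dataset("joelito/brazilian_court_decisions")  # repository id assumed
unanimity = dataset.filter(lambda row: row["unanimity_label"] != "not_determined")

for split in ("train", "validation", "test"):
    print(split, unanimity[split].num_rows)
```
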
## Dataset Creation

### Curation Rationale

This dataset was created to further research on developing models that predict Brazilian court decisions and that are also able to predict whether the decision will be unanimous.

### Source Data

The data was scraped from the *Tribunal de Justiça de Alagoas* (TJAL, the State Supreme Court of Alagoas, Brazil).

#### Initial Data Collection and Normalization

*“We developed a Web scraper for collecting data from Brazilian courts. The scraper first searched for the URL that contains the list of court cases […]. Then, the scraper extracted from these HTML files the specific case URLs and downloaded their data […]. Next, it extracted the metadata and the contents of legal cases and stored them in a CSV file format […].”* (Lage-Freitas et al., 2022)

#### Who are the source language producers?

The source language producers are presumably attorneys, judges, and other legal professionals.

### Annotations

#### Annotation process

The dataset was not annotated.

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

The court decisions might contain sensitive information about individuals.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script `convert_to_hf_dataset.py` in order to retrace the steps for converting the original dataset into the present jsonl format. For further information on the original dataset structure, we refer to the bibliographical references and the original GitHub repositories and/or web pages provided in this dataset card.

## Additional Information

Lage-Freitas, A., Allende-Cid, H., Santana Jr, O., & Oliveira-Lage, L. (2019). Predicting Brazilian court decisions:

- "In Brazil [...] lower court judges' decisions might be appealed to Brazilian courts (*Tribunais de Justiça*) to be reviewed by second instance court judges. In an appellate court, judges decide together upon a case and their decisions are compiled in agreement reports named *Acórdãos*."

### Dataset Curators

The names of the original dataset curators and creators can be found in the references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus ([Email](mailto:[email protected]); [GitHub](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:[email protected]); [GitHub](https://github.com/kapllan)).

### Licensing Information

No licensing information was provided for this dataset. However, please make sure that you use the dataset according to Brazilian law.

### Citation Information

```
@misc{https://doi.org/10.48550/arxiv.1905.10348,
  author    = {Lage-Freitas, Andr{\'{e}} and Allende-Cid, H{\'{e}}ctor and Santana, Orivaldo and de Oliveira-Lage, L{\'{i}}via},
  doi       = {10.48550/ARXIV.1905.10348},
  keywords  = {Computation and Language (cs.CL), FOS: Computer and information sciences, Social and Information Networks (cs.SI)},
  publisher = {arXiv},
  title     = {{Predicting Brazilian court decisions}},
  url       = {https://arxiv.org/abs/1905.10348},
  year      = {2019}
}
```

```
@article{Lage-Freitas2022,
  author    = {Lage-Freitas, Andr{\'{e}} and Allende-Cid, H{\'{e}}ctor and Santana, Orivaldo and Oliveira-Lage, L{\'{i}}via},
  doi       = {10.7717/peerj-cs.904},
  issn      = {2376-5992},
  journal   = {PeerJ Computer Science},
  keywords  = {Artificial intelligence, Jurimetrics, Law, Legal, Legal NLP, Legal informatics, Legal outcome forecast, Litigation prediction, Machine learning, NLP, Portuguese, Predictive algorithms, judgement prediction},
  language  = {eng},
  month     = {mar},
  pages     = {e904--e904},
  publisher = {PeerJ Inc.},
  title     = {{Predicting Brazilian Court Decisions}},
  url       = {https://pubmed.ncbi.nlm.nih.gov/35494851 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9044329/},
  volume    = {8},
  year      = {2022}
}
```

### Contributions

Thanks to [@kapllan](https://github.com/kapllan) and [@joelniklaus](https://github.com/joelniklaus) for adding this dataset.
convert_to_hf_dataset.py
DELETED
@@ -1,132 +0,0 @@
import os

import numpy as np
import pandas as pd

"""
Dataset url: https://github.com/lagefreitas/predicting-brazilian-court-decisions/blob/main/dataset.zip
Paper url: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9044329/

There are no splits available ==> Make random split ourselves
"""

pd.set_option('display.max_colwidth', None)
pd.set_option('display.max_columns', None)


def perform_original_preprocessing():
    # Original Preprocessing from: https://github.com/lagefreitas/predicting-brazilian-court-decisions/blob/main/predicting-brazilian-court-decisions.py#L81
    # Loading the labeled decisions
    data = pd.read_csv("dataset.csv", sep='<=>', header=0)
    print('data.shape=' + str(data.shape) + ' full data set')
    # Removing NA values
    data = data.dropna(subset=[data.columns[9]])  # decision_description
    data = data.dropna(subset=[data.columns[11]])  # decision_label
    print('data.shape=' + str(data.shape) + ' dropna')
    # Removing duplicated samples
    data = data.drop_duplicates(subset=[data.columns[1]])  # process_number
    print('data.shape=' + str(data.shape) + ' removed duplicated samples by process_number')
    data = data.drop_duplicates(subset=[data.columns[9]])  # decision_description
    print('data.shape=' + str(data.shape) + ' removed duplicated samples by decision_description')
    # Removing not relevant decision labels and decision not properly labeled
    data = data.query('decision_label != "conflito-competencia"')
    print('data.shape=' + str(data.shape) + ' removed decisions labeled as conflito-competencia')
    data = data.query('decision_label != "prejudicada"')
    print('data.shape=' + str(data.shape) + ' removed decisions labeled as prejudicada')
    data = data.query('decision_label != "not-cognized"')
    print('data.shape=' + str(data.shape) + ' removed decisions labeled as not-cognized')
    data_no = data.query('decision_label == "no"')
    print('data_no.shape=' + str(data_no.shape))
    data_yes = data.query('decision_label == "yes"')
    print('data_yes.shape=' + str(data_yes.shape))
    data_partial = data.query('decision_label == "partial"')
    print('data_partial.shape=' + str(data_partial.shape))
    # Merging decisions whose labels are yes, no, and partial to build the final data set
    data_merged = data_no.merge(data_yes, how='outer')
    data = data_merged.merge(data_partial, how='outer')
    print('data.shape=' + str(data.shape) + ' merged decisions whose labels are yes, no, and partial')
    # Removing decision_description and decision_labels whose values are -1 and -2
    indexNames = data[(data['decision_description'] == str(-1)) | (data['decision_description'] == str(-2)) | (
            data['decision_label'] == str(-1)) | (data['decision_label'] == str(-2))].index
    data.drop(indexNames, inplace=True)
    print('data.shape=' + str(data.shape) + ' removed -1 and -2 decision descriptions and labels')

    data.to_csv("dataset_processed_original.csv", index=False)


def perform_additional_processing():
    df = pd.read_csv("dataset_processed_original.csv")

    # remove strange " characters sometimes occurring in the beginning and at the end of a line
    df.ementa_filepath = df.ementa_filepath.str.replace('^"', '')
    df.decision_unanimity = df.decision_unanimity.str.replace('"$', '')

    # removing process_type and judgment_date, since they are the same everywhere (-)
    # decisions only contains 'None', nan and '-2'
    # ementa_filepath refers to the name of file in the filesystem that we created when we scraped the data from the Court. It is temporary data and can be removed
    # decision_description = ementa_text - decision_text - decision_unanimity_text
    df = df.drop(['process_type', 'judgment_date', 'decisions', 'ementa_filepath'], axis=1)

    # some rows are somehow not read correctly. With this, we can filter them
    df = df[df.decision_text.str.len() > 1]

    # rename "-2" to more descriptive name ==> -2 means, that they were not able to determine it
    df.decision_unanimity = df.decision_unanimity.replace('-2', 'not_determined')

    # rename cols for more clarity
    df = df.rename(columns={"decision_unanimity": "unanimity_label"})
    df = df.rename(columns={"decision_unanimity_text": "unanimity_text"})
    df = df.rename(columns={"decision_text": "judgment_text"})
    df = df.rename(columns={"decision_label": "judgment_label"})

    df.to_csv("dataset_processed_additional.csv", index=False)

    return df


perform_original_preprocessing()
df = perform_additional_processing()

# perform random split 80% train (3234), 10% validation (404), 10% test (405)
train, validation, test = np.split(df.sample(frac=1, random_state=42), [int(.8 * len(df)), int(.9 * len(df))])


def save_splits_to_jsonl(config_name):
    # save to jsonl files for huggingface
    if config_name: os.makedirs(config_name, exist_ok=True)
    train.to_json(os.path.join(config_name, "train.jsonl"), lines=True, orient="records", force_ascii=False)
    validation.to_json(os.path.join(config_name, "validation.jsonl"), lines=True, orient="records", force_ascii=False)
    test.to_json(os.path.join(config_name, "test.jsonl"), lines=True, orient="records", force_ascii=False)


def print_split_table_single_label(train, validation, test, label_name):
    train_counts = train[label_name].value_counts().to_frame().rename(columns={label_name: "train"})
    validation_counts = validation[label_name].value_counts().to_frame().rename(columns={label_name: "validation"})
    test_counts = test[label_name].value_counts().to_frame().rename(columns={label_name: "test"})

    table = train_counts.join(validation_counts)
    table = table.join(test_counts)
    table[label_name] = table.index
    total_row = {label_name: "total",
                 "train": len(train.index),
                 "validation": len(validation.index),
                 "test": len(test.index)}
    table = table.append(total_row, ignore_index=True)
    table = table[[label_name, "train", "validation", "test"]]  # reorder columns
    print(table.to_markdown(index=False))


save_splits_to_jsonl("")

print_split_table_single_label(train, validation, test, "judgment_label")
print_split_table_single_label(train, validation, test, "unanimity_label")

# create second config by filtering out rows with unanimity label == not_determined, while keeping the same splits
# train = train[train.unanimity_label != "not_determined"]
# validation = validation[validation.unanimity_label != "not_determined"]
# test = test[test.unanimity_label != "not_determined"]


# it is a very small dataset and very imbalanced (only very few not-unanimity labels)
# save_splits_to_jsonl("unanimity")
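As a rough sanity check after running the script, the generated jsonl splits can be read back with pandas and compared against the 80%/10%/10% sizes reported in the dataset card; a short sketch, assuming the files were written to the working directory by `save_splits_to_jsonl("")`:

```python
# Sketch: re-read the jsonl splits written by save_splits_to_jsonl("") and
# compare their sizes against the card (3234 / 404 / 405).
import pandas as pd

splits = {name: pd.read_json(f"{name}.jsonl", lines=True)
          for name in ("train", "validation", "test")}

for name, frame in splits.items():
    print(name, len(frame))
print(splits["train"]["judgment_label"].value_counts())
```
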
joelito--brazilian_court_decisions/json-test.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:032a397b906e1239a490a4366e680076273635376dfac290ad48fc04470df726
size 412536
joelito--brazilian_court_decisions/json-train.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7c90b12e13f429d3c9bfd0654222f7058f1fa8b0da39ddf44770266394827138
size 3245400
joelito--brazilian_court_decisions/json-validation.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1853e6118499b34684da635d60214d1228f62cf8cd6f18abf966382e42706b82
size 425023
test.jsonl
DELETED
The diff for this file is too large to render.
train.jsonl
DELETED
The diff for this file is too large to render.
validation.jsonl
DELETED
The diff for this file is too large to render.
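Since this commit replaces the jsonl splits with auto-converted parquet files, a split can also be inspected directly from parquet. A sketch assuming the repository id `joelito/brazilian_court_decisions` and the test-split path added in this commit:

```python
# Sketch: download one auto-converted parquet split and read it with pandas.
# Both the repository id and the file path are assumptions taken from this commit.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="joelito/brazilian_court_decisions",
    repo_type="dataset",
    filename="joelito--brazilian_court_decisions/json-test.parquet",
)
test = pd.read_parquet(path)
print(test.shape)  # expected 405 rows per the dataset card
print(test["judgment_label"].value_counts())
```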