Commit b4013a9 by kiddothe2b (1 parent: 63943fb)

Update README.md

Files changed (1): README.md (+2 -2)
@@ -254,13 +254,13 @@ An example of 'train' looks as follows.
  <tr><td>Dataset</td><td>Source</td><td>Sub-domain</td><td>Language</td><td>Task Type</td><td>Classes</td><tr>
  <tr><td>ECtHR</td><td> <a href="https://aclanthology.org/P19-1424/">Chalkidis et al. (2019)</a> </td><td>ECHR</td><td>en</td><td>Multi-label classification</td><td>10+1</td></tr>
  <tr><td>SCOTUS</td><td> <a href="http://scdb.wustl.edu">Spaeth et al. (2020)</a></td><td>US Law</td><td>en</td><td>Multi-class classification</td><td>14</td></tr>
- <tr><td>FSCS</td><td> <a href="https://arxiv.org/abs/2109.00904">Chalkidis et al. (2021b)</a></td><td>Swiss Law</td><td>en, fr , it</td><td>Binary classification</td><td>2</td></tr>
+ <tr><td>FSCS</td><td> <a href="https://aclanthology.org/2021.nllp-1.3/">Niklaus et al. (2021)</a></td><td>Swiss Law</td><td>en, fr , it</td><td>Binary classification</td><td>2</td></tr>
  <tr><td>CAIL</td><td> <a href="https://arxiv.org/abs/2105.03887">Wang et al. (2021)</a></td><td>Chinese Law</td><td>zh</td><td>Multi-class classification</td><td>6</td></tr>
  </table>

  #### Initial Data Collection and Normalization

- We standardize and put together four datasets: ECtHR (Chalkidis et al., 2021), SCOTUS (Spaeth et al., 2020), FSCS (Niklaus et al., 2021), and CAIL (Xiao et al., 2018; Wang et al., 2021) that are already publicly available.
+ We standardize and put together four datasets: ECtHR (Chalkidis et al., 2021), SCOTUS (Spaeth et al., 2020), FSCS (Niklaus et al., 2021), and CAIL (Xiao et al., 2018; Wang et al., 2021) that are already publicly available.

  The benchmark is not a blind stapling of pre-existing resources, we augment previous datasets. In the case of ECtHR, previously unavailable demographic attributes have been released to make the original dataset amenable for fairness research. For SCOTUS, two resources (court opinions with SCDB) have been combined for the very same reason, while the authors provide a manual categorization (clustering) of respondents.
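
For readers who want to inspect one of the sub-datasets listed in the table, a minimal loading sketch with the Hugging Face `datasets` library is shown below. The repository id ("coastalcph/fairlex") and the config name ("ecthr") are assumptions, not values taken from this diff; check the dataset card for the exact identifiers of the ECtHR, SCOTUS, FSCS, and CAIL configurations, and note that script-based datasets may additionally require `trust_remote_code=True` on recent `datasets` versions.

```python
from datasets import load_dataset

# Sketch only: the repo id "coastalcph/fairlex" and config "ecthr" are
# assumed names; substitute the identifiers given on the dataset card.
train_split = load_dataset("coastalcph/fairlex", "ecthr", split="train")

print(train_split)            # number of rows and column names
print(train_split[0].keys())  # fields of a single 'train' example
```

The same pattern applies to the other configurations; only the config name and the label space (e.g. multi-label for ECtHR vs. multi-class for SCOTUS and CAIL) differ.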