---
license: cc-by-nc-nd-4.0
pipeline_tag: text-classification
tags:
- deep learning
- law article retrieval
- natural language processing
- BERT
- information retrieval
- legal ai
- italian civil code
language:
- it
library_name: transformers
widget:
- text: Quando si apre la successione?
datasets:
- AndreaSimeri/Italian_Civil_Code
---

*DISCLAIMER: This model is trained on a subset of the dataset; specifically, on the first 60 articles of Book 2 of the Italian Civil Code.*

### Abstract
Modeling law search and retrieval as prediction problems has recently emerged as a predominant approach in law intelligence. Focusing on the law article retrieval task, we present a deep learning framework named LamBERTa, which is designed for civil-law codes, and specifically trained on the Italian civil code. To our knowledge, this is the first study proposing an advanced approach to law article prediction for the Italian legal system based on a BERT (Bidirectional Encoder Representations from Transformers) learning framework, which has recently attracted increased attention among deep learning approaches, showing outstanding effectiveness in several natural language processing and learning tasks. We define LamBERTa models by fine-tuning an Italian pre-trained BERT on the Italian civil code or its portions, for law article retrieval as a classification task. One key aspect of our LamBERTa framework is that we conceived it to address an extreme classification scenario, which is characterized by a high number of classes, the few-shot learning problem, and the lack of test query benchmarks for Italian legal prediction tasks. To solve such issues, we define different methods for the unsupervised labeling of the law articles, which can in principle be applied to any law article code system. We provide insights into the explainability and interpretability of our LamBERTa models, and we present an extensive experimental analysis over query sets of different types, for single-label as well as multi-label evaluation tasks. Empirical evidence has shown the effectiveness of LamBERTa, and also its superiority against widely used deep-learning text classifiers and a few-shot learner conceived for an attribute-aware prediction task.

### LamBERTa: A Deep Learning Framework for Law Article Retrieval

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62867cb4504d3770030ae173/Bn1qvPxZVLmM7tdyCYWzi.webp)
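
### Usage

Since the model is published as a standard `transformers` text-classification checkpoint, it can in principle be queried through the `pipeline` API. The snippet below is a minimal sketch: the repository id `AndreaSimeri/LamBERTa` is an assumption used for illustration, so replace it with this model's actual id on the Hub. The example query is the one from the widget above ("When does succession open?").

```python
from transformers import pipeline

# Hypothetical repository id -- replace with this model's actual Hub id.
MODEL_ID = "AndreaSimeri/LamBERTa"


def retrieve_articles(query: str, top_k: int = 5):
    """Rank the civil-code article classes for an Italian-language query.

    Each prediction is a dict with a 'label' (the article class) and a
    'score' (the classifier's confidence).
    """
    classifier = pipeline("text-classification", model=MODEL_ID, top_k=top_k)
    return classifier(query)


if __name__ == "__main__":
    for pred in retrieve_articles("Quando si apre la successione?"):
        print(pred["label"], round(pred["score"], 4))
```

Because the task is framed as extreme classification over article classes, inspecting the `top_k` ranked labels (rather than only the argmax) matches the retrieval-style evaluation described in the abstract.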

### BibTeX Entry and Citation Info
```bibtex
@article{Lamberta,
  author  = {Andrea Tagarelli and Andrea Simeri},
  title   = {{Unsupervised law article mining based on deep pre-trained language representation models with application to the Italian civil code}},
  journal = {Artif. Intell. Law},
  volume  = {30},
  number  = {3},
  pages   = {417--473},
  year    = {2022},
  doi     = {10.1007/s10506-021-09301-8}
}

```

### References
- Tagarelli, A., Simeri, A. Unsupervised law article mining based on deep pre-trained language representation models with application to the Italian civil code. Artif Intell Law 30, 417–473 (2022). https://doi.org/10.1007/s10506-021-09301-8

---