## Lawformer
### Introduction

This repository provides the source code and checkpoints for the paper "Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents". You can download the checkpoint from the [Hugging Face model hub](https://huggingface.co/xcjthu/Lawformer) or from [here](https://data.thunlp.org/legal/Lawformer.zip).
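If you download the zip instead, the checkpoint can also be loaded from a local directory. A minimal sketch, assuming the archive unpacks to a Hugging Face-format model directory (the `./Lawformer` path below is illustrative):

```python
>>> from transformers import AutoModel, AutoTokenizer
>>> # Path is illustrative: point it at the directory unpacked from Lawformer.zip
>>> model = AutoModel.from_pretrained("./Lawformer")
>>> # The tokenizer is loaded from the hub, as in the Easy Start example below
>>> tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
```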
### Easy Start

We have uploaded our model to the Hugging Face model hub. Make sure you have installed `transformers` (e.g., `pip install transformers`), then load the tokenizer and model as follows:
```python
>>> from transformers import AutoModel, AutoTokenizer
>>> # The tokenizer is the one used by chinese-roberta-wwm-ext; the weights are Lawformer's
>>> tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
>>> model = AutoModel.from_pretrained("xcjthu/Lawformer")
>>> # Example input: "Ren X filed a lawsuit requesting a judgment to dissolve the marriage and divide the couple's joint property."
>>> inputs = tokenizer("任某提起诉讼,请求判令解除婚姻关系并对夫妻共同财产进行分割。", return_tensors="pt")
>>> outputs = model(**inputs)
```
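Lawformer targets long legal documents, so real inputs will usually be much longer than the sentence above. Below is a minimal sketch of encoding a longer text with the same `tokenizer` and `model`; the 4096-token limit is an assumption based on Longformer-style models, so check the checkpoint's configuration for the actual maximum input length.

```python
>>> # Placeholder long document; in practice this would be the full text of a case
>>> long_doc = "任某提起诉讼,请求判令解除婚姻关系并对夫妻共同财产进行分割。" * 100
>>> # Truncate to the model's window; 4096 is an assumed limit, not taken from the paper
>>> inputs = tokenizer(long_doc, return_tensors="pt", truncation=True, max_length=4096)
>>> outputs = model(**inputs)
>>> outputs.last_hidden_state.shape  # (batch_size, sequence_length, hidden_size)
```

If the checkpoint loads as a Longformer-style architecture, the forward pass also accepts a `global_attention_mask` for marking tokens (such as `[CLS]`) that should attend globally; see the model card for details.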
### Cite

If you use the pre-trained models, please cite this paper:
```
@article{xiao2021lawformer,
  title={Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents},
  author={Xiao, Chaojun and Hu, Xueyu and Liu, Zhiyuan and Tu, Cunchao and Sun, Maosong},
  year={2021}
}
```