Update README.md
README.md
CHANGED
@@ -22,5 +22,10 @@ The raw text corpus size is around 27 GB.
This model is initialized with the [LEGAL-BERT-SC model](https://huggingface.co/nlpaueb/legal-bert-base-uncased) from the paper [LEGAL-BERT: The Muppets straight out of Law School](https://aclanthology.org/2020.findings-emnlp.261/) and trained for an additional 300K steps on our data with the MLM and NSP objectives.

### Usage
+```python
+from transformers import AutoTokenizer, AutoModel, BertForPreTraining
+tokenizer = AutoTokenizer.from_pretrained("nlpaueb/legal-bert-base-uncased")
+model = AutoModel.from_pretrained("nlpaueb/legal-bert-base-uncased")
+```
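Because the checkpoint was further pre-trained with the MLM and NSP objectives, the `BertForPreTraining` class imported above can also load it together with both pre-training heads. A minimal sketch, assuming the same checkpoint id as in the snippet above (the example sentence is illustrative, not from the card):

```python
import torch
from transformers import AutoTokenizer, BertForPreTraining

# Load the tokenizer and the model with its MLM and NSP heads attached.
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/legal-bert-base-uncased")
model = BertForPreTraining.from_pretrained("nlpaueb/legal-bert-base-uncased")

# Score a masked legal sentence.
inputs = tokenizer("The court granted the motion for summary [MASK].", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.prediction_logits.shape)        # MLM logits over the vocabulary
print(outputs.seq_relationship_logits.shape)  # NSP is-next-sentence logits
```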
### Citation