We further train this model on our data for 300K steps on the Masked Language Modeling task.

### Model Overview

This model has the same configuration as the [bert-base-uncased model](https://huggingface.co/bert-base-uncased):
12 hidden layers, 768 hidden dimensionality, 12 attention heads, ~110M parameters.
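The configuration above can be verified programmatically. This is a minimal sketch using the standard `transformers` AutoClass API, assuming the model is published on the Hub under the id `law-ai/InLegalBERT`:

```python
from transformers import AutoConfig

# Fetch the model configuration from the Hugging Face Hub
config = AutoConfig.from_pretrained("law-ai/InLegalBERT")

print(config.num_hidden_layers)    # 12
print(config.hidden_size)          # 768
print(config.num_attention_heads)  # 12
```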

### Usage

Using the tokenizer (same as [LegalBERT](https://huggingface.co/nlpaueb/legal-bert-base-uncased)):
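A minimal loading sketch with the `transformers` AutoClass API (the model id `law-ai/InLegalBERT` is from this repository; the sample sentence is purely illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and the pre-trained model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("law-ai/InLegalBERT")
model = AutoModel.from_pretrained("law-ai/InLegalBERT")

# Encode an illustrative sentence and obtain contextual embeddings
text = "The appellant filed a writ petition before the High Court."
encoded = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded)

# Hidden states have dimensionality 768, as noted in the Model Overview
print(output.last_hidden_state.shape)
```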
Ghosh, Saptarshi",
eprinttype = {arXiv}
}
```

### About Us

We are a group of researchers from the Department of Computer Science and Technology, Indian Institute of Technology, Kharagpur.
Our research interests are primarily ML and NLP applications for the legal domain, with a special focus on the challenges and opportunities of the Indian legal scenario.
We have worked on, and are currently working on, several legal tasks such as:
* named entity recognition, summarization of legal documents
* semantic segmentation of legal documents
* legal statute identification from facts, court judgment prediction
* legal document matching

You can find our publicly available code and datasets [here](https://github.com/Law-AI).