---
license: apache-2.0
language:
  - vi
metrics:
  - exact_match
  - f1
base_model:
  - google-bert/bert-base-multilingual-cased
pipeline_tag: question-answering
library_name: transformers
new_version: google-bert/bert-base-multilingual-cased
tags:
  - legal
---

# BERT-Law: Information Extraction Model for Legal Texts

## Model Description

BERT-Law is a fine-tuned version of BERT (Bidirectional Encoder Representations from Transformers) for information extraction from legal documents. It is trained on UTE_LAW, a custom dataset of approximately 30,000 pairs of legal questions and related documents. The main goal of the model is to extract relevant information from legal texts while avoiding the costs associated with third-party APIs.

## Key Features

- **Base model:** Built on google-bert/bert-base-multilingual-cased, a pre-trained multilingual BERT model.
- **Fine-tuning:** Fine-tuned on the UTE_LAW dataset to extract relevant information from legal texts.
- **Model type:** BERT-based model for extractive question answering (a loading sketch follows this list).
- **Task:** Optimized for information extraction from legal documents.
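
A minimal loading sketch with the transformers library (the repository id `quanghuy123/BERT-LAW` is assumed from this model page; adjust it if the checkpoint lives elsewhere):

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Assumed repository id, inferred from this model page.
MODEL_ID = "quanghuy123/BERT-LAW"

# Extractive QA head on top of multilingual BERT: the model predicts
# start/end logits for the answer span inside the supplied document.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_ID)
```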

## Model Specifications

- **Maximum sequence length:** 512 tokens (longer documents can be split with a sliding window; see the sketch after this list)
- **Output:** start- and end-position logits over the (up to 512-token) input, from which the answer span is extracted
- **Language:** Vietnamese legal texts
- **License:** Apache-2.0
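
Legal documents frequently exceed the 512-token window, so a common workaround is to tokenize with overlapping windows and run the model on each one. A minimal sketch (the repository id `quanghuy123/BERT-LAW` is assumed from this page; the question and document strings are illustrative):

```python
from transformers import AutoTokenizer

# Assumed repository id, inferred from this model page.
tokenizer = AutoTokenizer.from_pretrained("quanghuy123/BERT-LAW")

question = "Mức phạt tối đa cho hành vi này là bao nhiêu?"  # illustrative
context = "Điều 1. ..."  # a legal document, possibly far longer than 512 tokens

# Split the document into overlapping 512-token windows so no passage is lost.
inputs = tokenizer(
    question,
    context,
    max_length=512,
    truncation="only_second",      # truncate only the document, never the question
    stride=128,                    # tokens of overlap between consecutive windows
    return_overflowing_tokens=True,
    padding="max_length",
    return_tensors="pt",
)
print(inputs["input_ids"].shape)   # (num_windows, 512)
```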

## Usage

This model is suitable for applications in the legal domain, such as:

- **Legal document analysis:** extracting relevant information from legal texts.
- **Question answering:** answering legal questions from the content of legal documents (see the pipeline sketch below).
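
An end-to-end sketch with the transformers question-answering pipeline (the repository id `quanghuy123/BERT-LAW` is assumed from this page, and the question and context are illustrative):

```python
from transformers import pipeline

# Assumed repository id, inferred from this model page.
qa = pipeline("question-answering", model="quanghuy123/BERT-LAW")

result = qa(
    question="Người lao động được nghỉ phép năm bao nhiêu ngày?",  # illustrative
    context=(
        "Người lao động làm việc đủ 12 tháng cho một người sử dụng lao động "
        "thì được nghỉ hằng năm, hưởng nguyên lương 12 ngày làm việc."
    ),
)
print(result["answer"], result["score"])  # extracted span and its confidence
```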

By providing a locally deployable solution for legal document processing, the model aims to reduce reliance on third-party APIs, which can incur higher costs.