Update app.py
app.py
CHANGED
@@ -55,7 +55,7 @@ with gr.Blocks() as demo:
     gr.Markdown("""
     <h3>Abstract</h3>
     <p>
-    With the development of large-scale language model technology,
+    With the development of large-scale language model technology, fine-tuning pre-trained large-scale language models to solve downstream natural language processing tasks has become a mainstream paradigm. However, training a language model in the legal domain requires a large number of legal documents so that the model can learn legal terms and the particular format of legal documents, and it therefore usually relies on large manually annotated data sets for training. In the legal domain, obtaining large manually labeled data sets is difficult in practice, which limits the application of traditional NLP methods to drafting legal documents. The experimental results of this paper show that fine-tuning a large pre-trained language model on a local computer with a large number of annotation-free legal documents is feasible: it not only significantly improves the performance of the fine-tuned model on the legal document drafting task but also provides a basis for automatic legal document drafting. Moreover, it offers new ideas and approaches for this task while protecting information privacy and reducing information security risks.
     </p>
     <h3>摘要</h3>
     <p>
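For context, the hunk header shows that this Markdown call sits inside a `with gr.Blocks() as demo:` block. Below is a minimal sketch of how the updated abstract section might be embedded in app.py; the title line, the abridged abstract text, and the launch call are assumptions, and only the lines shown in the diff come from the Space itself.

```python
import gradio as gr

with gr.Blocks() as demo:
    # Assumed title; the diff only shows the abstract section around line 55.
    gr.Markdown("## Legal Document Drafting with Fine-Tuned Language Models")
    # The abstract is rendered as HTML inside a Markdown component, as in the diff.
    gr.Markdown("""
    <h3>Abstract</h3>
    <p>
    With the development of large-scale language model technology, fine-tuning
    pre-trained large-scale language models to solve downstream natural language
    processing tasks has become a mainstream paradigm. ...
    </p>
    <h3>摘要</h3>
    <p>
    ...
    </p>
    """)

if __name__ == "__main__":
    demo.launch()
```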
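The abstract's central claim, fine-tuning a pre-trained language model locally on annotation-free legal documents, corresponds to ordinary causal language model training on unlabeled text. The sketch below illustrates that setup under assumed choices (Hugging Face transformers and datasets, a generic gpt2 checkpoint, a hypothetical legal_documents.txt corpus, toy hyperparameters); the paper's actual model and training configuration are not given in this commit.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Assumption: any local causal LM checkpoint could be used here.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Plain-text legal documents; no labels are needed for the causal LM objective.
raw = load_dataset("text", data_files={"train": "legal_documents.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="legal-lm",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=tokenized["train"],
    # mlm=False makes the collator build next-token-prediction labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```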