mwz committed on
Commit
f10a05f
1 Parent(s): eae4fe4

Update README.md

Files changed (1)
  1. README.md +53 -0
README.md CHANGED
@@ -1,3 +1,56 @@
---
license: mit
datasets:
- mwz/ur_para
language:
- ur
tags:
- paraphrase
---
# Urdu Paraphrasing Model

This repository contains a pretrained model for Urdu paraphrasing. The model is based on the BERT architecture and has been fine-tuned on a large dataset of Urdu paraphrases.

## Model Description

The pretrained model is based on the BERT architecture and is designed specifically for paraphrasing tasks in the Urdu language. It has been trained on a large corpus of Urdu text to generate high-quality paraphrases.

## Model Details

- Model Name: Urdu-Paraphrasing-BERT
- Base Model: BERT
- Architecture: Transformer
- Language: Urdu
- Dataset: Urdu Paraphrasing Dataset (mwz/ur_para)
## How to Use

You can use this pretrained model to generate paraphrases of Urdu text. Here's an example:
```python
from transformers import pipeline

# Load the model (replace "path_to_pretrained_model" with the actual model path or Hub ID)
model = pipeline("text2text-generation", model="path_to_pretrained_model")

# Generate paraphrases; beam search is needed so that several candidates can be returned
input_text = "Urdu input text for paraphrasing."
paraphrases = model(input_text, max_length=128, num_beams=3, num_return_sequences=3)

# Print the generated paraphrases
print("Original Input Text:", input_text)
print("Generated Paraphrases:")
for paraphrase in paraphrases:
    print(paraphrase["generated_text"])
```
## Training

The model was trained using the Hugging Face transformers library. The training process involved fine-tuning the base BERT model on the Urdu Paraphrasing Dataset.
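The exact training script is not included in this repository. The following is only a minimal sketch of such a fine-tuning setup, assuming a seq2seq-style base checkpoint and assuming the mwz/ur_para dataset exposes a train split with source/paraphrase sentence columns; the checkpoint name and column names below are placeholders, not the actual values used.

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

# Placeholder checkpoint; the actual base model used for this repository may differ.
base_checkpoint = "base-checkpoint-name"

dataset = load_dataset("mwz/ur_para")
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(base_checkpoint)

def preprocess(batch):
    # "sentence" and "paraphrase" are assumed column names for the paraphrase pairs.
    inputs = tokenizer(batch["sentence"], max_length=128, truncation=True)
    labels = tokenizer(text_target=batch["paraphrase"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="urdu-paraphrasing-bert",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```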
## Evaluation

The model's performance was evaluated on a separate validation set using metrics such as BLEU, ROUGE, and perplexity. Note that evaluation results may vary depending on the specific use case.
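No scores are reported here. As an illustration only, BLEU and ROUGE for a batch of generated paraphrases could be computed with the Hugging Face evaluate library roughly as follows; the prediction and reference strings below are placeholders, not outputs of this model.

```python
import evaluate

# Placeholder model outputs and gold paraphrases, for illustration only.
predictions = ["generated paraphrase one", "generated paraphrase two"]
references = [["reference paraphrase one"], ["reference paraphrase two"]]

bleu = evaluate.load("sacrebleu")
rouge = evaluate.load("rouge")

bleu_score = bleu.compute(predictions=predictions, references=references)["score"]
rouge_scores = rouge.compute(predictions=predictions, references=[r[0] for r in references])

print("BLEU:", bleu_score)
print("ROUGE:", rouge_scores)
```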
## Acknowledgments

- The pretrained model is based on the BERT architecture developed by Google Research.

## License

This model and the associated code are licensed under the MIT License.