---
license: mit
datasets:
- cmu_hinglish_dog
language:
- en
library_name: keras
pipeline_tag: translation
tags:
- hinglish
- translation
- hinglish to english
- language translation
- keras
- keras nlp
- nlp
- transformers
- gemma
- gemma2b
---
# Project Hinglish - A Hinglish to English Language Translator

Project Hinglish aims to develop a high-performance language translation model capable of translating Hinglish (a blend of Hindi and English commonly used in informal communication in India) to standard English.
The model is fine-tuned from gemma-2b using the PEFT (LoRA) method with rank 128. It is designed to handle the unique syntactic and lexical characteristics of Hinglish.

# Fine-Tuning Method

- **Fine-Tuning Approach Using PEFT (LoRA):** Fine-tuning employs Parameter-Efficient Fine-Tuning (PEFT), specifically LoRA (Low-Rank Adaptation). LoRA adapts a pre-trained model efficiently by introducing low-rank matrices into the model's attention and feed-forward layers. This allows significant adaptation with minimal parameter updates, preserving the original model's strengths while adapting it to the nuances of Hinglish.
- **Dataset:** cmu_hinglish_dog, combined with sentences taken from my own daily-life chats with friends and Uber messages.

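The rank chosen above determines how many parameters LoRA actually trains. As a rough illustration (the 2048×2048 layer size below is a made-up example, not gemma-2b's actual projection shape), a rank-128 adapter on a single square weight matrix trains only a small fraction of a full dense update:

```python
def lora_params(d_out: int, d_in: int, r: int) -> int:
    # LoRA trains B (d_out x r) and A (r x d_in) instead of the full
    # d_out x d_in update, so the trainable count is r * (d_out + d_in).
    return r * (d_out + d_in)

full_update = 2048 * 2048                     # dense update for a 2048x2048 matrix
lora_update = lora_params(2048, 2048, 128)    # rank-128 adapter for the same matrix
print(lora_update, lora_update / full_update) # 524288 0.125
```

At rank 128 the adapter is 12.5% of the size of a dense update for that matrix; lower ranks shrink it further at the cost of adaptation capacity.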
# Example Output

![Example IO](io1.png)

# Usage

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rudrashah/RLM-hinglish-translator")
model = AutoModelForCausalLM.from_pretrained("rudrashah/RLM-hinglish-translator")
```
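Since the checkpoint is a causal language model, translation is driven by prompting. The template below is an illustrative assumption, not the confirmed training format (check the model card for the exact prompt used during fine-tuning); the commented lines sketch how the prompt would feed into the tokenizer and model loaded above:

```python
# Hypothetical prompt template -- the actual fine-tuning format may differ.
TEMPLATE = "Hinglish:\n{hinglish}\n\nEnglish:\n"

def build_prompt(hinglish: str) -> str:
    # Wrap a Hinglish sentence in the assumed translation prompt.
    return TEMPLATE.format(hinglish=hinglish)

prompt = build_prompt("mujhe kal office jaana hai")
# inputs = tokenizer(prompt, return_tensors="pt")
# output = model.generate(**inputs, max_new_tokens=64)
# print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Keeping the inference prompt identical to the one used during fine-tuning matters for causal-LM translators: the model learned to continue that exact pattern, so a mismatched template degrades output quality.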
|