---
datasets:
- google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction
- common-crawl

language:
- en

license: mit

widget:
- text: Patient has a I have a blinding headache
  example_title: Medical Example
- text: Do you even know why I always need changed our checking account number.
  example_title: Banking Example
- text: Ironman and Captain America is going out.
  example_title: General Example 1
- text: We all eat fish and then made dessert.
  example_title: General Example 2
- text: We have our Dinner yesterday.
  example_title: General Example 3

inference:
  parameters:
    max_length: 256
    num_beams: 3
    no_repeat_ngram_size: 2
    repetition_penalty: 2.5
    temperature: 0.7
    do_sample: true

tags:
- context-correction
- error-correction
---

# T5 Context Corrector (base-sized)
This model is a [T5 model](https://huggingface.co/t5-base) fine-tuned on the [Synthetic GEC](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction) dataset and on filtered English CommonCrawl data.
The base model (T5) is pre-trained on the C4 (Colossal Clean Crawled Corpus) dataset and works well across numerous downstream tasks. <br>
Our model is fine-tuned on a single downstream task: context correction using the two datasets mentioned above.

## Model description
This model has the same architecture as its base model: 220 million parameters across 12 encoder blocks and 12 decoder blocks, with an input embedding (vocabulary) size of 32,128. Please refer to the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) for more details on the model.
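
If you want to verify these numbers yourself, they are exposed on the loaded model's config; a minimal sketch:

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("DeathReaper0965/t5-context-corrector")

print(model.config.num_layers)          # 12 encoder blocks
print(model.config.num_decoder_layers)  # 12 decoder blocks
print(model.config.vocab_size)          # 32128 input embedding size

# Total parameter count (roughly 220M)
print(sum(p.numel() for p in model.parameters()))
```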

## Intended Use & Limitations
As the model is intended to correct the context of a given sentence, all you have to do is pass in the contextually incorrect sentence and get the corrected response back.<br>
Based on multiple experiments performed during training, we observe that the model works best when the total number of tokens in the input is less than 256.<br>
So, if you have a long paragraph that needs to be context-corrected, we suggest first splitting it into sentences and running the context corrector on each sentence separately to obtain the best results (see the token-count sketch below and the `split_and_correct_context` utility in the Usage section).
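
To check whether an input fits within that token budget before running inference, you can count tokens with the model's own tokenizer. A minimal sketch:

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("DeathReaper0965/t5-context-corrector")

text = "Do you even know why I always need changed our checking account number."

# Number of tokens the model will actually see for this input
n_tokens = len(tokenizer(text).input_ids)

# Inputs at or above 256 tokens should be split into sentences first
print(n_tokens, n_tokens < 256)
```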

Note that the model is primarily trained on general, publicly available corpora, so it may not work well for medical contexts.

## Usage

You can use this model directly with a pipeline for text-to-text generation:

```python
from transformers import pipeline


ctx_corr = pipeline("text2text-generation", model='DeathReaper0965/t5-context-corrector')
ctx_corr("Do you even know why I always need changed our checking account number")

###########OUTPUT###########
# [{'generated_text': 'Do you even know why I always need to change our checking account number?'}]
```
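
The pipeline runs with the model's default generation settings; you can also pass the generation parameters from this card's metadata explicitly. A sketch using one of the widget examples above (sampling is enabled, so the exact output may vary across runs):

```python
ctx_corr("We have our Dinner yesterday.",
         max_length=256,
         num_beams=3,
         no_repeat_ngram_size=2,
         repetition_penalty=2.5,
         temperature=0.7,
         do_sample=True)
```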

Or you can load the model and tokenizer directly, which is useful when correcting multi-sentence paragraphs:

```python
import nltk
from nltk import sent_tokenize

from transformers import T5ForConditionalGeneration, T5Tokenizer

# sent_tokenize relies on NLTK's punkt sentence splitter
nltk.download("punkt", quiet=True)


# Load model and tokenizer
cc_tokenizer = T5Tokenizer.from_pretrained("DeathReaper0965/t5-context-corrector")
cc_model = T5ForConditionalGeneration.from_pretrained("DeathReaper0965/t5-context-corrector")

# Utility function to correct context
def correct_context(input_text, temperature=0.5):
    # tokenize
    batch = cc_tokenizer(input_text,
                         truncation=True,
                         padding='max_length',
                         max_length=256, 
                         return_tensors="pt")

    # forward pass
    results = cc_model.generate(**batch,
                                max_length=256,
                                num_beams=3,
                                no_repeat_ngram_size=2,
                                repetition_penalty=2.5,
                                temperature=temperature,
                                do_sample=True)
    
    return results

# Utility function to split a paragraph into sentences, correct each, and rejoin
def split_and_correct_context(sent):
    sents = sent_tokenize(sent)
    
    final_sents = cc_tokenizer.batch_decode(correct_context(sents), 
                                            clean_up_tokenization_spaces=True, 
                                            skip_special_tokens=True)
    
    final_sents = " ".join(s.strip() for s in final_sents)
    
    return final_sents


split_and_correct_context("Do you even know why I always need changed our checking account number. Because of the securty purpos.")

###########OUTPUT###########
# 'Do you even know why I always need to change our checking account number? Because of the security purpose.'
```
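
If a GPU is available, you can move the model and inputs onto it for faster generation. A minimal sketch, assuming PyTorch with CUDA (the helper name `correct_context_on_device` is ours, not part of the model card):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
cc_model.to(device)

def correct_context_on_device(input_text, temperature=0.5):
    # Same tokenization as correct_context, but tensors placed on the model's device
    batch = cc_tokenizer(input_text,
                         truncation=True,
                         padding="max_length",
                         max_length=256,
                         return_tensors="pt").to(device)

    return cc_model.generate(**batch,
                             max_length=256,
                             num_beams=3,
                             no_repeat_ngram_size=2,
                             repetition_penalty=2.5,
                             temperature=temperature,
                             do_sample=True)
```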

> Designed and Developed with <span style="color: #e25555;">&hearts;</span> by [Praneet](https://deathreaper0965.github.io/) | [LinkedIn](http://linkedin.com/in/deathreaper0965) | [GitHub](https://github.com/DeathReaper0965/)