---
license: apache-2.0
datasets:
- jfleg
widget:
  - text: "fix grammar: I am work with machine to write gooder english."
    example_title: example
---

This is my first model for grammar error correction. It is fine-tuned on the JFLEG dataset and built on `t5-base`. It was trained for only 3 epochs, so the output isn't that great.
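
For reference, here is a rough sketch of how the JFLEG data can be prepared for this kind of fine-tuning. This is not the exact training code; the use of the `validation` split and the first reference correction as the target are assumptions.

```
from datasets import load_dataset
from transformers import AutoTokenizer

# Sketch only, not the original training script. JFLEG ships with
# validation/test splits and several reference corrections per sentence;
# here we assume the validation split and take the first correction.
tokenizer = AutoTokenizer.from_pretrained('t5-base')
dataset = load_dataset('jfleg', split='validation')

def preprocess(example):
    # Prefix the source sentence with the same "fix grammar: " task prefix
    # used in the widget above, and tokenize source and target.
    model_inputs = tokenizer('fix grammar: ' + example['sentence'], truncation=True)
    labels = tokenizer(example['corrections'][0], truncation=True)
    model_inputs['labels'] = labels['input_ids']
    return model_inputs

tokenized = dataset.map(preprocess, remove_columns=dataset.column_names)
```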

## Usage

You can use this model with the standard `transformers` library. It should be small enough to run on a CPU.

```
$ pip install transformers torch sentencepiece
```


Once you have the dependencies set up, you should be able to run the model.


```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = 'vagmi/grammar-t5'

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The model expects the "fix grammar: " task prefix before the input sentence
text = 'fix grammar: I am work with machine to write gooder english.'
inputs = tokenizer(text, return_tensors='pt')

# Beam search with 2 beams; stop early once all beams are finished
outputs = model.generate(inputs['input_ids'], num_beams=2, max_length=512, early_stopping=True)
fixed = tokenizer.decode(outputs[0], skip_special_tokens=True)
# I am working with machine to write better english.
```
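
Alternatively, here is a quick sketch using the high-level `pipeline` API; this is an assumed usage, not part of the original instructions, but it wraps the same tokenize, generate, and decode steps shown above.

```
from transformers import pipeline

# Sketch: the text2text-generation pipeline loads the model and tokenizer
# and forwards generation arguments such as num_beams to model.generate
fix_grammar = pipeline('text2text-generation', model='vagmi/grammar-t5')
result = fix_grammar('fix grammar: I am work with machine to write gooder english.',
                     num_beams=2, max_length=512)
print(result[0]['generated_text'])
```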