---
license: apache-2.0
datasets:
- jfleg
widget:
- text: "fix grammar: I am work with machine to write gooder english."
  example_title: example
---

This is my first model for grammar error correction. It is fine-tuned from `t5-base` on the JFLEG dataset. It was trained for only 3 epochs, so the output isn't that great yet.

## Usage

You can use this model with the standard transformers library. The model should be small enough to run on a CPU.

```sh
$ pip install transformers torch sentencepiece
```

Once you have the dependencies set up, you should be able to run this model:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = 'vagmi/grammar-t5'
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prompts use the "fix grammar: " prefix, as in the widget example above.
text = 'fix grammar: I am work with machine to write gooder english.'
inputs = tokenizer(text, return_tensors='pt')

outputs = model.generate(inputs['input_ids'], num_beams=2, max_length=512, early_stopping=True)
fixed = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(fixed)
# I am working with machine to write better english.
```
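
If you just want a quick one-liner, the generic `text2text-generation` pipeline from transformers should also work with this checkpoint. This is a minimal sketch (the card above only shows the AutoModel/AutoTokenizer flow), with the same generation settings assumed:

```python
from transformers import pipeline

# Sketch: wrap the checkpoint in the generic text2text-generation pipeline.
# Generation kwargs (num_beams, max_length) are forwarded to model.generate().
corrector = pipeline('text2text-generation', model='vagmi/grammar-t5')

result = corrector(
    'fix grammar: I am work with machine to write gooder english.',
    num_beams=2,
    max_length=512,
)
print(result[0]['generated_text'])
```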