---
license: mit
language:
- en
metrics:
- bleu
- rouge
- meteor
pipeline_tag: text2text-generation
widget:
- text: >-
    name: Bug report\nabout: Create a report to help us improve\ntitle: <|EMPTY|>\nlabels: <|EMPTY|>\nassignees: <|EMPTY|>\nheadlines_type: <|MASK|>\nheadlines: <|MASK|>\nsummary: This issue report aims to describe a bug encountered while using the software. It includes a clear and concise description of the issue, steps to reproduce the behavior, expected behavior, screenshots (if applicable), and relevant versions of the operating system, IIS, Django, and Python. Additional context may also be provided to provide further details about the problem.
  example_title: Example 1
datasets:
- nafisehNik/GIRT-Instruct
---

# GIRT-Model

paper: https://arxiv.org/abs/2402.02632

demo: https://huggingface.co/spaces/nafisehNik/girt-space

This model generates issue report templates from an input instruction. It has been fine-tuned on the [GIRT-Instruct](https://huggingface.co/datasets/nafisehNik/GIRT-Instruct) dataset.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# load model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained('nafisehNik/girt-t5-base')
tokenizer = AutoTokenizer.from_pretrained('nafisehNik/girt-t5-base')

# method for computing issue report template generation
def compute(sample, top_p, top_k, do_sample, max_length, min_length):
    inputs = tokenizer(sample, return_tensors="pt").to('cpu')
    outputs = model.generate(
        **inputs,
        min_length=min_length,
        max_length=max_length,
        do_sample=do_sample,
        top_p=top_p,
        top_k=top_k).to('cpu')

    generated_texts = tokenizer.batch_decode(outputs, skip_special_tokens=False)
    generated_text = generated_texts[0]

    # strip special tokens left in the output (decoded with
    # skip_special_tokens=False) and tidy whitespace
    replace_dict = {
        '\n ': '\n',
        '<pad> ': '',
        '<pad>': '',
        '</s>': '',
        '<unk>': '',
    }
    for old, new in replace_dict.items():
        generated_text = generated_text.replace(old, new)

    return generated_text
```
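The post-processing in `compute` is plain string replacement over the decoded output. A minimal standalone sketch of that cleanup step (the sample string and the exact token keys here are illustrative, not the model's guaranteed output):

```python
# Illustrative cleanup of T5-style special tokens left in decoded text,
# mirroring the replace_dict approach in the usage snippet above.
replace_dict = {
    '<pad> ': '',
    '<pad>': '',
    '</s>': '',
}

def clean(text: str) -> str:
    # apply each replacement in order to strip leftover special tokens
    for old, new in replace_dict.items():
        text = text.replace(old, new)
    return text

decoded = '<pad> name: Bug report\nabout: Create a report</s>'
print(clean(decoded))  # -> 'name: Bug report\nabout: Create a report'
```

Because decoding uses `skip_special_tokens=False`, this manual pass keeps the template's own markup intact while removing only the tokenizer's padding and end-of-sequence markers.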