---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Model Card for ContributionSentClassification-scibert

Performs sentence classification on sentences from research papers, predicting whether a given sentence is a contribution sentence or not.

## Model Details

### Model Description

- **Model type:** text-classification
- **Language(s) (NLP):** EN
- **Finetuned from model:** allenai/scibert_scivocab_uncased

### How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import pipeline
from transformers import BertTokenizer, BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("Goutham-Vignesh/ContributionSentClassification-scibert")
tokenizer = BertTokenizer.from_pretrained("Goutham-Vignesh/ContributionSentClassification-scibert")

text_classification = pipeline("text-classification", model=model, tokenizer=tokenizer)
```
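
As a quick check, the pipeline can be applied to a single sentence. This is only a sketch: the example sentence is made up, and the returned label names (shown here as the default `LABEL_0`/`LABEL_1`) depend on the checkpoint's configuration and may differ.

```python
# Hypothetical example sentence; replace with a sentence from a real paper.
sentence = "We propose a novel attention mechanism for long-document summarization."

# Run the classifier on the sentence.
result = text_classification(sentence)

# Output format (label names and score are illustrative, not guaranteed):
# [{'label': 'LABEL_1', 'score': 0.97}]
print(result)
```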