---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
Performs sentence classification to determine whether a given sentence from a research paper is a contribution sentence or not.
## Model Details
### Model Description
- **Model type:** text-classification
- **Language(s) (NLP):** EN
- **Finetuned from model:** allenai/scibert_scivocab_uncased
### How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import BertForSequenceClassification, BertTokenizer, pipeline

# Load the fine-tuned SciBERT model and its tokenizer
model = BertForSequenceClassification.from_pretrained("Goutham-Vignesh/ContributionSentClassification-scibert")
tokenizer = BertTokenizer.from_pretrained("Goutham-Vignesh/ContributionSentClassification-scibert")

# Build a text-classification pipeline for contribution-sentence detection
text_classification = pipeline("text-classification", model=model, tokenizer=tokenizer)
```
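Once the pipeline is built, individual sentences can be classified directly. The sketch below is a minimal usage example; the example sentences are illustrative, and the label names printed are whatever this model's `id2label` mapping defines (often the pipeline defaults `LABEL_0` / `LABEL_1`), so check `model.config.id2label` for the actual meaning of each label.

```python
# Illustrative sentences, not from any dataset shipped with the model
sentences = [
    "We propose a novel attention mechanism for document-level classification.",
    "Previous work has studied citation networks in various domains.",
]

for sentence in sentences:
    # The pipeline returns a list with one dict per input: {'label': ..., 'score': ...}
    result = text_classification(sentence)[0]
    print(f"{result['label']} ({result['score']:.3f}): {sentence}")
```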