---
language: en
tags:
- aspect-term-sentiment-analysis
- pytorch
- ATSA
datasets:
- semeval2014
widget:
- text: "[CLS] The appearance is very nice, but the battery life is poor. [SEP] appearance [SEP] "
---

# Note

`Aspect term sentiment analysis`

A BERT-LSTM baseline built on the *BERT LSTM* implementation from https://github.com/avinashsai/BERT-Aspect. The model is trained on the SemEval-2014 Task 4 laptop and restaurant datasets.

Our GitHub repo: https://github.com/tezignlab/BERT-LSTM-based-ABSA

Code for the paper "Utilizing BERT Intermediate Layers for Aspect Based Sentiment Analysis and Natural Language Inference" (https://arxiv.org/pdf/2002.04815.pdf).
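
In the paper's approach, the [CLS] hidden state from each of BERT's intermediate encoder layers is fed to an LSTM, and the LSTM's final state is used for sentiment classification. The sketch below is a minimal illustration of that idea; the class name, hidden sizes, and defaults are assumptions made for this example, not the actual module shipped with this model (which is loaded via `trust_remote_code=True` in the Usage section).

```python
import torch
import torch.nn as nn
from transformers import AutoModel


class BertLSTMSketch(nn.Module):
    """Illustrative sketch: an LSTM over the per-layer [CLS] states of BERT."""

    def __init__(self, pretrained="bert-base-uncased", num_labels=3, lstm_hidden=256):
        super().__init__()
        # output_hidden_states=True exposes the embedding output plus every encoder layer
        self.bert = AutoModel.from_pretrained(pretrained, output_hidden_states=True)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden, batch_first=True)
        self.classifier = nn.Linear(lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask=None, token_type_ids=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
        # hidden_states: tuple of (num_layers + 1) tensors, each (batch, seq_len, hidden).
        # Take the [CLS] vector (position 0) from every encoder layer and treat the
        # stack of layers as a sequence for the LSTM.
        cls_per_layer = torch.stack([h[:, 0] for h in outputs.hidden_states[1:]], dim=1)
        _, (last_state, _) = self.lstm(cls_per_layer)
        # last_state: (1, batch, lstm_hidden) -> sentiment logits
        return self.classifier(last_state.squeeze(0))
```

For inference you do not need to build this yourself; the released checkpoint is loaded together with its own modeling code via `trust_remote_code=True`, as shown under Usage.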
# Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

MODEL = "tezign/BERT-LSTM-based-ABSA"

tokenizer = AutoTokenizer.from_pretrained(MODEL)

# trust_remote_code=True is required because this checkpoint loads its custom
# BERT-LSTM model class from the model repository
model = AutoModelForSequenceClassification.from_pretrained(MODEL, trust_remote_code=True)

classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)

# each input pairs the full sentence ("text") with one aspect term ("text_pair")
result = classifier([
    {"text": "The appearance is very nice, but the battery life is poor", "text_pair": "appearance"},
    {"text": "The appearance is very nice, but the battery life is poor", "text_pair": "battery"}
], function_to_apply="softmax")

print(result)

"""
expected output
>> [{'label': 'positive', 'score': 0.9129462838172913}, {'label': 'negative', 'score': 0.8834680914878845}]
"""
```
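
The pipeline call above returns only the highest-scoring label for each input. If you also want the scores of the remaining sentiment labels, passing `top_k=None` should work on recent `transformers` releases; this parameter belongs to the generic `TextClassificationPipeline` API and is an assumption here, not something this model card documents.

```python
# Continues from the snippet above; `top_k=None` asks the pipeline to return the
# score of every label (supported in recent transformers versions).
all_scores = classifier(
    [{"text": "The appearance is very nice, but the battery life is poor", "text_pair": "battery"}],
    function_to_apply="softmax",
    top_k=None,
)
print(all_scores)  # one list of {'label': ..., 'score': ...} dicts per input
```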