# Lowerated/deberta-v3-lm6

## Model Details

- **Model Name:** Lowerated/deberta-v3-lm6
- **Model Type:** Text Classification (Aspect-Based Sentiment Analysis)
- **Language:** English
- **Framework:** PyTorch
- **License:** Apache 2.0
## Model Description

Lowerated/deberta-v3-lm6 is a DeBERTa-v3-based model fine-tuned for aspect-based sentiment analysis on IMDb movie reviews. The model is designed to classify sentiment across seven key aspects of filmmaking: Cinematography, Direction, Story, Characters, Production Design, Unique Concept, and Emotions.
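The model produces one continuous sentiment score per aspect, so a single forward pass yields a vector of seven values. The snippet below is a minimal sketch for checking this from the published checkpoint configuration; whether the `id2label` mapping stores the aspect names (rather than generic placeholders) is an assumption.

```python
from transformers import AutoConfig

# Inspect the checkpoint configuration (standard transformers attributes)
config = AutoConfig.from_pretrained("Lowerated/deberta-v3-lm6")

print(config.num_labels)  # expected: 7, one score per aspect
print(config.id2label)    # may hold the aspect names, or generic LABEL_i placeholders
```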
## Dataset

- **Dataset Name:** Lowerated/imdb-reviews-rated
- **Dataset URL:** IMDb Reviews Rated
- **Dataset Description:** The dataset contains IMDb movie reviews with sentiment scores for seven aspects of filmmaking. Each review is labeled with sentiment scores for Cinematography, Direction, Story, Characters, Production Design, Unique Concept, and Emotions.
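If you want to inspect the training data directly, the dataset can presumably be loaded with the `datasets` library under the identifier above; the split name and exact column layout in this sketch are assumptions.

```python
from datasets import load_dataset

# Load the aspect-rated IMDb reviews (identifier from the dataset card above)
dataset = load_dataset("Lowerated/imdb-reviews-rated")

# Inspect splits and columns; a "train" split with one sentiment-score
# column per aspect is assumed here, not confirmed by this card
print(dataset)
print(dataset["train"][0])
```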
## Usage for Rating a Movie

Install `lowerated`:

```bash
pip install lowerated
```
Now, you can use it like this:
```python
from lowerated.rate.entity import Entity

# Example usage
if __name__ == "__main__":
    some_movie_reviews = [
        "bad movie!", "worse than other movies.", "bad.",
        "best movie", "very good movie", "the cinematography was insane",
        "story was so beautiful", "the emotional element was missing but cinematography was great",
        "didn't feel a thing watching this",
        "oooof, eliot and jessie were so good. the casting was the best",
        "yo who designed the set, that was really good",
        "such stories are rare to find"
    ]

    # Create the entity object (loads the whole pipeline)
    # Aspects: 'Cinematography', 'Direction', 'Story', 'Characters',
    # 'Production Design', 'Unique Concept', 'Emotions'
    entity = Entity(name="Movie")
    rating = entity.rate(reviews=some_movie_reviews)

    print("LM6: ", rating["LM6"])
```
## Using the Model Directly
```python
import torch
from transformers import DebertaV2ForSequenceClassification, DebertaV2Tokenizer

# Load the fine-tuned model and tokenizer
model = DebertaV2ForSequenceClassification.from_pretrained('Lowerated/deberta-v3-lm6')
tokenizer = DebertaV2Tokenizer.from_pretrained('Lowerated/deberta-v3-lm6')

# Ensure the model is in evaluation mode
model.eval()

# Define the label mapping
label_columns = ['Cinematography', 'Direction', 'Story', 'Characters',
                 'Production Design', 'Unique Concept', 'Emotions']

# Function for predicting sentiment scores
def predict_sentiment(review):
    # Tokenize the input review
    inputs = tokenizer(review, return_tensors='pt', truncation=True, padding=True)

    # Disable gradient calculations for inference
    with torch.no_grad():
        # Get model outputs
        outputs = model(**inputs)

    # Get the prediction logits as a NumPy array of shape (7,)
    predictions = outputs.logits.squeeze().numpy()
    return predictions

# Function to print predictions with labels
def print_predictions(review, predictions):
    print(f"Review: {review}")
    for label, score in zip(label_columns, predictions):
        print(f"{label}: {score:.2f}")

# Example usage
review = "The cinematography was stunning, but the story was weak."
predictions = predict_sentiment(review)
print_predictions(review, predictions)
```
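Because each prediction is a vector of seven continuous scores, several reviews can be scored in one batch and averaged per aspect to get a movie-level summary. The sketch below reuses the `model`, `tokenizer`, and `label_columns` objects from the snippet above; the simple per-aspect mean is an illustrative aggregation, not the official LM6 rating formula used by the `lowerated` library.

```python
import numpy as np

def predict_batch(reviews, batch_size=16):
    """Score a list of reviews; returns an array of shape (n_reviews, 7)."""
    all_scores = []
    for i in range(0, len(reviews), batch_size):
        batch = reviews[i:i + batch_size]
        inputs = tokenizer(batch, return_tensors='pt', truncation=True, padding=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        all_scores.append(logits.cpu().numpy())
    return np.concatenate(all_scores, axis=0)

reviews = [
    "The cinematography was stunning, but the story was weak.",
    "Great characters and direction, though the set design felt cheap.",
]
scores = predict_batch(reviews)

# Average each aspect over all reviews (illustrative aggregation only)
for label, mean_score in zip(label_columns, scores.mean(axis=0)):
    print(f"{label}: {mean_score:.2f}")
```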
## Performance

- **Evaluation Metric:** Mean Squared Error (MSE)
- **MSE:** 0.08594679832458496
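The headline MSE can be recomputed with a standard regression metric once model predictions and reference scores are available for a held-out split. The snippet below is a minimal sketch with placeholder arrays; real `y_true`/`y_pred` values of shape `(n_samples, 7)` would come from the dataset and the prediction code above, and the thresholding rule behind the per-aspect precision/recall/F1 figures below is not documented on this card.

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Placeholder arrays standing in for reference and predicted aspect scores
y_true = np.random.uniform(-1, 1, size=(100, 7))
y_pred = y_true + np.random.normal(0, 0.3, size=(100, 7))

# Overall MSE across all seven aspects (the headline metric above)
print("MSE:", mean_squared_error(y_true, y_pred))

# Per-aspect MSE for a finer-grained view
aspects = ['Cinematography', 'Direction', 'Story', 'Characters',
           'Production Design', 'Unique Concept', 'Emotions']
for i, label in enumerate(aspects):
    print(f"{label}: {mean_squared_error(y_true[:, i], y_pred[:, i]):.4f}")
```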
### Detailed Results

| Aspect            | Precision | Recall | F1-score | Accuracy |
|-------------------|-----------|--------|----------|----------|
| Cinematography    | 0.96      | 0.97   | 0.96     | 0.95     |
| Direction         | 0.93      | 0.97   | 0.94     | 0.95     |
| Story             | 0.85      | 0.88   | 0.85     | 0.85     |
| Characters        | 0.89      | 0.89   | 0.89     | 0.90     |
| Production Design | 0.95      | 0.98   | 0.96     | 0.96     |
| Unique Concept    | 0.83      | 1.00   | 0.89     | 1.00     |
| Emotions          | 0.76      | 0.87   | 0.78     | 0.82     |
### Test Results

- **Eval Loss:** 0.08594681322574615
- **Eval Model Preparation Time:** 0.0011
- **Eval MSE:** 0.08594679832458496
- **Eval Runtime:** 23.1411 s
- **Eval Samples per Second:** 34.268
- **Eval Steps per Second:** 8.599
## Intended Use

This model is intended for rating movies across seven aspects of filmmaking. It can be used to provide a more nuanced understanding of viewer opinions and to improve movie rating systems.
## Limitations

While the model performs well on the evaluation dataset, its performance may vary on different datasets. Continuous monitoring and retraining with diverse data are recommended to maintain and improve its accuracy.
## Future Work

Future improvements could focus on exploring alternative methods for handling neutral values, investigating advanced techniques for addressing missing ratings, enhancing sentiment analysis methods, and expanding the range of aspects analyzed.
## Citation

If you use this model in your research, please cite it as follows:

```bibtex
@misc{lowerated_deberta-v3-lm6,
  author = {Lowerated},
  title  = {deberta-v3-lm6},
  year   = {2024},
  url    = {https://huggingface.co/Lowerated/deberta-v3-lm6},
}
```