---
language: en
tags:
- text-classification
- sentiment-analysis
- sentiment
- synthetic data
license: apache-2.0
widget:
- text: >-
    I absolutely loved this movie! The acting was superb and the plot was
    engaging.
  example_title: Very Positive Review
- text: The service at this restaurant was terrible. I'll never go back.
  example_title: Very Negative Review
- text: The product works as expected. Nothing special, but it gets the job done.
  example_title: Neutral Review
- text: I'm somewhat disappointed with my purchase. It's not as good as I hoped.
  example_title: Negative Review
- text: This book changed my life! I couldn't put it down and learned so much.
  example_title: Very Positive Review
inference:
  parameters:
    temperature: 1
pipeline_tag: text-classification
base_model: google-bert/bert-base-uncased
---

# BERT-based Sentiment Classification Model

## Model Details

- **Model Name:** tabularisai/robust-sentiment-analysis
- **Base Model:** bert-base-uncased
- **Task:** Text Classification (Sentiment Analysis)
- **Language:** English

## Model Description

This model is a fine-tuned version of `bert-base-uncased` for sentiment analysis.

**Trained only on synthetic data produced by SOTA LLMs: Llama 3.1, Gemma 2, and more.**

### Training Data

The model was fine-tuned on synthetic data, which allows for targeted training on a diverse range of sentiment expressions without the limitations often found in real-world datasets.

### Training Procedure

- The model was fine-tuned for 5 epochs.
- Achieved a train_acc_off_by_one (accuracy allowing for predictions off by one class) of approximately *0.95* on the validation dataset.

## Intended Use

This model is designed for sentiment analysis tasks and is particularly useful for:

- Social media monitoring
- Customer feedback analysis
- Product review sentiment classification
- Brand sentiment tracking

## How to Use

Here's a quick example of how to use the model:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model_name = "tabularisai/robust-sentiment-analysis"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Function to predict sentiment
def predict_sentiment(text):
    inputs = tokenizer(text.lower(), return_tensors="pt", truncation=True, padding=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
    predicted_class = torch.argmax(probabilities, dim=-1).item()
    sentiment_map = {0: "Very Negative", 1: "Negative", 2: "Neutral", 3: "Positive", 4: "Very Positive"}
    return sentiment_map[predicted_class]

# Example usage
texts = [
    "I absolutely loved this movie! The acting was superb and the plot was engaging.",
    "The service at this restaurant was terrible. I'll never go back.",
    "The product works as expected. Nothing special, but it gets the job done.",
    "I'm somewhat disappointed with my purchase. It's not as good as I hoped.",
    "This book changed my life! I couldn't put it down and learned so much."
]

for text in texts:
    sentiment = predict_sentiment(text)
    print(f"Text: {text}")
    print(f"Sentiment: {sentiment}\n")
```
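If you want the full probability distribution rather than only the top label, the same `tokenizer` and `model` objects loaded above can be reused. The sketch below is illustrative and not part of the original card: the helper name `predict_sentiment_with_scores` is hypothetical, and it assumes the five-class index-to-label mapping shown in `sentiment_map` above.

```python
# Illustrative sketch (not from the original card): reuse the tokenizer/model
# loaded above to return a probability for every class instead of only the argmax.
def predict_sentiment_with_scores(text):
    inputs = tokenizer(text.lower(), return_tensors="pt", truncation=True, padding=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    # Softmax over the logits yields one probability per class; squeeze the batch dim
    probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1).squeeze(0)
    # Assumed class ordering, matching the mapping used in predict_sentiment above
    sentiment_map = {0: "Very Negative", 1: "Negative", 2: "Neutral", 3: "Positive", 4: "Very Positive"}
    return {label: round(probabilities[idx].item(), 4) for idx, label in sentiment_map.items()}

# Example usage
print(predict_sentiment_with_scores("The product works as expected. Nothing special, but it gets the job done."))
```

Inspecting the full distribution can be useful when you want to apply a confidence threshold or merge adjacent classes (e.g., treating "Negative" and "Very Negative" as one bucket).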
## Model Performance

The model demonstrates strong performance across various sentiment categories. Here are some example predictions:

```
1. "I absolutely loved this movie! The acting was superb and the plot was engaging."
   Predicted Sentiment: Very Positive

2. "The service at this restaurant was terrible. I'll never go back."
   Predicted Sentiment: Very Negative

3. "The product works as expected. Nothing special, but it gets the job done."
   Predicted Sentiment: Neutral

4. "I'm somewhat disappointed with my purchase. It's not as good as I hoped."
   Predicted Sentiment: Negative

5. "This book changed my life! I couldn't put it down and learned so much."
   Predicted Sentiment: Very Positive
```

## JS Example

```html