
Amazon-Food-Reviews-distilBERT-base for Sentiment Analysis


Model Details

Model Description: This model is a fine-tuned version of distilbert-base-uncased on the Amazon food reviews dataset.

It achieves the following results on the evaluation set:

  • Loss: 0.08

  • Accuracy: 0.87

  • Precision: 0.71

  • Recall: 0.77

  • F1: 0.73

  • Developed by: Jiali Han

  • Model Type: Text Classification

  • Language(s): English

  • License: Apache-2.0

  • Parent Model: distilbert-base-uncased; for more details about DistilBERT, please see its model card.

  • Resources for more information:

Uses

Direct Use

This model can be used for sentiment analysis on Amazon food product reviews.
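Below is a minimal usage sketch with the transformers pipeline API. It assumes the checkpoint id jhan21/distilbert-base-uncased-finetuned-amazon-food-reviews and that the classification head covers the three classes reported later in this card (-1 negative, 0 neutral, 1 positive); the exact label names exposed by the checkpoint may differ.

```python
# Minimal inference sketch; the label mapping below is an assumption,
# not a guarantee of what the checkpoint actually returns.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jhan21/distilbert-base-uncased-finetuned-amazon-food-reviews",
)

reviews = [
    "These cookies arrived stale and tasted like cardboard.",
    "Best coffee beans I have ever ordered, rich and fresh.",
]

for review, prediction in zip(reviews, classifier(reviews)):
    # Each prediction is a dict with a "label" and a confidence "score".
    print(review, "->", prediction["label"], round(prediction["score"], 3))
```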

Misuse and Out-of-scope Use

The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to produce factual or true representations of people or events, so using it to generate such content is out of scope for its abilities.

Risks, Limitations and Biases

Based on a few experiments, we observed that this model can produce biased predictions that target underrepresented populations.

We strongly advise users to thoroughly probe these aspects of their use cases to evaluate this model's risks. We recommend looking at the following bias evaluation datasets as a place to start: WinoBias, WinoGender, StereoSet.

Training

Training Data

The model was fine-tuned on the Amazon food reviews dataset.

Fine-tuning hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-5
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 0
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
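For reference, here is a sketch of how these hyperparameters could be expressed with the Hugging Face Trainer API; dataset loading and tokenization are omitted, and the author's actual training script may differ.

```python
# Hypothetical fine-tuning setup mirroring the hyperparameters listed above.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

training_args = TrainingArguments(
    output_dir="distilbert-amazon-food-reviews",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   tokenizer=tokenizer)
# trainer.train()
```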

Evaluation results

| Class        | Precision | Recall | F1-Score | Support |
|--------------|-----------|--------|----------|---------|
| -1           | 0.77      | 0.76   | 0.76     | 851     |
| 0            | 0.38      | 0.62   | 0.47     | 467     |
| 1            | 0.97      | 0.92   | 0.94     | 4985    |
| accuracy     |           |        | 0.87     | 6303    |
| macro avg    | 0.71      | 0.77   | 0.73     | 6303    |
| weighted avg | 0.90      | 0.87   | 0.88     | 6303    |
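A per-class report in this format can be produced with scikit-learn's classification_report; the sketch below uses placeholder labels and predictions, since the evaluation data themselves are not part of this card.

```python
# Illustrative only: y_true and y_pred stand in for the evaluation labels
# and the model's predictions.
from sklearn.metrics import classification_report

y_true = [-1, -1, 0, 0, 1, 1]
y_pred = [-1, 0, 0, 1, 1, 1]

print(classification_report(y_true, y_pred, labels=[-1, 0, 1], digits=2))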

Training process

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | Precision | Recall | F1     |
|---------------|-------|-------|-----------------|----------|-----------|--------|--------|
| 0.3730        | 1.00  | 10000 | 0.3706          | 0.8782   | 0.7040    | 0.7657 | 0.7295 |
| 0.3675        | 1.50  | 15000 | 0.3794          | 0.8775   | 0.7107    | 0.7631 | 0.7298 |
| 0.3631        | 2.00  | 20000 | 0.3517          | 0.8805   | 0.7145    | 0.7679 | 0.7226 |
| 0.2732        | 2.50  | 25000 | 0.6240          | 0.8509   | 0.6901    | 0.7784 | 0.7136 |
| 0.2913        | 3.00  | 30000 | 0.4759          | 0.8697   | 0.7132    | 0.7653 | 0.7239 |
| 0.2839        | 3.50  | 35000 | 0.4980          | 0.8755   | 0.7166    | 0.7693 | 0.7311 |
| 0.1983        | 4.00  | 40000 | 0.6700          | 0.8713   | 0.7035    | 0.7767 | 0.7290 |
| 0.2184        | 4.50  | 45000 | 0.5912          | 0.8888   | 0.7147    | 0.7498 | 0.7310 |
| 0.0891        | 4.85  | 48500 | 0.8237          | 0.8731   | 0.7065    | 0.7651 | 0.7258 |

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Tokenizers 0.15.0
