Vadim Borisov committed on
Commit 6cc1ff8 • 1 Parent(s): 554c621

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- base_model: google-bert/bert-base-uncased
+ base_model: distilbert/distilbert-base-uncased
  language: en
  license: apache-2.0
  pipeline_tag: text-classification
@@ -28,11 +28,11 @@ inference:
  parameters:
  temperature: 1
  ---
- # 🚀 BERT-based Sentiment Classification Model: Unleashing the Power of Synthetic Data
+ # 🚀 (distil)BERT-based Sentiment Classification Model: Unleashing the Power of Synthetic Data

  ## Model Details
  - **Model Name:** tabularisai/robust-sentiment-analysis
- - **Base Model:** bert-base-uncased
+ - **Base Model:** distilbert/distilbert-base-uncased
  - **Task:** Text Classification (Sentiment Analysis)
  - **Language:** English
  - **Number of Classes:** 5 (*Very Negative, Negative, Neutral, Positive, Very Positive*)
@@ -47,7 +47,7 @@ inference:

  ## Model Description

- This model is a fine-tuned version of `bert-base-uncased` for sentiment analysis. **Trained only on syntethic data produced by SOTA LLMs: Llama3.1, Gemma2, and more**
+ This model is a fine-tuned version of `distilbert/distilbert-base-uncased` for sentiment analysis. **Trained only on synthetic data produced by SOTA LLMs: Llama 3.1, Gemma 2, and more.**

  ### Training Data

@@ -215,7 +215,7 @@ The model demonstrates strong performance across various sentiment categories. H

  ## Training Procedure

- The model was fine-tuned on synthetic data using the `bert-base-uncased` architecture. The training process involved:
+ The model was fine-tuned on synthetic data using the `distilbert/distilbert-base-uncased` architecture. The training process involved:

  - Dataset: Synthetic data designed to cover a wide range of sentiment expressions
  - Training framework: PyTorch Lightning
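
The updated card describes a five-class sentiment classifier published as `tabularisai/robust-sentiment-analysis` with `pipeline_tag: text-classification`. A minimal usage sketch, assuming the checkpoint is available on the Hugging Face Hub under that name and that its config carries the five labels listed in the card:

```python
# Minimal usage sketch for the model described in the updated card.
# Assumes the checkpoint "tabularisai/robust-sentiment-analysis" is published
# on the Hugging Face Hub and that its config maps class ids to the five
# labels listed above (Very Negative ... Very Positive).
from transformers import pipeline

classifier = pipeline(
    "text-classification",  # matches the card's pipeline_tag
    model="tabularisai/robust-sentiment-analysis",
)

print(classifier("The battery life is fantastic, but the screen scratches easily."))
# Expected output shape (label text and score depend on the model itself):
# [{'label': 'Positive', 'score': 0.87}]
```

The pipeline handles tokenizer and model loading; the printed label names come from the model's `id2label` mapping rather than from this snippet.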
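
The Training Procedure hunk states that the model was fine-tuned on synthetic, LLM-generated data using PyTorch Lightning. The sketch below only illustrates what such a setup typically looks like; the dataset wrapper, label order, hyperparameters, and placeholder samples are assumptions for illustration, not the repository's actual training script.

```python
# Hedged sketch of the kind of fine-tuning loop the card describes: a
# DistilBERT classifier trained with PyTorch Lightning on a synthetic
# sentiment corpus. Everything below the imports is illustrative.
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["Very Negative", "Negative", "Neutral", "Positive", "Very Positive"]


class SyntheticSentimentDataset(Dataset):
    """Wraps (text, label_id) pairs from an LLM-generated corpus."""

    def __init__(self, samples, tokenizer, max_length=128):
        self.samples = samples          # list of (text, label_id) tuples
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        text, label = self.samples[idx]
        enc = self.tokenizer(
            text, truncation=True, max_length=self.max_length,
            padding="max_length", return_tensors="pt",
        )
        return {
            "input_ids": enc["input_ids"].squeeze(0),
            "attention_mask": enc["attention_mask"].squeeze(0),
            "labels": torch.tensor(label),
        }


class SentimentClassifier(pl.LightningModule):
    def __init__(self, base_model="distilbert/distilbert-base-uncased", lr=2e-5):
        super().__init__()
        self.model = AutoModelForSequenceClassification.from_pretrained(
            base_model, num_labels=len(LABELS)
        )
        self.lr = lr

    def training_step(self, batch, batch_idx):
        out = self.model(**batch)       # HF models return the loss when labels are given
        self.log("train_loss", out.loss)
        return out.loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
    # Placeholder synthetic samples; the real corpus is LLM-generated.
    samples = [("I absolutely love this!", 4), ("Worst purchase ever.", 0)]
    loader = DataLoader(SyntheticSentimentDataset(samples, tokenizer), batch_size=2)
    trainer = pl.Trainer(max_epochs=1, logger=False, enable_checkpointing=False)
    trainer.fit(SentimentClassifier(), loader)
```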