---
license: mit
datasets:
- dair-ai/emotion
language:
- en
library_name: transformers
widget:
- text: I am so happy with the results!
- text: I am so pissed with the results!
tags:
- deberta
- deberta-v3-small
- emotions-classifier
---
# Fast Emotion-X: Fine-Tuned DeBERTa V3 Small for Emotion Detection
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) for emotion detection using the [dair-ai/emotion](https://huggingface.co/dair-ai/emotion) dataset.
## Overview
Fast Emotion-X is an emotion detection model fine-tuned from Microsoft's DeBERTa V3 Small. It classifies English text into one of six emotional categories. Fine-tuned on the [dair-ai/emotion](https://huggingface.co/dair-ai/emotion) dataset, it reaches 94.6% evaluation accuracy (see [Training](#training) below).
## Model Details
- **Model Name:** `AnkitAI/deberta-v3-small-base-emotions-classifier`
- **Base Model:** `microsoft/deberta-v3-small`
- **Dataset:** [dair-ai/emotion](https://huggingface.co/dair-ai/emotion)
- **Fine-tuning:** The model is fine-tuned for emotion detection with a classification head for six emotional categories: anger, disgust, fear, joy, sadness, and surprise.
## Emotion Labels
- Anger
- Disgust
- Fear
- Joy
- Sadness
- Surprise
## Usage
You can use this model directly with the provided Python package or the Hugging Face `transformers` library.
### Installation
Install the package using pip:
```bash
pip install emotionclassifier
```
### Basic Usage
Here's an example of how to use the `emotionclassifier` package to classify a single text:
```python
from emotionclassifier import EmotionClassifier
# Initialize the classifier with the default model
classifier = EmotionClassifier()
# Classify a single text
text = "I am very happy today!"
result = classifier.predict(text)
print("Emotion:", result['label'])
print("Confidence:", result['confidence'])
```
### Batch Processing
You can classify multiple texts at once using the `predict_batch` method:
```python
texts = ["I am very happy today!", "I am so sad."]
results = classifier.predict_batch(texts)
print("Batch processing results:", results)
```
### Visualization
To visualize the emotion distribution of a text:
```python
from emotionclassifier import EmotionClassifier, plot_emotion_distribution

classifier = EmotionClassifier()
result = classifier.predict("I am very happy today!")
plot_emotion_distribution(result['probabilities'], classifier.labels.values())
```
### Command-Line Interface (CLI) Usage
You can also use the package from the command line:
```bash
emotionclassifier --model deberta-v3-small --text "I am very happy today!"
```
### DataFrame Integration
Integrate with pandas DataFrames to classify text columns:
```python
import pandas as pd
from emotionclassifier import DataFrameEmotionClassifier
df = pd.DataFrame({
'text': ["I am very happy today!", "I am so sad."]
})
classifier = DataFrameEmotionClassifier()
df = classifier.classify_dataframe(df, 'text')
print(df)
```
### Emotion Trends Over Time
Analyze and plot emotion trends over time:
```python
from emotionclassifier import EmotionTrends
texts = ["I am very happy today!", "I am feeling okay.", "I am very sad."]
trends = EmotionTrends()
emotions = trends.analyze_trends(texts)
trends.plot_trends(emotions)
```
### Fine-tuning
Fine-tune a pre-trained model on your own dataset:
```python
from emotionclassifier import EmotionClassifier
from emotionclassifier.fine_tune import fine_tune_model

# Load the pre-trained classifier to fine-tune
classifier = EmotionClassifier()

# Define your training and validation datasets
train_dataset = ...
val_dataset = ...

# Fine-tune the model and save the result to output_dir
fine_tune_model(classifier.model, classifier.tokenizer, train_dataset, val_dataset, output_dir='fine_tuned_model')
```
### Using the `transformers` Library
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "AnkitAI/deberta-v3-small-base-emotions-classifier"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def predict_emotion(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)
    with torch.no_grad():
        outputs = model(**inputs)
    # Map the highest-scoring logit to its label name
    predicted_id = outputs.logits.argmax(dim=1).item()
    return model.config.id2label[predicted_id]

text = "I'm so happy with the results!"
print("Detected Emotion:", predict_emotion(text))
```
## Training
The model was fine-tuned with the following hyperparameters:
- **Learning Rate:** 2e-5
- **Batch Size:** 4
- **Weight Decay:** 0.01
- **Evaluation Strategy:** Epoch
### Training Details
- **Evaluation Loss:** 0.0858
- **Evaluation Runtime:** 110070.6349 seconds
- **Evaluation Samples/Second:** 78.495
- **Evaluation Steps/Second:** 2.453
- **Training Loss:** 0.1049
- **Evaluation Accuracy:** 94.6%
- **Evaluation Precision:** 94.8%
- **Evaluation Recall:** 94.5%
- **Evaluation F1 Score:** 94.7%
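For reference, aggregate metrics of this kind can be computed with `scikit-learn` (a generic sketch with made-up label ids, not the original evaluation script):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical gold labels and predictions, for illustration only
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 2, 1, 1]

accuracy = accuracy_score(y_true, y_pred)
# average="weighted" weights each class's score by its support
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted")
print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```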
## Model Card Data
| Parameter | Value |
|-------------------------------|----------------------------|
| Base Model                    | microsoft/deberta-v3-small |
| Training Dataset | dair-ai/emotion |
| Number of Training Epochs | 20 |
| Learning Rate | 2e-5 |
| Per Device Train Batch Size | 4 |
| Evaluation Strategy | Epoch |
| Best Model Accuracy | 94.6% |
## License
This model is licensed under the [MIT License](LICENSE). |