---
library_name: transformers
tags:
- conversation
- empathetic
- roberta-base
license: apache-2.0
datasets:
- facebook/empathetic_dialogues
language:
- en
metrics:
- accuracy
- f1
- precision
- recall
base_model:
- FacebookAI/roberta-base
pipeline_tag: text-classification
---
# Model Card: RoBERTa Fine-Tuned on Empathetic Dialogues

## Model Description

This is a RoBERTa-based model fine-tuned on the Empathetic Dialogues dataset for conversational emotion classification. It builds on the roberta-base encoder to classify the emotional context of conversational text into one of eight categories.

### Emotion Classes
The model is trained to classify conversations into the following emotional categories:
- Surprised
- Angry
- Sad
- Joyful
- Anxious
- Hopeful
- Confident
- Disappointed

### Model Details
- **Base Model**: roberta-base
- **Task**: Emotion Classification in Conversations
- **Dataset**: Empathetic Dialogues
- **Training Approach**: Full Fine-Tuning
- **Number of Emotion Classes**: 8

### Model Performance

| Metric | Value |
|--------|-------|
| Test Loss | 0.8107 |
| Test Accuracy | 73.01% |
| Test F1 Score | 72.96% |
| Evaluation Runtime | 10.99 s |
| Samples per Second | 61.68 |
| Steps per Second | 1.001 |
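
As a reminder of what the accuracy and F1 rows measure, here is a minimal, dependency-free sketch of both metrics on invented toy labels (the data below is purely illustrative, not drawn from the test set):

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that exactly match the gold label.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, labels):
    # Per-class F1, averaged uniformly over all classes.
    f1_scores = []
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Toy example with three of the eight labels.
y_true = ["sad", "joyful", "sad", "angry"]
y_pred = ["sad", "joyful", "angry", "angry"]
print(accuracy(y_true, y_pred))                            # 0.75
print(macro_f1(y_true, y_pred, ["sad", "joyful", "angry"]))
```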

## Usage

### Hugging Face Transformers Pipeline

```python
from transformers import pipeline

# Initialize the emotion classification pipeline
classifier = pipeline(
    "text-classification", 
    model="Sidharthan/roberta-base-conv-emotion"
)

# Classify emotion in a conversation
text = "I'm feeling really frustrated with work lately."
result = classifier(text)
print(result)
```

### Direct Model Loading

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load the model and tokenizer
model_name = "Sidharthan/roberta-base-conv-emotion"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prepare input
text = "I'm feeling really frustrated with work lately."
inputs = tokenizer(text, return_tensors="pt")

# Predict
with torch.no_grad():
    outputs = model(**inputs)
    probs = torch.softmax(outputs.logits, dim=-1)
    predicted_class = torch.argmax(probs, dim=-1).item()

# Map the predicted class index back to its label name
print(model.config.id2label[predicted_class])
```

## Limitations
- Performance may degrade on out-of-domain conversational text
- Classification is limited to the eight emotion categories listed above
- The model reflects only the emotional nuances captured in the Empathetic Dialogues dataset
- Predictions should be interpreted with care in real-world applications

## Ethical Considerations
- Emotion classification can be subjective
- Potential for bias based on training data
- Should not be used for making critical decisions about individuals

## License
Apache 2.0

## Citations
```bibtex
@misc{roberta-base-conv-emotion,
  title={RoBERTa Fine-Tuned on Empathetic Dialogues},
  author={Sidharthan},
  year={2024},
  publisher={Hugging Face}
}
```

## Contact
For more information, please contact the model's author.