---
license: mit
datasets:
- KameronB/SITCC-dataset
language:
- en
tags:
- IT
- classification
- call center
- grammar
---
# Synthetic IT Call Center Data Sentence Quality Predictor

A RoBERTa-base model fine-tuned on a synthetic dataset of good and bad sentences of the kind found in IT call center tickets. The model predicts the quality of sentences in the context of IT support communications, returning a numerical score from 0.0 to 1.0, where 0.0 represents a poor-quality sentence and 1.0 an ideal one.
## Model Background

This model was created to provide an objective measure of the quality of IT call center journaling and to improve overall customer service. By leveraging OpenAI's GPT-4 to simulate both effective and ineffective call center agent responses, and then using GPT-4-turbo to rank these responses, we synthesized a unique dataset that reflects a wide range of possible interactions in an IT support context. The dataset comprises 1,464 items, each scored and annotated with insights into what distinguishes quality journaling from poor journaling.
## Approach

The foundation of this model is the RoBERTa-base transformer, chosen for its robust performance in natural language understanding tasks. I extended RoBERTa with a regression head and fine-tuned its last four layers to specialize in the sentence quality prediction task. This fine-tuning process involved manual adjustments and iterative training sessions to refine the model's accuracy and reduce the Mean Squared Error (MSE) on the validation set.
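For illustration, here is a minimal sketch of that layer-freezing setup. The exact training script is not included in this repository; the layer indices, the single-output regression head, and the optimizer settings below are assumptions based on the description above.

```python
import torch
from transformers import RobertaModel

# Sketch (assumed, not the published training code): freeze every RoBERTa parameter,
# then unfreeze only the last four encoder layers and train them together with a
# small regression head that outputs one quality score.
roberta = RobertaModel.from_pretrained("roberta-base")

for param in roberta.parameters():
    param.requires_grad = False

for layer in roberta.encoder.layer[-4:]:  # roberta-base has 12 encoder layers
    for param in layer.parameters():
        param.requires_grad = True

regressor = torch.nn.Linear(roberta.config.hidden_size, 1)

trainable = [p for p in list(roberta.parameters()) + list(regressor.parameters()) if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=2e-5)  # learning rate is an assumption
```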
## Performance
After several rounds of training and manual tweaks, the model achieved a validation MSE of approximately 0.02. This metric indicates the model's ability to closely predict the quality scores assigned by the simulated call center manager, with a lower MSE reflecting higher accuracy in those predictions.
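For context, the reported figure is the standard mean squared error between the model's predicted scores and the annotated quality scores. A minimal sketch of that calculation follows; the sample values are illustrative and not taken from the validation set.

```python
import torch

# Mean squared error between predicted and labeled quality scores.
# `predictions` and `labels` are assumed to be 1-D tensors of scores in [0.0, 1.0].
predictions = torch.tensor([0.91, 0.12, 0.78])
labels = torch.tensor([0.95, 0.10, 0.70])

mse = torch.nn.functional.mse_loss(predictions, labels)
print(f"validation MSE: {mse.item():.4f}")
```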
## Future Work
The journey to perfecting this model is ongoing. Plans to improve its performance include:
- Expanding the training dataset with more synthesized examples to cover a broader spectrum of potential customer interactions.
- Experimenting with adjusting and fine-tuning additional layers of the RoBERTa model to see if that yields better predictive accuracy.
- Exploring other evaluation metrics beyond MSE to ensure the model's predictions are as useful and actionable as possible in a real-world IT call center environment.
## How to Use This Model
This model is designed for integration into IT call center software systems, where it can automatically score incoming and outgoing ticket responses for quality. To use this model:
- Ensure you have the Hugging Face Transformers library installed in your Python environment.
- Load the model using the following code snippet:
```python
from __future__ import annotations

import torch
from transformers import AutoTokenizer, RobertaConfig, RobertaModel, RobertaTokenizer


# Add a custom regression head to RoBERTa that outputs a single quality score
class SITCC(torch.nn.Module):
    def __init__(self, model, config):
        super(SITCC, self).__init__()
        self.roberta = model
        self.regressor = torch.nn.Linear(config.hidden_size, 1)  # outputs a single value

    def forward(self, input_ids, attention_mask):
        outputs = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
        sequence_output = outputs[1]  # pooled output of the first token
        logits = self.regressor(sequence_output)
        return logits


def init_model() -> tuple[SITCC, RobertaTokenizer]:
    # Load the tokenizer and configuration from the Hugging Face Hub
    model_name = "KameronB/sitcc-roberta"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    config = RobertaConfig.from_pretrained(model_name)

    # Create the model based on the RoBERTa base architecture
    model = SITCC(RobertaModel(config), config)

    # Fetch the state dict to apply the fine-tuned weights
    state_dict = torch.hub.load_state_dict_from_url(
        f"https://huggingface.co/{model_name}/resolve/main/pytorch_model.bin"
    )
    # If running on CPU:
    # state_dict = torch.hub.load_state_dict_from_url(
    #     f"https://huggingface.co/{model_name}/resolve/main/pytorch_model.bin",
    #     map_location=torch.device('cpu')
    # )
    model.load_state_dict(state_dict)
    return model, tokenizer


model, tokenizer = init_model()


def predict(sentences):
    model.eval()
    inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
    input_ids = inputs['input_ids']
    attention_mask = inputs['attention_mask']

    with torch.no_grad():
        outputs = model(input_ids, attention_mask)

    return outputs
```
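A minimal usage example is shown below. The sample sentences are illustrative only; `predict` returns one raw score per input sentence, which should fall roughly in the 0.0 to 1.0 range described above.

```python
# Score a small batch of ticket sentences (illustrative examples, not from the dataset).
sentences = [
    "User reported that Outlook crashes on startup; cleared the cache and confirmed the issue is resolved.",
    "fixed it",
]

scores = predict(sentences)
for sentence, score in zip(sentences, scores.squeeze(-1).tolist()):
    print(f"{score:.2f}  {sentence}")
```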