
cardiffnlp/twitter-roberta-large-tempo-wic-latest

This is a RoBERTa-large model trained on 154M tweets up to the end of December 2022 and fine-tuned for meaning shift detection (binary classification) on the TempoWIC dataset of SuperTweetEval. The original Twitter-based RoBERTa-large model can be found here.

Labels

"id2label": { "0": "no", "1": "yes" }

Example

from transformers import pipeline

# The two tweets and the target word are joined with the </s> separator token
text_1 = "'In this bullpen, you should be able to ask why and understand why we do the things we do.' @Trisha_Ford 😍 #pitchstock2020 @user"
text_2 = "Castro needs to be the last bullpen guy to pitch."
target = "bullpen"
text_input = f"{text_1}</s>{text_2}</s>{target}"

pipe = pipeline('text-classification', model="cardiffnlp/twitter-roberta-large-tempo-wic-latest")
pipe(text_input)
>> [{'label': 'yes', 'score': 0.9783471822738647}]
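
The same prediction can also be made without the pipeline; a minimal sketch using AutoTokenizer and AutoModelForSequenceClassification, reusing text_input from the example above (the exact output score may differ slightly across environments):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "cardiffnlp/twitter-roberta-large-tempo-wic-latest"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Tokenize the pre-joined input (two tweets + target word) and score it
inputs = tokenizer(text_input, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # "yes" or "no"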

Citation Information

Please cite the reference paper if you use this model.

@inproceedings{antypas2023supertweeteval,
  title={SuperTweetEval: A Challenging, Unified and Heterogeneous Benchmark for Social Media NLP Research},
  author={Dimosthenis Antypas and Asahi Ushio and Francesco Barbieri and Leonardo Neves and Kiamehr Rezaee and Luis Espinosa-Anke and Jiaxin Pei and Jose Camacho-Collados},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2023},
  year={2023}
}
