|
arXiv:2110.12687v1 [cs.CL] 25 Oct 2021

Fine-tuning of Pre-trained Transformers for Hate, Offensive, and Profane Content Detection in English and Marathi
|
Anna Glazkova1,*, Michael Kadantsev2, Maksim Glazkov3
|
1 University of Tyumen, Tyumen, Russia |
|
2 Thales Canada, Transportation Solutions, Toronto, Canada
|
3 Neuro.net, Nizhny Novgorod, Russia |
|
* [email protected] |
|
Abstract |
|
This paper describes neural models developed for the Hate Speech and Offensive Content Identification in English and Indo-Aryan Languages Shared Task 2021. Our team, called neuro-utmn-thales, participated in two tasks on binary and fine-grained classification of English tweets that contain hate, offensive, and profane content (English Subtasks A & B) and in one task on identification of problematic content in Marathi (Marathi Subtask A). For the English subtasks, we investigate the impact of additional corpora for hate speech detection on fine-tuning transformer models. We also apply a one-vs-rest approach based on Twitter-RoBERTa to discriminate between hate, profane, and offensive posts. Our models ranked third in English Subtask A with an F1-score of 81.99% and second in English Subtask B with an F1-score of 65.77%. For the Marathi task, we propose a system based on Language-Agnostic BERT Sentence Embedding (LaBSE). This model achieved second place in Marathi Subtask A with an F1-score of 88.08%.
|
Introduction |
|
Social media has a great impact on our society. Social networks give us almost limitless freedom of speech and contribute to the rapid dissemination of information. However, these positive properties often lead to unhealthy usage of social media: the spread of hate speech affects users' psychological state, promotes violence, and reinforces hateful sentiments [4, 5]. This problem has attracted many scholars to apply modern technologies in order to make social media safer. The Hate Speech and Offensive Content Identification in English and Indo-Aryan Languages Shared Task (HASOC) 2021 [27] aims to compare and analyze existing approaches to identifying hate speech, not only in English but also in other languages. It focuses on detecting hate, offensive, and profane content in tweets and offers six subtasks. We participated in three of them:
|
• English Subtask A: identifying hate, offensive, and profane content from the post in English [24].

• English Subtask B: discrimination between hate, profane, and offensive posts in English.

• Marathi Subtask A: identifying hate, offensive, and profane content from the post in Marathi [14].
|
The source code for our models is freely available at https://github.com/ixomaxip/hasoc.
|
The paper is organized as follows. Section 1 contains a brief review of related work. Next, we describe our experiments on the binary and fine-grained classification of English tweets in Section 2. In Section 3, we present our model for hate, offensive, and profane language identification in Marathi. We conclude the paper in the final section, followed by acknowledgments.
|
1 Related Work
|
We briefly discuss work related to harmful content detection carried out in the past few years. Shared tasks on hate speech and offensive language detection in tweets were organized as part of several workshops and conferences, such as FIRE [22, 23], SemEval [3, 10], GermEval [40, 43], IberLEF [42], and OSACT [29]. The participants proposed a broad range of approaches, from traditional machine learning techniques (for example, Support Vector Machines [15, 38] and Random Forest [34]) to various neural architectures (Convolutional Neural Networks, CNN [35]; Long Short-Term Memory, LSTM [26, 28]; Embeddings from Language Models, ELMo [6]; and Bidirectional Encoder Representations from Transformers, BERT [19, 36]). In most cases, BERT-based systems outperformed other approaches.
|
Most research on hate speech detection continues to be based on English corpora. However, harmful content is distributed in many different languages. Therefore, there have been attempts at creating corpora and developing models for hate speech detection in widely spoken non-English languages, such as Arabic [1, 29], German [22, 23, 40, 43], Italian [7, 37], Spanish [3, 42], Hindi [22, 23], and Tamil and Malayalam [22]. Several studies have focused on collecting hate speech corpora for the Chinese [41], Portuguese [11], Polish [33], Turkish [8], and Russian [16] languages.
|
2 English Subtasks A & B: Identification and Fine-grained Classification of Hate, Offensive, and Profane Tweets

The objective of English Subtasks A & B is to identify whether a tweet in English contains harmful content (Subtask A) and to perform a fine-grained classification of posts into three categories: hate, offensive, or profane (Subtask B).
|
2.1 Data |
|
The dataset provided to the participants of the shared task contains 4355 manually annotated social media posts divided into training (3074) and test (1281) sets. Table 1 presents the data description.
|
Further, we tested several data sampling techniques using different hate speech corpora as additional training data. First, we evaluated the joint use of the multilingual data provided by the organizers of HASOC 2021, including the English, Hindi, and Marathi training sets. Second, as the training sets were highly imbalanced, we applied positive-class random oversampling so that each training batch contained approximately the same number of samples of each class. We also experimented with the seq2seq-based data augmentation technique of [17]: we fine-tuned the BART-base model [18] on the denoising reconstruction task, in which 40% of the tokens are masked and the goal of the decoder is to reconstruct the original sequence. Since BART already contains the <mask> token, we used it to replace the masked tokens. We generated one synthetic example for every tweet in the training set; thus, the augmented data is the same size as the original training set. Finally, we evaluated the impact of additional training data: (a) the English dataset used at HASOC 2020 [22]; (b) HatebaseTwitter, based on the hate speech lexicon from Hatebase (https://hatebase.org/) [10]; (c) HatEval, a dataset presented at SemEval-2019 Task 5 [3]; and (d) the Offensive Language Identification Dataset (OLID), used in SemEval-2019 Task 6 (OffensEval) [45]. All corpora except HatebaseTwitter contain non-intersecting classes. Moreover, all of the listed datasets were collected from Twitter. The characteristics of the additional datasets are shown in Table 2.
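To make the augmentation step concrete, the following is a minimal sketch of how such masked reconstruction could be implemented with HuggingFace transformers. The checkpoint path bart-denoise-hasoc stands for a hypothetical BART-base model already fine-tuned on the denoising task; the whitespace tokenization and sampling settings are illustrative assumptions, not the authors' exact setup.

```python
import random
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
# hypothetical checkpoint: BART-base fine-tuned for denoising reconstruction
model = BartForConditionalGeneration.from_pretrained("bart-denoise-hasoc")

def augment(tweet: str, mask_ratio: float = 0.4) -> str:
    """Mask ~40% of the tokens and let BART reconstruct the sequence."""
    words = tweet.split()
    n_mask = max(1, int(len(words) * mask_ratio))
    for i in random.sample(range(len(words)), n_mask):
        words[i] = tokenizer.mask_token  # BART's <mask> token
    inputs = tokenizer(" ".join(words), return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=64,
                                do_sample=True, top_p=0.9)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

train_tweets = ["an example tweet to augment"]  # placeholder data
# one synthetic example per original tweet
augmented = [augment(t) for t in train_tweets]
```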
|
Table 1. Data description.

Subtask A
  Label  Description                                                                Number of examples (training set)
  NOT    Non Hate-Offensive: the post does not contain hate, profane,
         or offensive content                                                       1102
  HOF    Hate and Offensive: the post contains hate, offensive, or profane content  1972

Subtask B
  NONE   The post does not contain hate, profane, or offensive content              1102
  HATE   Hate speech: the post contains hate speech content                          542
  OFFN   Offensive: the post contains offensive content                              482
  PRFN   Profane: the post contains profane words                                    948

Table 2. Hate-related dataset characteristics.

Dataset          Size   Labels
HASOC 2020       4522   HOF: 50.4%; NOT: 49.6%
HatebaseTwitter  24783  hate speech: 20.15%; offensive language: 85.98%; neither: 23.77%
HatEval          13000  1 (hate speech): 42.08%; 0 (not hate speech): 57.92%
OLID             14100  OFF: 32.91%; NOT: 67.09%
|
We preprocessed the datasets for Subtasks A & B in a similar manner. Inspired by [2], we used the following text preprocessing technique (implemented with the tweet-preprocessor package, https://pypi.org/project/tweet-preprocessor): (a) we removed all URLs; (b) we replaced all user mentions with the $MENTION$ placeholder.
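As an illustration, this preprocessing can be reproduced with the tweet-preprocessor package in two passes, assuming clean() is used for URLs and tokenize() for mentions; this is a sketch of one plausible implementation, not necessarily the authors' exact code.

```python
import preprocessor as p  # pip install tweet-preprocessor

def preprocess(tweet: str) -> str:
    p.set_options(p.OPT.URL)      # (a) remove all URLs
    tweet = p.clean(tweet)
    p.set_options(p.OPT.MENTION)  # (b) replace mentions with $MENTION$
    return p.tokenize(tweet)

print(preprocess("@user this is rude https://t.co/abc"))
# -> "$MENTION$ this is rude"
```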
|
2.2 Models |
|
We conduct our experiments with neural models based on BERT [12], as they have achieved state-of-the-art results in harmful content detection. For example, BERT-based models proved efficient at previous HASOC shared tasks [22, 23] and at SemEval [32, 45].

We used the following models (a checkpoint-loading sketch follows the list):
|
• BERT-base [12], a model pre-trained on BookCorpus [46] and English Wikipedia using a masked language modeling objective.

• BERTweet-base [30], a pre-trained language model for English tweets. The corpus used to pre-train BERTweet consists of 850M English tweets, including 845M tweets streamed from 01/2012 to 08/2019 and 5M tweets related to the COVID-19 pandemic.

• Twitter-RoBERTa-base for Hate Speech Detection [2], a RoBERTa-base [20] model trained on 58M tweets and fine-tuned for hate speech detection on the TweetEval benchmark.

• LaBSE [13], a language-agnostic BERT sentence embedding model supporting 109 languages.
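For reference, such checkpoints can be loaded from the HuggingFace hub as sketched below. The hub identifiers are our best guesses at the checkpoints described above (e.g., cardiffnlp/twitter-roberta-base-hate for Twitter-RoBERTa for Hate Speech Detection) and should be treated as assumptions.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# assumed hub identifiers for the four pre-trained models
CHECKPOINTS = {
    "BERT": "bert-base-uncased",
    "BERTweet": "vinai/bertweet-base",
    "Twitter-RoBERTa": "cardiffnlp/twitter-roberta-base-hate",
    "LaBSE": "setu4993/LaBSE",
}

name = CHECKPOINTS["Twitter-RoBERTa"]
tokenizer = AutoTokenizer.from_pretrained(name)
# num_labels=2 for the binary Subtask A (NOT vs. HOF)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
```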
|
Table 3. Model validation results for English Subtask A, %.

Model                                  F1     P      R
BERT                                   79.24  79.74  78.82
BERTweet                               78.65  79.36  78.08
Twitter-RoBERTa                        81.10  80.01  82.65
LaBSE (English dataset)                78.83  79.50  78.29
LaBSE (English + Hindi)                79.32  79.95  78.80
LaBSE (English + Hindi + Marathi)      79.27  81.74  77.79
Adding extra data to Twitter-RoBERTa:
+ random oversampling                  79.97  79.90  80.04
+ BART data augmentation               79.24  78.44  80.31
+ HASOC 2020                           78.79  77.66  80.47
+ HatebaseTwitter                      81.19  79.99  82.93
+ HatEval                              74.31  75.53  73.64
+ OLID                                 79.29  78.17  80.93
|
2.3 Experiments |
|
For both Subtask A and Subtask B, we adopted pre-trained models from HuggingFace [44] and fine-tuned them using PyTorch [31]. We fine-tuned each pre-trained language model for 3 epochs with a learning rate of 2e-5 using the AdamW optimizer [21]. We set the batch size to 32 and the maximum sequence length to 64. To validate our models during the development phase, we divided the labelled data into training and validation splits in the ratio 80:20.
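A minimal PyTorch training loop consistent with these hyper-parameters is sketched below; texts and labels are placeholders for the preprocessed training split, and the checkpoint name is the assumption noted earlier.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
name = "cardiffnlp/twitter-roberta-base-hate"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=2).to(device)

texts = ["$MENTION$ what a great day"]  # placeholder training data
labels = [0]

enc = tokenizer(texts, truncation=True, padding="max_length",
                max_length=64, return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"],
                                  torch.tensor(labels)),
                    batch_size=32, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # AdamW [21]
model.train()
for epoch in range(3):  # 3 epochs
    for input_ids, attention_mask, y in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids.to(device),
                    attention_mask=attention_mask.to(device),
                    labels=y.to(device))
        out.loss.backward()  # cross-entropy loss from the classification head
        optimizer.step()
```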
|
Table 3 shows the performance of our models on the validation subset for Subtask A in terms of macro-averaged F1-score (F1), precision (P), and recall (R). As can be seen from the table, BERT, BERTweet, and LaBSE show very close results during validation. Nevertheless, LaBSE jointly fine-tuned on the three mixed multilingual datasets shows the highest precision score. The use of Twitter-RoBERTa increases the F1-score by 1.5-2.5% compared to the other classification models. Based on this, we chose Twitter-RoBERTa for further experiments. We found that neither the random oversampling technique nor the use of the augmented and additional data improves performance, except for the joint use of the original dataset and the HatebaseTwitter dataset, which gives an F1-score growth of 0.09% and a precision growth of 0.28% compared to the basic Twitter-RoBERTa.
|
For our official submission for Subtask A, we designed a soft-voting ensemble of five Twitter-RoBERTa models jointly fine-tuned on the original training set and the HatebaseTwitter dataset (see Table 4).

Table 4. Performance of our final models for English Subtasks A & B, official results, %.

Subtask  F1 (our model)  F1 (winning solution)  P (our model)  P (winning solution)  Avg F1  Submitted teams  Rank
A        81.99           83.05                  84.68          84.14                 75.70   56               3
B        65.77           66.57                  66.32          66.88                 57.07   37               2

For Subtask B, we used the following one-vs-rest approach to discriminate between hate, profane, and offensive posts.
|
• First, we applied our Subtask A binary models to identify non hate-offensive examples.

• Second, we fine-tuned three Twitter-RoBERTa binary models to delimit examples of the hate-vs-profane, hate-vs-offensive, and offensive-vs-profane classes. The training dataset was extended with the HatebaseTwitter dataset.
|
• Finally, we compared the results of the binary models. If the result was defined uniquely, we used it as the predicted label. Otherwise, we chose the label in proportion to the number of examples in the training set. This is illustrated by the following examples (see also the sketch after this list).

  – Let the models show the following results:
    ∗ hate-vs-profane → hate;
    ∗ hate-vs-offensive → hate;
    ∗ offensive-vs-profane → offensive.
  Thus, the classes have the following votes: hate: 2, offensive: 1, profane: 0. Then we predict the HATE label.

  – If the results are:
    ∗ hate-vs-profane → profane;
    ∗ hate-vs-offensive → hate;
    ∗ offensive-vs-profane → offensive,
  the class votes are: hate: 1, offensive: 1, profane: 1. Then we choose the PRFN label, as PRFN is the most common label in the training set.
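The sketch below implements this voting rule. The deterministic tie-break by training-set frequency follows the second worked example above; the function name and label strings are ours.

```python
from collections import Counter

# training-set label counts from Table 1, used for tie-breaking
CLASS_FREQ = {"PRFN": 948, "HATE": 542, "OFFN": 482}

def fine_grained_label(hate_vs_prfn: str, hate_vs_offn: str,
                       offn_vs_prfn: str) -> str:
    """Combine the three pairwise predictions into a final label."""
    votes = Counter([hate_vs_prfn, hate_vs_offn, offn_vs_prfn]).most_common()
    if len(votes) == 1 or votes[0][1] > votes[1][1]:
        return votes[0][0]  # a unique majority exists
    # tie: pick the tied label that is most frequent in the training set
    tied = [label for label, n in votes if n == votes[0][1]]
    return max(tied, key=CLASS_FREQ.get)

print(fine_grained_label("HATE", "HATE", "OFFN"))  # -> HATE (votes 2-1-0)
print(fine_grained_label("PRFN", "HATE", "OFFN"))  # -> PRFN (1-1-1 tie)
```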
|
3 Marathi Subtask A: Identifying Hate, Offensive, and Profane Content from the Post

3.1 Data

For the Marathi task, we used the original training and test sets provided by the organizers of HASOC 2021. The whole dataset contains 2499 tweets: 1874 training and 625 test examples. The training set consists of 1205 texts of the NOT class and 669 texts of the HOF class. We used the raw data as input for our models. Following [25, 39], we experimented with the combination of the English, Hindi, and Marathi training sets provided by the organizers.
|
Table 5. Model validation results for Marathi Subtask A, %.

Model                                    F1     P      R
XLM-RoBERTa (Marathi dataset)            83.87  85.39  83.39
XLM-RoBERTa (Marathi + Hindi)            83.23  83.82  82.76
XLM-RoBERTa (Marathi + English)          84.83  85.03  84.64
XLM-RoBERTa (Marathi + Hindi + English)  84.35  84.82  83.95
LaBSE (Marathi)                          87.76  87.82  87.68
LaBSE (Marathi + Hindi)                  87.62  88.21  87.13
LaBSE (Marathi + English)                87.62  88.21  87.13
LaBSE (Marathi + Hindi + English)        86.34  86.63  86.08
|
Table 6. Performance of our final model for Marathi Subtask A, official results, %.

F1 (our model)  F1 (winning solution)  P (our model)  P (winning solution)  Avg F1  Submitted teams  Rank
88.08           91.44                  87.58          91.82                 82.55   25               2
|
3.2 Models |
|
We evaluated the following models:

• XLM-RoBERTa-base [9], a transformer-based multilingual masked language model supporting 100 languages.

• LaBSE [13], a language-agnostic BERT sentence embedding model pre-trained on texts in 109 languages.
|
3.3 Experiments |
|
We experimented with the above-mentioned language models fine-tuned on monolingual and multilingual data. For model evaluation during the development phase, we used a random train and validation split in the ratio 80:20 with a fixed seed. We set the same model parameters as for the English tasks.

Table 5 illustrates the results. It can be seen that LaBSE outperforms XLM-RoBERTa in all cases. Moreover, the F1-score of LaBSE fine-tuned only on the Marathi dataset is higher than that of LaBSE fine-tuned on multilingual data. XLM-RoBERTa, on the other hand, mostly benefits from multilingual fine-tuning.

For our final submission, we used a soft-voting ensemble of five LaBSE models fine-tuned on the official Marathi dataset provided by the organizers of the competition. The results of this model on the test set are shown in Table 6.
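As an illustration, soft voting over the five fine-tuned models (the same scheme is used for the English submissions) can be sketched as follows: class probabilities are averaged across the ensemble and the argmax is taken. Here, models stands for the list of five fine-tuned classifiers.

```python
import torch

@torch.no_grad()
def soft_vote(models, input_ids, attention_mask):
    """Average class probabilities over an ensemble and take the argmax."""
    probs = [torch.softmax(m(input_ids=input_ids,
                             attention_mask=attention_mask).logits, dim=-1)
             for m in models]
    return torch.stack(probs).mean(dim=0).argmax(dim=-1)
```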
|
Conclusion

In this paper, we have presented the details of our participation in the HASOC Shared Task 2021. We have explored the application of domain-specific monolingual and multilingual BERT-based models to the tasks of binary and fine-grained classification of Twitter posts. We also proposed a one-vs-rest approach to discriminate between hate, offensive, and profane tweets. Further research can focus on analyzing the effectiveness of various text preprocessing techniques for harmful content detection and on exploring how different transfer learning approaches affect classification performance.
|
Acknowledgments

The work on multi-label text classification was carried out by Anna Glazkova and supported by the grant of the President of the Russian Federation no. MK-637.2020.9.
|
References |
|
1. N. Albadi, M. Kurdi, and S. Mishra. Are they our brothers? Analysis and detection of religious hate speech in the Arabic twittersphere. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 69–76. IEEE, 2018.

2. F. Barbieri, J. Camacho-Collados, L. E. Anke, and L. Neves. TweetEval: Unified benchmark and comparative evaluation for tweet classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 1644–1650, 2020.

3. V. Basile, C. Bosco, E. Fersini, N. Debora, V. Patti, F. M. R. Pardo, P. Rosso, M. Sanguinetti, et al. SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In 13th International Workshop on Semantic Evaluation, pages 54–63, 2019.

4. L. E. Beausoleil. Free, hateful, and posted: Rethinking first amendment protection of hate speech in a social media world. BCL Rev., 60:2101, 2019.

5. M. Bilewicz and W. Soral. Hate speech epidemic. The dynamic effects of derogatory language on intergroup relations and political radicalization. Political Psychology, 41:3–33, 2020.

6. M. Bojkovsky and M. Pikuliak. STUFIIT at SemEval-2019 task 5: Multilingual hate speech detection on Twitter with MUSE and ELMo embeddings. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 464–468, 2019.

7. C. Bosco, D. Felice, F. Poletto, M. Sanguinetti, and T. Maurizio. Overview of the EVALITA 2018 hate speech detection task. In EVALITA 2018-Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian, volume 2263, pages 1–9. CEUR, 2018.

8. Ç. Çöltekin. A corpus of Turkish offensive language on social media. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 6174–6184, 2020.

9. A. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, É. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–8451, 2020.

10. T. Davidson, D. Warmsley, M. Macy, and I. Weber. Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11, 2017.

11. R. P. de Pelle and V. P. Moreira. Offensive comments in the Brazilian web: A dataset and baseline results. In Anais do VI Brazilian Workshop on Social Network Analysis and Mining. SBC, 2017.
|
12. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

13. F. Feng, Y. Yang, D. Cer, N. Arivazhagan, and W. Wang. Language-agnostic BERT sentence embedding. arXiv preprint arXiv:2007.01852, 2020.

14. S. Gaikwad, T. Ranasinghe, M. Zampieri, and C. M. Homan. Cross-lingual offensive language identification for low resource languages: The case of Marathi. In Proceedings of RANLP, 2021.

15. S. Hassan, Y. Samih, H. Mubarak, A. Abdelali, A. Rashed, and S. A. Chowdhury. ALT submission for OSACT shared task on offensive language detection. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 61–65, 2020.

16. L. Komalova, A. Glazkova, D. Morozov, R. Epifanov, L. Motovskikh, and E. Mayorova. Automated classification of potentially insulting speech acts on social network sites. In International Conference on Digital Transformation and Global Society. Springer, 2021.

17. V. Kumar, A. Choudhary, and E. Cho. Data augmentation using pre-trained transformer models. In Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems, pages 18–26, 2020.

18. M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, 2020.

19. P. Liu, W. Li, and L. Zou. NULI at SemEval-2019 task 6: Transfer learning for offensive language detection using bidirectional transformers. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 87–91, 2019.

20. Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.

21. I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2018.

22. T. Mandl, S. Modha, A. Kumar M, and B. R. Chakravarthi. Overview of the HASOC track at FIRE 2020: Hate speech and offensive language identification in Tamil, Malayalam, Hindi, English and German. In Forum for Information Retrieval Evaluation, pages 29–32, 2020.

23. T. Mandl, S. Modha, P. Majumder, D. Patel, M. Dave, C. Mandlia, and A. Patel. Overview of the HASOC track at FIRE 2019: Hate speech and offensive content identification in Indo-European languages. In Proceedings of the 11th Forum for Information Retrieval Evaluation, pages 14–17, 2019.

24. T. Mandl, S. Modha, G. K. Shahi, H. Madhu, S. Satapara, P. Majumder, J. Schäfer, T. Ranasinghe, M. Zampieri, D. Nandini, and A. K. Jaiswal. Overview of the HASOC subtrack at FIRE 2021: Hate speech and offensive content identification in English and Indo-Aryan languages. In Working Notes of FIRE 2021 - Forum for Information Retrieval Evaluation. CEUR, December 2021.
|
25. S. Mishra, S. Prasad, and S. Mishra. Multilingual joint fine-tuning of transformer models for identifying trolling, aggression and cyberbullying at TRAC 2020. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 120–125, 2020.

26. A. K. Mishra, S. Saumya, and A. Kumar. IIIT_DWD@HASOC 2020: Identifying offensive content in Indo-European languages. 2020.

27. S. Modha, T. Mandl, G. K. Shahi, H. Madhu, S. Satapara, T. Ranasinghe, and M. Zampieri. Overview of the HASOC subtrack at FIRE 2021: Hate speech and offensive content identification in English and Indo-Aryan languages and conversational hate speech. In FIRE 2021: Forum for Information Retrieval Evaluation, Virtual Event, 13th-17th December 2021. ACM, December 2021.

28. A. Montejo-Ráez, S. M. Jiménez-Zafra, M. A. García-Cumbreras, and M. C. Díaz-Galiano. SINAI-DL at SemEval-2019 task 5: Recurrent networks and data augmentation by paraphrasing. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 480–483, 2019.

29. H. Mubarak, K. Darwish, W. Magdy, T. Elsayed, and H. Al-Khalifa. Overview of OSACT4 Arabic offensive language detection shared task. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 48–52, 2020.

30. D. Q. Nguyen, T. Vu, and A. T. Nguyen. BERTweet: A pre-trained language model for English tweets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 9–14, 2020.

31. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32:8026–8037, 2019.

32. J. Pavlopoulos, J. Sorensen, L. Laugier, and I. Androutsopoulos. SemEval-2021 task 5: Toxic spans detection. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 59–69, 2021.

33. M. Ptaszynski, A. Pieciukiewicz, and P. Dybała. Results of the PolEval 2019 shared task 6: First dataset and open shared task for automatic cyberbullying detection in Polish Twitter. 2019.

34. B. Ray and A. Garain. JU at HASOC 2020: Deep learning with RoBERTa and random forest for hate speech and offensive content identification in Indo-European languages. In FIRE (Working Notes), pages 168–174, 2020.

35. A. Ribeiro and N. Silva. INF-HatEval at SemEval-2019 task 5: Convolutional neural networks for hate speech detection against women and immigrants on Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 420–425, 2019.

36. J. Risch, A. Stoll, M. Ziegele, and R. Krestel. hpiDEDIS at GermEval 2019: Offensive language identification using a German BERT model. In KONVENS, 2019.
|
37. M. Sanguinetti, F. Poletto, C. Bosco, V. Patti, and M. Stranisci. An Italian Twitter corpus of hate speech against immigrants. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018.

38. F. Schmid, J. Thielemann, A. Mantwill, J. Xi, D. Labudde, and M. Spranger. FoSIL: Offensive language classification of German tweets combining SVMs and deep learning techniques. In KONVENS, 2019.

39. P. Singh and P. Bhattacharyya. CFILT IIT Bombay at HASOC 2020: Joint multitask learning of multilingual hate speech and offensive content detection system. In FIRE (Working Notes), pages 325–330, 2020.

40. J. M. Struß, M. Siegel, J. Ruppenhofer, M. Wiegand, M. Klenner, et al. Overview of GermEval task 2, 2019 shared task on the identification of offensive language. 2019.

41. X. Tang, X. Shen, Y. Wang, and Y. Yang. Categorizing offensive language in social networks: A Chinese corpus, systems and an explanation tool. In China National Conference on Chinese Computational Linguistics, pages 300–315. Springer, 2020.

42. M. Taulé, A. Ariza, M. Nofre, E. Amigó, and P. Rosso. Overview of DETOXIS at IberLEF 2021: Detection of toxicity in comments in Spanish. Procesamiento del Lenguaje Natural, 67:209–221, 2021.

43. M. Wiegand, M. Siegel, and J. Ruppenhofer. Overview of the GermEval 2018 shared task on the identification of offensive language. In 14th Conference on Natural Language Processing KONVENS 2018, 2018.

44. T. Wolf, J. Chaumond, L. Debut, V. Sanh, C. Delangue, A. Moi, P. Cistac, M. Funtowicz, J. Davison, S. Shleifer, et al. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, 2020.

45. M. Zampieri, S. Malmasi, P. Nakov, S. Rosenthal, N. Farra, and R. Kumar. SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval). In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 75–86, 2019.

46. Y. Zhu, R. Kiros, R. Zemel, R. Salakhutdinov, R. Urtasun, A. Torralba, and S. Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision, pages 19–27, 2015.