Datasets:

Field                        Type             Min     Max
bibtex_url                   stringlengths    41      52
proceedings                  stringlengths    38      49
bibtext                      stringlengths    788     3.49k
abstract                     stringlengths    0       2.12k
authors                      sequencelengths  1       58
title                        stringlengths    16      181
id                           stringlengths    7       18
type                         stringclasses    2 values
arxiv_id                     stringlengths    0       10
GitHub                       sequencelengths  1       1
paper_page                   stringclasses    170 values
n_linked_authors             int64            -1      9
upvotes                      int64            -1      56
num_comments                 int64            -1      9
n_authors                    int64            -1      57
paper_page_exists_pre_conf   int64            0       1
Models                       sequencelengths  0       99
Datasets                     sequencelengths  0       5
Spaces                       sequencelengths  0       57
https://aclanthology.org/2024.semeval-1.30.bib
https://aclanthology.org/2024.semeval-1.30/
@inproceedings{prabhu-etal-2024-scalar, title = "{SC}a{LAR} {NITK} at {S}em{E}val-2024 Task 5: Towards Unsupervised Question Answering system with Multi-level Summarization for Legal Text", author = "Prabhu, Manvith and Srinivasa, Haricharana and Kumar, Anand", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.30", doi = "10.18653/v1/2024.semeval-1.30", pages = "193--199", abstract = "This paper summarizes Team SCaLAR{'}s work on SemEval-2024 Task 5: Legal Argument Reasoning in Civil Procedure. To address this Binary Classification task, which was daunting due to the complexity of the Legal Texts involved, we propose a simple yet novel similarity and distance-based unsupervised approach to generate labels. Further, we explore the Multi-level fusion of Legal-Bert embeddings using ensemble features, including CNN, GRU, and LSTM. To address the lengthy nature of Legal explanation in the dataset, we introduce T5-based segment-wise summarization, which successfully retained crucial information, enhancing the model{'}s performance. Our unsupervised system witnessed a 20-point increase in macro F1-score on the development set and a 10-point increase on the test set, which is promising given its uncomplicated architecture.", }
This paper summarizes Team SCaLAR's work on SemEval-2024 Task 5: Legal Argument Reasoning in Civil Procedure. To address this Binary Classification task, which was daunting due to the complexity of the Legal Texts involved, we propose a simple yet novel similarity and distance-based unsupervised approach to generate labels. Further, we explore the Multi-level fusion of Legal-Bert embeddings using ensemble features, including CNN, GRU, and LSTM. To address the lengthy nature of Legal explanation in the dataset, we introduce T5-based segment-wise summarization, which successfully retained crucial information, enhancing the model's performance. Our unsupervised system witnessed a 20-point increase in macro F1-score on the development set and a 10-point increase on the test set, which is promising given its uncomplicated architecture.
[ "Prabhu, Manvith", "Srinivasa, Haricharana", "Kumar, An", "" ]
SCaLAR NITK at SemEval-2024 Task 5: Towards Unsupervised Question Answering system with Multi-level Summarization for Legal Text
semeval-1.30
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
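As a companion to the abstract above, here is a minimal sketch of similarity-based unsupervised label generation; the checkpoint, pairing scheme, and 0.5 threshold are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch: label a (question, candidate) pair by embedding
# similarity. Model name and threshold are placeholders.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("nlpaueb/legal-bert-base-uncased")

def unsupervised_label(question: str, candidate: str, threshold: float = 0.5) -> int:
    # Embed both texts; a sufficiently similar candidate gets label 1.
    q_emb, c_emb = model.encode([question, candidate])
    sim = cosine_similarity([q_emb], [c_emb])[0, 0]
    return int(sim >= threshold)

print(unsupervised_label("The court may dismiss the claim when ...",
                         "Dismissal is proper because ..."))
```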
https://aclanthology.org/2024.semeval-1.31.bib
https://aclanthology.org/2024.semeval-1.31/
@inproceedings{kelious-okirim-2024-abdelhak, title = "Abdelhak at {S}em{E}val-2024 Task 9: Decoding Brainteasers, The Efficacy of Dedicated Models Versus {C}hat{GPT}", author = "Kelious, Abdelhak and Okirim, Mounir", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.31", doi = "10.18653/v1/2024.semeval-1.31", pages = "200--205", abstract = "This study introduces a dedicated model aimed at solving the BRAINTEASER task 9 , a novel challenge designed to assess models{'} lateral thinking capabilities through sentence and word puzzles. Our model demonstrates remarkable efficacy, securing Rank 1 in sentence puzzle solving during the test phase with an overall score of 0.98. Additionally, we explore the comparative performance of ChatGPT, specifically analyzing how variations in temperature settings affect its ability to engage in lateral thinking and problem-solving. Our findings indicate a notable performance disparity between the dedicated model and ChatGPT, underscoring the potential of specialized approaches in enhancing creative reasoning in AI.", }
This study introduces a dedicated model aimed at solving the BRAINTEASER task 9, a novel challenge designed to assess models' lateral thinking capabilities through sentence and word puzzles. Our model demonstrates remarkable efficacy, securing Rank 1 in sentence puzzle solving during the test phase with an overall score of 0.98. Additionally, we explore the comparative performance of ChatGPT, specifically analyzing how variations in temperature settings affect its ability to engage in lateral thinking and problem-solving. Our findings indicate a notable performance disparity between the dedicated model and ChatGPT, underscoring the potential of specialized approaches in enhancing creative reasoning in AI.
[ "Kelious, Abdelhak", "Okirim, Mounir" ]
Abdelhak at SemEval-2024 Task 9: Decoding Brainteasers, The Efficacy of Dedicated Models Versus ChatGPT
semeval-1.31
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
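The abstract's temperature analysis can be sketched as a simple sweep over the OpenAI chat API; the model name, puzzle, and temperature grid below are assumptions for illustration, not the authors' configuration.

```python
# Sketch of a temperature sweep; values are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
puzzle = "What has keys but can't open locks?"  # hypothetical brainteaser

for temperature in (0.0, 0.5, 1.0):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=temperature,
        messages=[{"role": "user", "content": puzzle}],
    )
    print(temperature, resp.choices[0].message.content)
```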
https://aclanthology.org/2024.semeval-1.32.bib
https://aclanthology.org/2024.semeval-1.32/
@inproceedings{saravanan-wilson-2024-ounlp, title = "{OUNLP} at {S}em{E}val-2024 Task 9: Retrieval-Augmented Generation for Solving Brain Teasers with {LLM}s", author = "Saravanan, Vineet and Wilson, Steven", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.32", doi = "10.18653/v1/2024.semeval-1.32", pages = "206--212", abstract = "The advancement of natural language processing has given rise to a variety of large language models (LLMs) with capabilities extending into the realm of complex problem-solving, including brainteasers that challenge not only linguistic fluency but also logical reasoning. This paper documents our submission to the SemEval 2024 Brainteaser task, in which we investigate the performance of state-of-the-art LLMs, such as GPT-3.5, GPT-4, and the Gemini model, on a diverse set of brainteasers using prompt engineering as a tool to enhance the models{'} problem-solving abilities. We experimented with a series of structured prompts ranging from basic to those integrating task descriptions and explanations. Through a comparative analysis, we sought to determine which combinations of model and prompt yielded the highest accuracy in solving these puzzles. Our findings provide a snapshot of the current landscape of AI problem-solving and highlight the nuanced nature of LLM performance, influenced by both the complexity of the tasks and the sophistication of the prompts employed.", }
The advancement of natural language processing has given rise to a variety of large language models (LLMs) with capabilities extending into the realm of complex problem-solving, including brainteasers that challenge not only linguistic fluency but also logical reasoning. This paper documents our submission to the SemEval 2024 Brainteaser task, in which we investigate the performance of state-of-the-art LLMs, such as GPT-3.5, GPT-4, and the Gemini model, on a diverse set of brainteasers using prompt engineering as a tool to enhance the models' problem-solving abilities. We experimented with a series of structured prompts ranging from basic to those integrating task descriptions and explanations. Through a comparative analysis, we sought to determine which combinations of model and prompt yielded the highest accuracy in solving these puzzles. Our findings provide a snapshot of the current landscape of AI problem-solving and highlight the nuanced nature of LLM performance, influenced by both the complexity of the tasks and the sophistication of the prompts employed.
[ "Saravanan, Vineet", "Wilson, Steven" ]
OUNLP at SemEval-2024 Task 9: Retrieval-Augmented Generation for Solving Brain Teasers with LLMs
semeval-1.32
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
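A minimal sketch of the graded prompt structures the abstract describes follows: a basic prompt versus one that adds a task description and asks for an explanation. The wording is an illustrative assumption.

```python
# Sketch of "basic" vs. "rich" structured prompts for multiple choice.
def build_prompt(question: str, choices: list[str], level: str = "basic") -> str:
    options = "\n".join(f"{i}. {c}" for i, c in enumerate(choices))
    if level == "basic":
        return f"{question}\n{options}\nAnswer with the option number."
    # Richer prompt: task description plus an explanation request.
    return (
        "You are solving a lateral-thinking brainteaser; the obvious "
        "reading is usually wrong.\n"
        f"{question}\n{options}\n"
        "Explain your reasoning, then give the option number."
    )

print(build_prompt("What gets wetter as it dries?", ["a towel", "the sun"], "rich"))
```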
https://aclanthology.org/2024.semeval-1.33.bib
https://aclanthology.org/2024.semeval-1.33/
@inproceedings{benlahbib-etal-2024-nlp, title = "{NLP}-{LISAC} at {S}em{E}val-2024 Task 1: Transformer-based approaches for Determining Semantic Textual Relatedness", author = "Benlahbib, Abdessamad and Fahfouh, Anass and Alami, Hamza and Boumhidi, Achraf", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.33", doi = "10.18653/v1/2024.semeval-1.33", pages = "213--217", abstract = "This paper presents our system and findings for SemEval 2024 Task 1 Track A Supervised Semantic Textual Relatedness. The main objective of this task was to detect the degree of semantic relatedness between pairs of sentences. Our submitted models (ranked 6/24 in Algerian Arabic, 7/25 in Spanish, 12/23 in Moroccan Arabic, and 13/36 in English) consist of various transformer-based models including MARBERT-V2, mDeBERTa-V3-Base, DarijaBERT, and DeBERTa-V3-Large, fine-tuned using different loss functions including Huber Loss, Mean Absolute Error, and Mean Squared Error.", }
This paper presents our system and findings for SemEval 2024 Task 1 Track A Supervised Semantic Textual Relatedness. The main objective of this task was to detect the degree of semantic relatedness between pairs of sentences. Our submitted models (ranked 6/24 in Algerian Arabic, 7/25 in Spanish, 12/23 in Moroccan Arabic, and 13/36 in English) consist of various transformer-based models including MARBERT-V2, mDeBERTa-V3-Base, DarijaBERT, and DeBERTa-V3-Large, fine-tuned using different loss functions including Huber Loss, Mean Absolute Error, and Mean Squared Error.
[ "Benlahbib, Abdessamad", "Fahfouh, Anass", "Alami, Hamza", "Boumhidi, Achraf" ]
NLP-LISAC at SemEval-2024 Task 1: Transformer-based approaches for Determining Semantic Textual Relatedness
semeval-1.33
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
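For concreteness, here is a minimal sketch of the three loss functions the abstract names (Huber, MAE, MSE) applied to relatedness scores; the values are toy data.

```python
# Sketch: compare the three regression losses on toy predictions.
import torch
import torch.nn as nn

LOSSES = {
    "huber": nn.HuberLoss(),
    "mae": nn.L1Loss(),   # mean absolute error
    "mse": nn.MSELoss(),  # mean squared error
}

preds = torch.tensor([0.7, 0.2, 0.9])    # predicted relatedness
targets = torch.tensor([0.8, 0.1, 1.0])  # gold relatedness

for name, loss_fn in LOSSES.items():
    print(name, loss_fn(preds, targets).item())
```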
https://aclanthology.org/2024.semeval-1.34.bib
https://aclanthology.org/2024.semeval-1.34/
@inproceedings{qian-etal-2024-zxq, title = "{ZXQ} at {S}em{E}val-2024 Task 7: Fine-tuning {GPT}-3.5-Turbo for Numerical Reasoning", author = "Qian, Zhen and Xu, Xiaofei and Zhang, Xiuzhen", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.34", doi = "10.18653/v1/2024.semeval-1.34", pages = "218--223", abstract = "In this paper, we present our system for the SemEval-2024 Task 7, i.e., NumEval subtask 3: Numericial Reasoning. Given a news article and its headline, the numerical reasoning task involves creating a system to compute the intentionally excluded number within the news headline. We propose a fine-tuned GPT-3.5-turbo model, specifically engineered to deduce missing numerals directly from the content of news article. The model is trained with a human-engineered prompt that itegrates the news content and the masked headline, tailoring its accuracy for the designated task. It achieves an accuracy of 0.94 on the test data and secures the second position in the official leaderboard. An examination on the system{'}s inference results reveals its commendable accuracy in identifying correct numerals when they can be directly {``}copied{''} from the articles. However, the error rates increase when it comes to some ambiguous operations such as rounding.", }
In this paper, we present our system for SemEval-2024 Task 7, i.e., NumEval subtask 3: Numerical Reasoning. Given a news article and its headline, the numerical reasoning task involves creating a system to compute the intentionally excluded number within the news headline. We propose a fine-tuned GPT-3.5-turbo model, specifically engineered to deduce missing numerals directly from the content of the news article. The model is trained with a human-engineered prompt that integrates the news content and the masked headline, tailoring its accuracy for the designated task. It achieves an accuracy of 0.94 on the test data and secures the second position on the official leaderboard. An examination of the system's inference results reveals its commendable accuracy in identifying correct numerals when they can be directly "copied" from the articles. However, the error rates increase when it comes to some ambiguous operations such as rounding.
[ "Qian, Zhen", "Xu, Xiaofei", "Zhang, Xiuzhen" ]
ZXQ at SemEval-2024 Task 7: Fine-tuning GPT-3.5-Turbo for Numerical Reasoning
semeval-1.34
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
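A minimal sketch of one fine-tuning example in the OpenAI chat JSONL format, combining the article with the masked headline as the abstract describes; the prompt wording is an illustrative assumption.

```python
# Sketch: build one JSONL fine-tuning record for the masked-headline task.
import json

def make_example(article: str, masked_headline: str, number: str) -> str:
    prompt = (
        f"Article:\n{article}\n\n"
        f"Headline with a missing number: {masked_headline}\n"
        "Reply with the missing number only."
    )
    return json.dumps({"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": number},
    ]})

print(make_example("Profits rose sharply this quarter ...",
                   "Profits up ____ percent", "12"))
```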
https://aclanthology.org/2024.semeval-1.35.bib
https://aclanthology.org/2024.semeval-1.35/
@inproceedings{ansari-etal-2024-bamo, title = "{BAMO} at {S}em{E}val-2024 Task 9: {BRAINTEASER}: A Novel Task Defying Common Sense", author = "Ansari, Baktash and Rostamkhani, Mohammadmostafa and Eetemadi, Sauleh", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.35", doi = "10.18653/v1/2024.semeval-1.35", pages = "224--232", abstract = "This paper outlines our approach to SemEval 2024 Task 9, BRAINTEASER: A Novel Task Defying Common Sense. The task aims to evaluate the ability of language models to think creatively. The dataset comprises multi-choice questions that challenge models to think {`}outside of the box{'}. We fine-tune 2 models, BERT and RoBERTa Large. Next, we employ a Chain of Thought (CoT) zero-shot prompting approach with 6 large language models, such as GPT-3.5, Mixtral, and Llama2. Finally, we utilize ReConcile, a technique that employs a {`}round table conference{'} approach with multiple agents for zero-shot learning, to generate consensus answers among 3 selected language models. Our best method achieves an overall accuracy of 85 percent on the sentence puzzles subtask.", }
This paper outlines our approach to SemEval 2024 Task 9, BRAINTEASER: A Novel Task Defying Common Sense. The task aims to evaluate the ability of language models to think creatively. The dataset comprises multi-choice questions that challenge models to think 'outside of the box'. We fine-tune 2 models, BERT and RoBERTa Large. Next, we employ a Chain of Thought (CoT) zero-shot prompting approach with 6 large language models, such as GPT-3.5, Mixtral, and Llama2. Finally, we utilize ReConcile, a technique that employs a 'round table conference' approach with multiple agents for zero-shot learning, to generate consensus answers among 3 selected language models. Our best method achieves an overall accuracy of 85 percent on the sentence puzzles subtask.
[ "Ansari, Baktash", "Rostamkhani, Mohammadmostafa", "Eetemadi, Sauleh" ]
BAMO at SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense
semeval-1.35
Poster
2406.04947
[ "https://github.com/baktash81/SemEval_2024_BRAINTEASER" ]
-1
-1
-1
-1
0
[]
[]
[]
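The consensus step of a ReConcile-style round table reduces to majority voting over the participating models' answers; a minimal sketch follows, with the vote list standing in for real model outputs.

```python
# Sketch: majority-vote consensus across model answers.
from collections import Counter

def consensus(answers: list[str]) -> str:
    # Pick the most common answer among the participating models.
    return Counter(answers).most_common(1)[0][0]

round_table = ["B", "B", "C"]  # e.g. votes from GPT-3.5, Mixtral, Llama2
print(consensus(round_table))  # -> "B"
```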
https://aclanthology.org/2024.semeval-1.36.bib
https://aclanthology.org/2024.semeval-1.36/
@inproceedings{yang-etal-2024-yangqi, title = "yangqi at {S}em{E}val-2024 Task 9: Simulate Human Thinking by Large Language Model for Lateral Thinking Challenges", author = "Yang, Qi and Zeng, Jingjie and Yang, Liang and Lin, Hongfei", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.36", doi = "10.18653/v1/2024.semeval-1.36", pages = "233--238", abstract = "This paper describes our system used in the SemEval-2024 Task 9 on two sub-tasks, BRAINTEASER: A Novel Task Defying Common Sense. In this work, we developed a system SHTL, which means simulate human thinking capabilities by Large Language Model (LLM). Our approach bifurcates into two main components: Common Sense Reasoning and Rationalize Defying Common Sense. To mitigate the hallucinations of LLM, we implemented a strategy that combines Retrieval-augmented Generation (RAG) with the the Self-Adaptive In-Context Learning (SAICL), thereby sufficiently leveraging the powerful language ability of LLM. The effectiveness of our method has been validated by its performance on the test set, with an average performance on two subtasks that is 30.1 higher than ChatGPT setting zero-shot and only 0.8 lower than that of humans.", }
This paper describes our system used in SemEval-2024 Task 9 on two sub-tasks, BRAINTEASER: A Novel Task Defying Common Sense. In this work, we developed SHTL, a system that simulates human thinking capabilities with a Large Language Model (LLM). Our approach bifurcates into two main components: Common Sense Reasoning and Rationalize Defying Common Sense. To mitigate the hallucinations of the LLM, we implemented a strategy that combines Retrieval-augmented Generation (RAG) with Self-Adaptive In-Context Learning (SAICL), thereby sufficiently leveraging the powerful language ability of the LLM. The effectiveness of our method has been validated by its performance on the test set, with an average performance on the two subtasks that is 30.1 higher than ChatGPT in a zero-shot setting and only 0.8 lower than that of humans.
[ "Yang, Qi", "Zeng, Jingjie", "Yang, Liang", "Lin, Hongfei" ]
yangqi at SemEval-2024 Task 9: Simulate Human Thinking by Large Language Model for Lateral Thinking Challenges
semeval-1.36
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
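The retrieval step behind the RAG component above can be sketched as nearest-neighbour demonstration selection; this simplifies the paper's SAICL, and the encoder is an assumed choice.

```python
# Sketch: retrieve the most similar stored demonstration, then
# prepend it to the prompt.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")
memory = ["Q: <lateral puzzle 1> A: <answer 1>",
          "Q: <lateral puzzle 2> A: <answer 2>"]
mem_emb = encoder.encode(memory, convert_to_tensor=True)

def build_rag_prompt(question: str) -> str:
    q_emb = encoder.encode(question, convert_to_tensor=True)
    best = int(util.cos_sim(q_emb, mem_emb).argmax())
    return f"{memory[best]}\n\nQ: {question}\nA:"

print(build_rag_prompt("What can travel around the world while staying in a corner?"))
```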
https://aclanthology.org/2024.semeval-1.37.bib
https://aclanthology.org/2024.semeval-1.37/
@inproceedings{siino-2024-badrock, title = "{B}ad{R}ock at {S}em{E}val-2024 Task 8: {D}istil{BERT} to Detect Multigenerator, Multidomain and Multilingual Black-Box Machine-Generated Text", author = "Siino, Marco", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.37", doi = "10.18653/v1/2024.semeval-1.37", pages = "239--245", abstract = "The rise of Large Language Models (LLMs) has brought about a notable shift, rendering them increasingly ubiquitous and readily accessible. This accessibility has precipitated a surge in machine-generated content across diverse platforms encompassing news outlets, social media platforms, question-answering forums, educational platforms, and even academic domains. Recent iterations of LLMs, exemplified by entities like ChatGPT and GPT-4, exhibit a remarkable ability to produce coherent and contextually relevant responses across a broad spectrum of user inquiries. The fluidity and sophistication of these generated texts position LLMs as compelling candidates for substituting human labor in numerous applications. Nevertheless, this proliferation of machine-generated content has raised apprehensions regarding potential misuse, including the dissemination of misinformation and disruption of educational ecosystems. Given that humans marginally outperform random chance in discerning between machine-generated and human-authored text, there arises a pressing imperative to develop automated systems capable of accurately distinguishing machine-generated text. This pursuit is driven by the overarching objective of curbing the potential misuse of machine-generated content. Our manuscript delineates the approach we adopted for participation in this competition. Specifically, we detail the use of a DistilBERT model for classifying each sample in the test set provided. Our submission is able to reach an accuracy equal to 0.754 in place of the worst result obtained at the competition that is equal to 0.231.", }
The rise of Large Language Models (LLMs) has brought about a notable shift, rendering them increasingly ubiquitous and readily accessible. This accessibility has precipitated a surge in machine-generated content across diverse platforms encompassing news outlets, social media platforms, question-answering forums, educational platforms, and even academic domains. Recent iterations of LLMs, exemplified by entities like ChatGPT and GPT-4, exhibit a remarkable ability to produce coherent and contextually relevant responses across a broad spectrum of user inquiries. The fluidity and sophistication of these generated texts position LLMs as compelling candidates for substituting human labor in numerous applications. Nevertheless, this proliferation of machine-generated content has raised apprehensions regarding potential misuse, including the dissemination of misinformation and disruption of educational ecosystems. Given that humans marginally outperform random chance in discerning between machine-generated and human-authored text, there arises a pressing imperative to develop automated systems capable of accurately distinguishing machine-generated text. This pursuit is driven by the overarching objective of curbing the potential misuse of machine-generated content. Our manuscript delineates the approach we adopted for participation in this competition. Specifically, we detail the use of a DistilBERT model for classifying each sample in the test set provided. Our submission reaches an accuracy of 0.754, compared with the worst result obtained at the competition, 0.231.
[ "Siino, Marco" ]
BadRock at SemEval-2024 Task 8: DistilBERT to Detect Multigenerator, Multidomain and Multilingual Black-Box Machine-Generated Text
semeval-1.37
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
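The DistilBERT classification described above can be sketched with the Hugging Face pipeline; the base checkpoint shown is a placeholder that would need fine-tuning on the task data before real use.

```python
# Sketch: text classification with a DistilBERT checkpoint.
from transformers import pipeline

detector = pipeline("text-classification", model="distilbert-base-uncased")
print(detector("This passage may or may not be machine-generated."))
```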
https://aclanthology.org/2024.semeval-1.38.bib
https://aclanthology.org/2024.semeval-1.38/
@inproceedings{ebrahim-joy-2024-warwicknlp, title = "{W}arwick{NLP} at {S}em{E}val-2024 Task 1: Low-Rank Cross-Encoders for Efficient Semantic Textual Relatedness", author = "Ebrahim, Fahad and Joy, Mike", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.38", doi = "10.18653/v1/2024.semeval-1.38", pages = "246--252", abstract = "This work participates in SemEval 2024 Task 1 on Semantic Textural Relatedness (STR) in Track A (supervised regression) in two languages, English and Moroccan Arabic. The task consists of providing a score of how two sentences relate to each other. The system developed in this work leveraged a cross-encoder with a merged fine-tuned Low-Rank Adapter (LoRA). The system was ranked eighth in English with a Spearman coefficient of 0.842, while Moroccan Arabic was ranked seventh with a score of 0.816. Moreover, various experiments were conducted to see the impact of different models and adapters on the performance and accuracy of the system.", }
This work participates in SemEval 2024 Task 1 on Semantic Textual Relatedness (STR) in Track A (supervised regression) in two languages, English and Moroccan Arabic. The task consists of providing a score of how two sentences relate to each other. The system developed in this work leveraged a cross-encoder with a merged fine-tuned Low-Rank Adapter (LoRA). The system was ranked eighth in English with a Spearman coefficient of 0.842, while Moroccan Arabic was ranked seventh with a score of 0.816. Moreover, various experiments were conducted to see the impact of different models and adapters on the performance and accuracy of the system.
[ "Ebrahim, Fahad", "Joy, Mike" ]
WarwickNLP at SemEval-2024 Task 1: Low-Rank Cross-Encoders for Efficient Semantic Textual Relatedness
semeval-1.38
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
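A minimal sketch of attaching a low-rank (LoRA) adapter to a cross-encoder regression head with the PEFT library follows; the rank, alpha, target modules, and base checkpoint are illustrative assumptions, not the paper's settings.

```python
# Sketch: LoRA adapter on a single-output cross-encoder.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1  # one relatedness score per pair
)
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"])
model = get_peft_model(base, lora)
model.print_trainable_parameters()
```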
https://aclanthology.org/2024.semeval-1.39.bib
https://aclanthology.org/2024.semeval-1.39/
@inproceedings{markchom-etal-2024-nu, title = "{NU}-{RU} at {S}em{E}val-2024 Task 6: Hallucination and Related Observable Overgeneration Mistake Detection Using Hypothesis-Target Similarity and {S}elf{C}heck{GPT}", author = "Markchom, Thanet and Jung, Subin and Liang, Huizhi", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.39", doi = "10.18653/v1/2024.semeval-1.39", pages = "253--260", abstract = "One of the key challenges in Natural Language Generation (NLG) is {``}hallucination,{''} in which the generated output appears fluent and grammatically sound but may contain incorrect information. To address this challenge, {``}SemEval-2024 Task 6 - SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes{''} is introduced. This task focuses on detecting overgeneration hallucinations in texts generated from Large Language Models for various NLG tasks. To tackle this task, this paper proposes two methods: (1) hypothesis-target similarity, which measures text similarity between a generated text (hypothesis) and an intended reference text (target), and (2) a SelfCheckGPT-based method to assess hallucinations via predefined prompts designed for different NLG tasks. Experiments were conducted on the dataset provided in this task. The results show that both of the proposed methods can effectively detect hallucinations in LLM-generated texts with a possibility for improvement.", }
One of the key challenges in Natural Language Generation (NLG) is "hallucination," in which the generated output appears fluent and grammatically sound but may contain incorrect information. To address this challenge, "SemEval-2024 Task 6 - SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes" is introduced. This task focuses on detecting overgeneration hallucinations in texts generated from Large Language Models for various NLG tasks. To tackle this task, this paper proposes two methods: (1) hypothesis-target similarity, which measures text similarity between a generated text (hypothesis) and an intended reference text (target), and (2) a SelfCheckGPT-based method to assess hallucinations via predefined prompts designed for different NLG tasks. Experiments were conducted on the dataset provided in this task. The results show that both of the proposed methods can effectively detect hallucinations in LLM-generated texts with a possibility for improvement.
[ "Markchom, Thanet", "Jung, Subin", "Liang, Huizhi" ]
NU-RU at SemEval-2024 Task 6: Hallucination and Related Observable Overgeneration Mistake Detection Using Hypothesis-Target Similarity and SelfCheckGPT
semeval-1.39
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
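The hypothesis-target similarity method reduces to thresholding an embedding similarity; a minimal sketch follows, where the encoder and the 0.6 threshold are assumptions.

```python
# Sketch: flag a generation as a hallucination when it is
# insufficiently similar to the reference text.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def is_hallucination(hypothesis: str, target: str, threshold: float = 0.6) -> bool:
    h, t = encoder.encode([hypothesis, target], convert_to_tensor=True)
    return util.cos_sim(h, t).item() < threshold

print(is_hallucination("The capital of France is Lyon.",
                       "The capital of France is Paris."))
```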
https://aclanthology.org/2024.semeval-1.40.bib
https://aclanthology.org/2024.semeval-1.40/
@inproceedings{zhao-etal-2024-ncl, title = "{NCL}{\_}{NLP} at {S}em{E}val-2024 Task 7: {C}o{T}-{N}um{HG}: A {C}o{T}-Based {SFT} Training Strategy with Large Language Models for Number-Focused Headline Generation", author = "Zhao, Junzhe and Wang, Yingxi and Liang, Huizhi and Rusnachenko, Nicolay", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.40", doi = "10.18653/v1/2024.semeval-1.40", pages = "261--269", abstract = "Headline Generation is an essential task in Natural Language Processing (NLP), where models often exhibit limited ability to accurately interpret numerals, leading to inaccuracies in generated headlines. This paper introduces CoT-NumHG, a training strategy leveraging the Chain of Thought (CoT) paradigm for Supervised Fine-Tuning (SFT) of large language models. This approach is aimed at enhancing numeral perception, interpretability, accuracy, and the generation of structured outputs. Presented in SemEval-2024 Task 7 (task 3): Numeral-Aware Headline Generation (English), this challenge is divided into two specific subtasks. The first subtask focuses on numerical reasoning, requiring models to precisely calculate and fill in the missing numbers in news headlines, while the second subtask targets the generation of complete headlines. Utilizing the same training strategy across both subtasks, this study primarily explores the first subtask as a demonstration of our training strategy. Through this competition, our CoT-NumHG-Mistral-7B model attained an accuracy rate of 94{\%}, underscoring the effectiveness of our proposed strategy.", }
Headline Generation is an essential task in Natural Language Processing (NLP), where models often exhibit limited ability to accurately interpret numerals, leading to inaccuracies in generated headlines. This paper introduces CoT-NumHG, a training strategy leveraging the Chain of Thought (CoT) paradigm for Supervised Fine-Tuning (SFT) of large language models. This approach is aimed at enhancing numeral perception, interpretability, accuracy, and the generation of structured outputs. Presented in SemEval-2024 Task 7 (task 3): Numeral-Aware Headline Generation (English), this challenge is divided into two specific subtasks. The first subtask focuses on numerical reasoning, requiring models to precisely calculate and fill in the missing numbers in news headlines, while the second subtask targets the generation of complete headlines. Utilizing the same training strategy across both subtasks, this study primarily explores the first subtask as a demonstration of our training strategy. Through this competition, our CoT-NumHG-Mistral-7B model attained an accuracy rate of 94%, underscoring the effectiveness of our proposed strategy.
[ "Zhao, Junzhe", "Wang, Yingxi", "Liang, Huizhi", "Rusnachenko, Nicolay" ]
NCL_NLP at SemEval-2024 Task 7: CoT-NumHG: A CoT-Based SFT Training Strategy with Large Language Models for Number-Focused Headline Generation
semeval-1.40
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
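A CoT-style SFT instance for the number-filling subtask might look as follows, with the target interleaving reasoning steps and the final number; the field names and wording are assumptions in the spirit of CoT-NumHG, not the paper's exact format.

```python
# Sketch: one CoT supervised fine-tuning example for number filling.
example = {
    "instruction": "Fill in the missing number in the headline.",
    "input": ("Article: Sales grew from 40 to 50 units ...\n"
              "Headline: Sales up ____ percent"),
    "output": ("Step 1: growth = (50 - 40) / 40 = 0.25. "
               "Step 2: as a percentage, 25. Answer: 25"),
}
print(example["output"])
```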
https://aclanthology.org/2024.semeval-1.41.bib
https://aclanthology.org/2024.semeval-1.41/
@inproceedings{byun-2024-byun, title = "Byun at {S}em{E}val-2024 Task 6: Text Classification on Hallucinating Text with Simple Data Augmentation", author = "Byun, Cheolyeon", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.41", doi = "10.18653/v1/2024.semeval-1.41", pages = "270--273", abstract = "This paper aims to classify sentences to see if it is hallucinating, meaning the generative language model has output text that has very little to do with the user{'}s input, or not. This classification task is part of the Semeval 2024{'}s task on Hallucinations and Related Observable Over-generation Mistakes, AKA SHROOM, which aims to improve awkward-sounding texts generated by AI. This paper will first go over the first attempt at creating predictions, then show the actual scores achieved after submitting the first attempt results to Semeval, then finally go over potential improvements to be made.", }
This paper aims to classify sentences as hallucinated or not, where hallucination means the generative language model has output text that has very little to do with the user's input. This classification task is part of SemEval 2024's task on Hallucinations and Related Observable Over-generation Mistakes, AKA SHROOM, which aims to improve awkward-sounding texts generated by AI. This paper will first go over the first attempt at creating predictions, then show the actual scores achieved after submitting the first attempt results to SemEval, then finally go over potential improvements to be made.
[ "Byun, Cheolyeon" ]
Byun at SemEval-2024 Task 6: Text Classification on Hallucinating Text with Simple Data Augmentation
semeval-1.41
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
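A minimal sketch of a simple augmentation of the kind the title suggests is WordNet synonym replacement; WordNet is an assumed choice here, not necessarily the author's method, and requires `nltk.download("wordnet")` once.

```python
# Sketch: replace n random words with their first WordNet synonym.
import random
from nltk.corpus import wordnet

def synonym_replace(sentence: str, n: int = 1) -> str:
    words = sentence.split()
    for _ in range(n):
        i = random.randrange(len(words))
        synsets = wordnet.synsets(words[i])
        if synsets:
            words[i] = synsets[0].lemmas()[0].name().replace("_", " ")
    return " ".join(words)

print(synonym_replace("The model generates fluent text"))
```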
https://aclanthology.org/2024.semeval-1.42.bib
https://aclanthology.org/2024.semeval-1.42/
@inproceedings{maksimov-etal-2024-deeppavlov, title = "{D}eep{P}avlov at {S}em{E}val-2024 Task 6: Detection of Hallucinations and Overgeneration Mistakes with an Ensemble of Transformer-based Models", author = "Maksimov, Ivan and Konovalov, Vasily and Glinskii, Andrei", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.42", doi = "10.18653/v1/2024.semeval-1.42", pages = "274--278", abstract = "The inclination of large language models (LLMs) to produce mistaken assertions, known as hallucinations, can be problematic. These hallucinations could potentially be harmful since sporadic factual inaccuracies within the generated text might be concealed by the overall coherence of the content, making it immensely challenging for users to identify them. The goal of the SHROOM shared-task is to detect grammatically sound outputs that contain incorrect or unsupported semantic information. Although there are a lot of existing hallucination detectors in generated AI content, we found out that pretrained Natural Language Inference (NLI) models yet exhibit success in detecting hallucinations. Moreover their ensemble outperforms more complicated models.", }
The inclination of large language models (LLMs) to produce mistaken assertions, known as hallucinations, can be problematic. These hallucinations could potentially be harmful since sporadic factual inaccuracies within the generated text might be concealed by the overall coherence of the content, making it immensely challenging for users to identify them. The goal of the SHROOM shared task is to detect grammatically sound outputs that contain incorrect or unsupported semantic information. Although there are a lot of existing hallucination detectors for generated AI content, we found that pretrained Natural Language Inference (NLI) models nonetheless succeed in detecting hallucinations. Moreover, their ensemble outperforms more complicated models.
[ "Maksimov, Ivan", "Konovalov, Vasily", "Glinskii, Andrei" ]
DeepPavlov at SemEval-2024 Task 6: Detection of Hallucinations and Overgeneration Mistakes with an Ensemble of Transformer-based Models
semeval-1.42
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
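An NLI-model ensemble of the kind described above can be sketched by averaging each model's entailment probability and thresholding; the two public checkpoints are examples of pretrained NLI models, not necessarily the team's.

```python
# Sketch: average entailment probability across two NLI models and
# flag the output as a hallucination when it falls below a threshold.
from transformers import pipeline

nli_models = [
    pipeline("text-classification", model=name, top_k=None)
    for name in ("roberta-large-mnli", "microsoft/deberta-large-mnli")
]

def entailment_score(premise: str, hypothesis: str) -> float:
    scores = []
    for nli in nli_models:
        preds = nli({"text": premise, "text_pair": hypothesis})
        ent = next(p["score"] for p in preds if p["label"].startswith("ENTAIL"))
        scores.append(ent)
    return sum(scores) / len(scores)

def is_hallucination(premise: str, hypothesis: str, threshold: float = 0.5) -> bool:
    return entailment_score(premise, hypothesis) < threshold

print(is_hallucination("Paris is in France.", "France contains Paris."))
```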
https://aclanthology.org/2024.semeval-1.43.bib
https://aclanthology.org/2024.semeval-1.43/
@inproceedings{sengupta-etal-2024-hijli, title = "{HIJLI}{\_}{JU} at {S}em{E}val-2024 Task 7: Enhancing Quantitative Question Answering Using Fine-tuned {BERT} Models", author = "Sengupta, Partha and Sarkar, Sandip and Das, Dipankar", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.43", doi = "10.18653/v1/2024.semeval-1.43", pages = "279--284", abstract = "In data and numerical analysis, Quantitative Question Answering (QQA) becomes a crucial instrument that provides deep insights for analyzing large datasets and helps make well-informed decisions in industries such as finance, healthcare, and business. This paper explores the {``}HIJLI{\_}JU{''} team{'}s involvement in NumEval Task 1 within SemEval 2024, with a particular emphasis on quantitative comprehension. Specifically, our method addresses numerical complexities by fine-tuning a BERT model for sophisticated multiple-choice question answering, leveraging the Hugging Face ecosystem. The effectiveness of our QQA model is assessed using a variety of metrics, with an emphasis on the f1{\_}score() of the scikit-learn library. Thorough analysis of the macro-F1, micro-F1, weighted-F1, average, and binary-F1 scores yields detailed insights into the model{'}s performance in a range of question formats.", }
In data and numerical analysis, Quantitative Question Answering (QQA) becomes a crucial instrument that provides deep insights for analyzing large datasets and helps make well-informed decisions in industries such as finance, healthcare, and business. This paper explores the "HIJLI_JU" team's involvement in NumEval Task 1 within SemEval 2024, with a particular emphasis on quantitative comprehension. Specifically, our method addresses numerical complexities by fine-tuning a BERT model for sophisticated multiple-choice question answering, leveraging the Hugging Face ecosystem. The effectiveness of our QQA model is assessed using a variety of metrics, with an emphasis on the f1_score() of the scikit-learn library. Thorough analysis of the macro-F1, micro-F1, weighted-F1, average, and binary-F1 scores yields detailed insights into the model's performance in a range of question formats.
[ "Sengupta, Partha", "Sarkar, S", "ip", "Das, Dipankar" ]
HIJLI_JU at SemEval-2024 Task 7: Enhancing Quantitative Question Answering Using Fine-tuned BERT Models
semeval-1.43
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
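The metric computation the abstract mentions is scikit-learn's f1_score under its different averaging modes; a minimal sketch on toy labels follows.

```python
# Sketch: f1_score under the averaging modes the abstract lists.
from sklearn.metrics import f1_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

for avg in ("macro", "micro", "weighted", "binary"):
    print(avg, f1_score(y_true, y_pred, average=avg))
```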
https://aclanthology.org/2024.semeval-1.44.bib
https://aclanthology.org/2024.semeval-1.44/
@inproceedings{li-etal-2024-ncl, title = "{NCL} Team at {S}em{E}val-2024 Task 3: Fusing Multimodal Pre-training Embeddings for Emotion Cause Prediction in Conversations", author = "Li, Shu and Liao, Zicen and Liang, Huizhi", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.44", doi = "10.18653/v1/2024.semeval-1.44", pages = "285--290", abstract = "In this study, we introduce an MLP approach for extracting multimodal cause utterances in conversations, utilizing the multimodal conversational emotion causes from the ECF dataset. Our research focuses on evaluating a bi-modal framework that integrates video and audio embeddings to analyze emotional expressions within dialogues. The core of our methodology involves the extraction of embeddings from pre-trained models for each modality, followed by their concatenation and subsequent classification via an MLP network. We compared the accuracy performances across different modality combinations including text-audio-video, video-audio, and audio only.", }
In this study, we introduce an MLP approach for extracting multimodal cause utterances in conversations, utilizing the multimodal conversational emotion causes from the ECF dataset. Our research focuses on evaluating a bi-modal framework that integrates video and audio embeddings to analyze emotional expressions within dialogues. The core of our methodology involves the extraction of embeddings from pre-trained models for each modality, followed by their concatenation and subsequent classification via an MLP network. We compared the accuracy performances across different modality combinations including text-audio-video, video-audio, and audio only.
[ "Li, Shu", "Liao, Zicen", "Liang, Huizhi" ]
NCL Team at SemEval-2024 Task 3: Fusing Multimodal Pre-training Embeddings for Emotion Cause Prediction in Conversations
semeval-1.44
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
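The bi-modal fusion described above reduces to concatenating pre-extracted embeddings and classifying with an MLP; a minimal sketch follows, with all dimensions as illustrative assumptions.

```python
# Sketch: utterance-level fusion of video and audio embeddings.
import torch
import torch.nn as nn

class FusionMLP(nn.Module):
    def __init__(self, video_dim=512, audio_dim=256, n_classes=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(video_dim + audio_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, video_emb, audio_emb):
        # Fusion by simple concatenation of the two modality vectors.
        return self.net(torch.cat([video_emb, audio_emb], dim=-1))

logits = FusionMLP()(torch.randn(4, 512), torch.randn(4, 256))
print(logits.shape)  # torch.Size([4, 7])
```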
https://aclanthology.org/2024.semeval-1.45.bib
https://aclanthology.org/2024.semeval-1.45/
@inproceedings{siino-2024-deberta, title = "{D}e{BERT}a at {S}em{E}val-2024 Task 9: Using {D}e{BERT}a for Defying Common Sense", author = "Siino, Marco", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.45", doi = "10.18653/v1/2024.semeval-1.45", pages = "291--297", abstract = "The widespread success of language models has spurred the natural language processing (NLP) community to tackle tasks demanding implicit and intricate reasoning, drawing upon human-like common-sense mechanisms. While endeavors in vertical thinking tasks have garnered considerable attention, there has been a relative dearth of exploration in lateral thinking puzzles. To address this gap, we introduce BRAINTEASER: a multiple-choice Question Answering task meticulously crafted to evaluate the model{'}s capacity for lateral thinking and its ability to challenge default common-sense associations. At the SemEval-2024 Task 9, for the first subtask (i.e., Sentence Puzzle) the organizers asked the participants to develop models able to reply to multi-answer brain-teasing questions. For this purpose, we propose the application of a DeBERTa model in a zero-shot configuration. Our proposed approach is able to reach an overall score of 0.250. Suggesting a significant room for improvements in future works.", }
The widespread success of language models has spurred the natural language processing (NLP) community to tackle tasks demanding implicit and intricate reasoning, drawing upon human-like common-sense mechanisms. While endeavors in vertical thinking tasks have garnered considerable attention, there has been a relative dearth of exploration in lateral thinking puzzles. To address this gap, we introduce BRAINTEASER: a multiple-choice Question Answering task meticulously crafted to evaluate the model's capacity for lateral thinking and its ability to challenge default common-sense associations. At SemEval-2024 Task 9, for the first subtask (i.e., Sentence Puzzle), the organizers asked the participants to develop models able to reply to multi-answer brain-teasing questions. For this purpose, we propose the application of a DeBERTa model in a zero-shot configuration. Our proposed approach is able to reach an overall score of 0.250, suggesting significant room for improvement in future work.
[ "Siino, Marco" ]
DeBERTa at SemEval-2024 Task 9: Using DeBERTa for Defying Common Sense
semeval-1.45
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
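Zero-shot multiple choice with a DeBERTa NLI checkpoint can be sketched via the zero-shot-classification pipeline; the paper's exact configuration may differ.

```python
# Sketch: rank answer options as zero-shot labels for the question.
from transformers import pipeline

zero_shot = pipeline("zero-shot-classification",
                     model="microsoft/deberta-large-mnli")
question = "A man shaves several times a day, yet keeps his beard."
choices = ["He is a barber.", "He dislikes shaving.", "He has no razor."]
print(zero_shot(question, candidate_labels=choices)["labels"][0])
```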
https://aclanthology.org/2024.semeval-1.46.bib
https://aclanthology.org/2024.semeval-1.46/
@inproceedings{siino-2024-transmistral, title = "{T}rans{M}istral at {S}em{E}val-2024 Task 10: Using Mistral 7{B} for Emotion Discovery and Reasoning its Flip in Conversation", author = "Siino, Marco", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.46", doi = "10.18653/v1/2024.semeval-1.46", pages = "298--304", abstract = "The EDiReF shared task at SemEval 2024 comprises three subtasks: Emotion Recognition in Conversation (ERC) in Hindi-English code-mixed conversations, Emotion Flip Reasoning (EFR) in Hindi-English code-mixed conversations, and EFR in English conversations. The objectives for the ERC and EFR tasks are defined as follows: 1) Emotion Recognition in Conversation (ERC): In this task, participants are tasked with assigning an emotion to each utterance within a dialogue from a predefined set of possible emotions. The goal is to accurately recognize and label the emotions expressed in the conversation; 2) Emotion Flip Reasoning (EFR): This task involves identifying the trigger utterance(s) for an emotion-flip within a multi-party conversation dialogue. Participants are required to pinpoint the specific utterance(s) that serve as catalysts for a change in emotion during the conversation. In this paper we only address the first subtask (ERC) making use of an online translation strategy followed by the application of a Mistral 7B model together with a few-shot prompt strategy. Our approach obtains an F1 of 0.36, eventually exhibiting further room for improvements.", }
The EDiReF shared task at SemEval 2024 comprises three subtasks: Emotion Recognition in Conversation (ERC) in Hindi-English code-mixed conversations, Emotion Flip Reasoning (EFR) in Hindi-English code-mixed conversations, and EFR in English conversations. The objectives for the ERC and EFR tasks are defined as follows: 1) Emotion Recognition in Conversation (ERC): In this task, participants are tasked with assigning an emotion to each utterance within a dialogue from a predefined set of possible emotions. The goal is to accurately recognize and label the emotions expressed in the conversation; 2) Emotion Flip Reasoning (EFR): This task involves identifying the trigger utterance(s) for an emotion-flip within a multi-party conversation dialogue. Participants are required to pinpoint the specific utterance(s) that serve as catalysts for a change in emotion during the conversation. In this paper we only address the first subtask (ERC), making use of an online translation strategy followed by the application of a Mistral 7B model together with a few-shot prompt strategy. Our approach obtains an F1 of 0.36, leaving further room for improvement.
[ "Siino, Marco" ]
TransMistral at SemEval-2024 Task 10: Using Mistral 7B for Emotion Discovery and Reasoning its Flip in Conversation
semeval-1.46
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
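The translate-then-prompt pipeline above can be sketched as follows; both the translation call and the Mistral call are stubs, since the exact translation service and model setup are not specified here.

```python
# Sketch: translate a code-mixed utterance, then classify its emotion
# with a few-shot prompt. Both backend calls are stubs.
FEW_SHOT = ("Utterance: I can't believe we won! -> joy\n"
            "Utterance: Leave me alone. -> anger\n")

def translate(text: str) -> str:
    return text  # stub standing in for the online translation step

def mistral_generate(prompt: str) -> str:
    return "joy"  # stub standing in for a Mistral 7B completion

def classify_emotion(utterance: str) -> str:
    english = translate(utterance)
    prompt = f"{FEW_SHOT}Utterance: {english} -> "
    return mistral_generate(prompt).strip()

print(classify_emotion("Yaar, we finally won!"))
```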
https://aclanthology.org/2024.semeval-1.47.bib
https://aclanthology.org/2024.semeval-1.47/
@inproceedings{lu-kao-2024-0x, title = "0x.{Y}uan at {S}em{E}val-2024 Task 2: Agents Debating can reach consensus and produce better outcomes in Medical {NLI} task", author = "Lu, Yu-an and Kao, Hung-yu", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.47", doi = "10.18653/v1/2024.semeval-1.47", pages = "305--310", abstract = "In this paper, we introduce a multi-agent debating framework, experimenting on SemEval 2024 Task 2. This innovative system employs a collaborative approach involving expert agents from various medical fields to analyze Clinical Trial Reports (CTRs). Our methodology emphasizes nuanced and comprehensive analysis by leveraging the diverse expertise of agents like Biostatisticians and Medical Linguists. Results indicate that our collaborative model surpasses the performance of individual agents in terms of Macro F1-score. Additionally, our analysis suggests that while initial debates often mirror majority decisions, the debating process refines these outcomes, demonstrating the system{'}s capability for in-depth analysis beyond simple majority rule. This research highlights the potential of AI collaboration in specialized domains, particularly in medical text interpretation.", }
In this paper, we introduce a multi-agent debating framework, experimenting on SemEval 2024 Task 2. This innovative system employs a collaborative approach involving expert agents from various medical fields to analyze Clinical Trial Reports (CTRs). Our methodology emphasizes nuanced and comprehensive analysis by leveraging the diverse expertise of agents like Biostatisticians and Medical Linguists. Results indicate that our collaborative model surpasses the performance of individual agents in terms of Macro F1-score. Additionally, our analysis suggests that while initial debates often mirror majority decisions, the debating process refines these outcomes, demonstrating the system{'}s capability for in-depth analysis beyond simple majority rule. This research highlights the potential of AI collaboration in specialized domains, particularly in medical text interpretation.
[ "Lu, Yu-an", "Kao, Hung-yu" ]
0x.Yuan at SemEval-2024 Task 2: Agents Debating can reach consensus and produce better outcomes in Medical NLI task
semeval-1.47
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
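The debating framework above can be sketched as repeated rounds among persona agents followed by a majority verdict; the agents here are stubs standing in for persona-conditioned LLM calls.

```python
# Sketch: two-round debate among expert-persona agents.
def biostatistician(claim, history):
    return "entailment"  # stub agent

def medical_linguist(claim, history):
    return "contradiction" if not history else "entailment"  # stub agent

AGENTS = [biostatistician, medical_linguist]

def debate(claim: str, rounds: int = 2) -> str:
    history = []
    for _ in range(rounds):
        # Each round, every agent answers seeing the previous round.
        history = [agent(claim, history) for agent in AGENTS]
    # Final verdict: majority of the last round's answers.
    return max(set(history), key=history.count)

print(debate("The primary trial reports reduced mortality."))
```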
https://aclanthology.org/2024.semeval-1.48.bib
https://aclanthology.org/2024.semeval-1.48/
@inproceedings{tian-etal-2024-tw, title = "{TW}-{NLP} at {S}em{E}val-2024 Task10: Emotion Recognition and Emotion Reversal Inference in Multi-Party Dialogues.", author = "Tian, Wei and Ji, Peiyu and Zhang, Lei and Jian, Yue", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.48", doi = "10.18653/v1/2024.semeval-1.48", pages = "311--315", abstract = "In multidimensional dialogues, emotions serve not only as crucial mediators of emotional exchanges but also carry rich information. Therefore, accurately identifying the emotions of interlocutors and understanding the triggering factors of emotional changes are paramount. This study focuses on the tasks of multilingual dialogue emotion recognition and emotion reversal reasoning based on provocateurs, aiming to enhance the accuracy and depth of emotional understanding in dialogues. To achieve this goal, we propose a novel model, MBERT-TextRCNN-PL, designed to effectively capture emotional information of interlocutors. Additionally, we introduce XGBoost-EC (Emotion Capturer) to identify emotion provocateurs, thereby delving deeper into the causal relationships behind emotional changes. By comparing with state-of-the-art models, our approach demonstrates significant improvements in recognizing dialogue emotions and provocateurs, offering new insights and methodologies for multilingual dialogue emotion understanding and emotion reversal research.", }
In multidimensional dialogues, emotions serve not only as crucial mediators of emotional exchanges but also carry rich information. Therefore, accurately identifying the emotions of interlocutors and understanding the triggering factors of emotional changes are paramount. This study focuses on the tasks of multilingual dialogue emotion recognition and emotion reversal reasoning based on provocateurs, aiming to enhance the accuracy and depth of emotional understanding in dialogues. To achieve this goal, we propose a novel model, MBERT-TextRCNN-PL, designed to effectively capture emotional information of interlocutors. Additionally, we introduce XGBoost-EC (Emotion Capturer) to identify emotion provocateurs, thereby delving deeper into the causal relationships behind emotional changes. By comparing with state-of-the-art models, our approach demonstrates significant improvements in recognizing dialogue emotions and provocateurs, offering new insights and methodologies for multilingual dialogue emotion understanding and emotion reversal research.
[ "Tian, Wei", "Ji, Peiyu", "Zhang, Lei", "Jian, Yue" ]
TW-NLP at SemEval-2024 Task10: Emotion Recognition and Emotion Reversal Inference in Multi-Party Dialogues.
semeval-1.48
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
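An XGBoost trigger classifier in the spirit of XGBoost-EC can be sketched as follows; the random features stand in for real utterance representations.

```python
# Sketch: binary trigger classification with XGBoost on toy features.
import numpy as np
from xgboost import XGBClassifier

X = np.random.rand(100, 16)        # utterance feature vectors
y = np.random.randint(0, 2, 100)   # 1 = emotion-flip trigger

clf = XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(X, y)
print(clf.predict(X[:5]))
```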
https://aclanthology.org/2024.semeval-1.49.bib
https://aclanthology.org/2024.semeval-1.49/
@inproceedings{baloun-etal-2024-uwba, title = "{UWBA} at {S}em{E}val-2024 Task 3: Dialogue Representation and Multimodal Fusion for Emotion Cause Analysis", author = "Baloun, Josef and Martinek, Jiri and Lenc, Ladislav and Kral, Pavel and Zeman, Mat{\v{e}}j and Vl{\v{c}}ek, Luk{\'a}{\v{s}}", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.49", doi = "10.18653/v1/2024.semeval-1.49", pages = "316--325", abstract = "In this paper, we present an approach for solving SemEval-2024 Task 3: The Competition of Multimodal Emotion Cause Analysis in Conversations. The task includes two subtasks that focus on emotion-cause pair extraction using text, video, and audio modalities. Our approach is composed of encoding all modalities (MFCC and Wav2Vec for audio, 3D-CNN for video, and transformer-based models for text) and combining them in an utterance-level fusion module. The model is then optimized for link and emotion prediction simultaneously. Our approach achieved 6th place in both subtasks. The full leaderboard can be found at https://codalab.lisn.upsaclay.fr/competitions/16141{\#}results", }
In this paper, we present an approach for solving SemEval-2024 Task 3: The Competition of Multimodal Emotion Cause Analysis in Conversations. The task includes two subtasks that focus on emotion-cause pair extraction using text, video, and audio modalities. Our approach is composed of encoding all modalities (MFCC and Wav2Vec for audio, 3D-CNN for video, and transformer-based models for text) and combining them in an utterance-level fusion module. The model is then optimized for link and emotion prediction simultaneously. Our approach achieved 6th place in both subtasks. The full leaderboard can be found at https://codalab.lisn.upsaclay.fr/competitions/16141#results
[ "Baloun, Josef", "Martinek, Jiri", "Lenc, Ladislav", "Kral, Pavel", "Zeman, Mat{\\v{e}}j", "Vl{\\v{c}}ek, Luk{\\'a}{\\v{s}}" ]
UWBA at SemEval-2024 Task 3: Dialogue Representation and Multimodal Fusion for Emotion Cause Analysis
semeval-1.49
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
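To illustrate the utterance-level fusion described above, here is a minimal PyTorch sketch: project each modality into a shared space, concatenate, and predict emotion and cause-link scores jointly. The input dimensions, shared width, and seven-way emotion head are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class UtteranceFusion(nn.Module):
    """Concatenate per-modality projections; two joint prediction heads."""
    def __init__(self, d_text=768, d_audio=512, d_video=512, d=256, n_emotions=7):
        super().__init__()
        self.pt = nn.Linear(d_text, d)
        self.pa = nn.Linear(d_audio, d)
        self.pv = nn.Linear(d_video, d)
        self.emotion_head = nn.Linear(3 * d, n_emotions)
        self.link_head = nn.Linear(3 * d, 1)   # is this utterance a cause?

    def forward(self, t, a, v):
        z = torch.cat([self.pt(t), self.pa(a), self.pv(v)], dim=-1)
        return self.emotion_head(z), self.link_head(z)

emo, link = UtteranceFusion()(torch.randn(2, 768),
                              torch.randn(2, 512),
                              torch.randn(2, 512))
```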
https://aclanthology.org/2024.semeval-1.50.bib
https://aclanthology.org/2024.semeval-1.50/
@inproceedings{nguyen-zhang-2024-gavx, title = "{GAV}x at {S}em{E}val-2024 Task 10: Emotion Flip Reasoning via Stacked Instruction Finetuning of {LLM}s", author = "Nguyen, Vy and Zhang, Xiuzhen", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.50", doi = "10.18653/v1/2024.semeval-1.50", pages = "326--336", abstract = "The Emotion Flip Reasoning task at SemEval 2024 aims at identifying the utterance(s) that trigger a speaker to shift from an emotion to another in a multi-party conversation. The spontaneous, informal, and occasionally multilingual dynamics of conversations make the task challenging. In this paper, we propose a supervised stacked instruction-based framework to finetune large language models to tackle this task. Utilising the annotated datasets provided, we curate multiple instruction sets involving chain-of-thoughts, feedback, and self-evaluation instructions, for a multi-step finetuning pipeline. We utilise the self-consistency inference strategy to enhance prediction consistency. Experimental results reveal commendable performance, achieving mean F1 scores of 0.77 and 0.76 for triggers in the Hindi-English and English-only tracks respectively. This led to us earning the second highest ranking in both tracks.", }
The Emotion Flip Reasoning task at SemEval-2024 aims at identifying the utterance(s) that trigger a speaker to shift from one emotion to another in a multi-party conversation. The spontaneous, informal, and occasionally multilingual dynamics of conversations make the task challenging. In this paper, we propose a supervised stacked instruction-based framework for finetuning large language models to tackle this task. Utilising the annotated datasets provided, we curate multiple instruction sets involving chain-of-thought, feedback, and self-evaluation instructions for a multi-step finetuning pipeline. We utilise the self-consistency inference strategy to enhance prediction consistency. Experimental results reveal commendable performance, achieving mean F1 scores of 0.77 and 0.76 for triggers in the Hindi-English and English-only tracks respectively, which earned us the second-highest ranking in both tracks. (A minimal self-consistency sketch follows this record.)
[ "Nguyen, Vy", "Zhang, Xiuzhen" ]
GAVx at SemEval-2024 Task 10: Emotion Flip Reasoning via Stacked Instruction Finetuning of LLMs
semeval-1.50
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
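The self-consistency strategy mentioned above amounts to sampling the model several times and keeping the majority answer. A minimal sketch, assuming an arbitrary `sample` callable in place of the authors' finetuned LLM:

```python
from collections import Counter
from typing import Callable, List

def self_consistent_answer(prompt: str,
                           sample: Callable[[str], str],
                           n: int = 5) -> str:
    """Query the model n times and return the majority answer.
    `sample` is a placeholder for one call to the finetuned LLM."""
    votes: List[str] = [sample(prompt).strip() for _ in range(n)]
    return Counter(votes).most_common(1)[0][0]

# Usage with a dummy sampler:
print(self_consistent_answer("Which utterance flips the emotion?",
                             lambda p: "utterance 3"))
```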
https://aclanthology.org/2024.semeval-1.51.bib
https://aclanthology.org/2024.semeval-1.51/
@inproceedings{su-zhou-2024-nlp, title = "{NLP}{\_}{STR}{\_}team{S} at {S}em{E}val-2024 Task1: Semantic Textual Relatedness based on {MASK} Prediction and {BERT} Model", author = "Su, Lianshuang and Zhou, Xiaobing", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.51", doi = "10.18653/v1/2024.semeval-1.51", pages = "337--341", abstract = "This paper describes our participation in the SemEval-2024 Task 1, {``}Semantic Textual Relatedness for African and Asian Languages.{''} This task detects the degree of semantic relatedness between pairs of sentences. Our approach is to take out the sentence pairs of each instance to construct a new sentence as the prompt template, use MASK to predict the correlation between the two sentences, use the BERT pre-training model to process and calculate the text sequence, and use the synonym replacement method in text data augmentation to expand the size of the data set. We participate in English in track A, which uses a supervised approach, and the Spearman Correlation on the test set is 0.809.", }
This paper describes our participation in SemEval-2024 Task 1, {``}Semantic Textual Relatedness for African and Asian Languages.{''} This task detects the degree of semantic relatedness between pairs of sentences. Our approach combines each instance{'}s sentence pair into a new sentence that serves as a prompt template, uses a MASK token to predict the relatedness between the two sentences, processes the text sequence with a pre-trained BERT model, and applies synonym-replacement data augmentation to expand the dataset. We participate in the English track of Track A, which uses a supervised approach, and the Spearman correlation on the test set is 0.809. (A minimal mask-prediction sketch follows this record.)
[ "Su, Lianshuang", "Zhou, Xiaobing" ]
NLP_STR_teamS at SemEval-2024 Task1: Semantic Textual Relatedness based on MASK Prediction and BERT Model
semeval-1.51
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
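To illustrate MASK-based relatedness scoring of the kind described above, here is a minimal sketch with Hugging Face transformers. The prompt template and the verbalizer words ("closely" / "not") are invented for illustration; the paper's exact template is not reproduced in this record.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def relatedness(s1: str, s2: str) -> float:
    # Hypothetical template; the paper's actual prompt may differ.
    text = f'"{s1}" and "{s2}" are {tok.mask_token} related.'
    enc = tok(text, return_tensors="pt")
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero().item()
    with torch.no_grad():
        probs = model(**enc).logits[0, mask_pos].softmax(-1)
    p_yes = probs[tok.convert_tokens_to_ids("closely")].item()
    p_no = probs[tok.convert_tokens_to_ids("not")].item()
    return p_yes / (p_yes + p_no)  # map verbalizer odds into [0, 1]

print(relatedness("A cat sleeps.", "A kitten is napping."))
```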
https://aclanthology.org/2024.semeval-1.52.bib
https://aclanthology.org/2024.semeval-1.52/
@inproceedings{mehta-etal-2024-halu, title = "Halu-{NLP} at {S}em{E}val-2024 Task 6: {M}eta{C}heck{GPT} - A Multi-task Hallucination Detection using {LLM} uncertainty and meta-models", author = "Mehta, Rahul and Hoblitzell, Andrew and O{'}keefe, Jack and Jang, Hyeju and Varma, Vasudeva", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.52", doi = "10.18653/v1/2024.semeval-1.52", pages = "342--348", abstract = "Hallucinations in large language models(LLMs) have recently become a significantproblem. A recent effort in this directionis a shared task at Semeval 2024 Task 6,SHROOM, a Shared-task on Hallucinationsand Related Observable Overgeneration Mis-takes. This paper describes our winning so-lution ranked 1st and 2nd in the 2 sub-tasksof model agnostic and model aware tracks re-spectively. We propose a meta-regressor basedensemble of LLMs based on a random forestalgorithm that achieves the highest scores onthe leader board. We also experiment with var-ious transformer based models and black boxmethods like ChatGPT, Vectara, and others. Inaddition, we perform an error analysis com-paring ChatGPT against our best model whichshows the limitations of the former", }
Hallucinations in large language models (LLMs) have recently become a significant problem. A recent effort in this direction is a shared task at SemEval-2024 Task 6, SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes. This paper describes our winning solution, ranked 1st and 2nd in the two sub-tasks of the model-agnostic and model-aware tracks respectively. We propose a meta-regressor-based ensemble of LLMs built on a random forest algorithm that achieves the highest scores on the leaderboard. We also experiment with various transformer-based models and black-box methods like ChatGPT, Vectara, and others. In addition, we perform an error analysis comparing ChatGPT against our best model, which shows the limitations of the former. (A toy meta-regressor sketch follows this record.)
[ "Mehta, Rahul", "Hoblitzell, Andrew", "O{'}keefe, Jack", "Jang, Hyeju", "Varma, Vasudeva" ]
Halu-NLP at SemEval-2024 Task 6: MetaCheckGPT - A Multi-task Hallucination Detection using LLM uncertainty and meta-models
semeval-1.52
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
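The meta-regressor idea is to treat the scores of several base hallucination detectors as features for a random forest. A toy sketch with scikit-learn; all numbers are invented stand-ins, not the paper's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: hallucination scores for one output from several base
# detectors / LLM judges (values invented for illustration).
X = np.array([[0.9, 0.7, 0.8],
              [0.1, 0.2, 0.3],
              [0.6, 0.4, 0.5]])
y = np.array([1.0, 0.0, 0.5])   # human hallucination ratings

meta = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(meta.predict([[0.8, 0.6, 0.7]]))  # combined hallucination score
```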
https://aclanthology.org/2024.semeval-1.53.bib
https://aclanthology.org/2024.semeval-1.53/
@inproceedings{wang-etal-2024-qfnu, title = "{QFNU}{\_}{CS} at {S}em{E}val-2024 Task 3: A Hybrid Pre-trained Model based Approach for Multimodal Emotion-Cause Pair Extraction Task", author = "Wang, Zining and Zhao, Yanchao and Han, Guanghui and Song, Yang", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.53", doi = "10.18653/v1/2024.semeval-1.53", pages = "349--353", abstract = "This article presents the solution of Qufu Normal University for the Multimodal Sentiment Cause Analysis competition in SemEval2024 Task 3.The competition aims to extract emotion-cause pairs from dialogues containing text, audio, and video modalities. To cope with this task, we employ a hybrid pre-train model based approach. Specifically, we first extract and fusion features from dialogues based on BERT, BiLSTM, openSMILE and C3D. Then, we adopt BiLSTM and Transformer to extract the candidate emotion-cause pairs. Finally, we design a filter to identify the correct emotion-cause pairs. The evaluation results show that, we achieve a weighted average F1 score of 0.1786 and an F1 score of 0.1882 on CodaLab.", }
This article presents the solution of Qufu Normal University for the Multimodal Emotion Cause Analysis competition in SemEval-2024 Task 3. The competition aims to extract emotion-cause pairs from dialogues containing text, audio, and video modalities. To cope with this task, we employ a hybrid pre-trained-model-based approach. Specifically, we first extract and fuse features from dialogues using BERT, BiLSTM, openSMILE, and C3D. Then, we adopt a BiLSTM and a Transformer to extract candidate emotion-cause pairs. Finally, we design a filter to identify the correct emotion-cause pairs. The evaluation results show that we achieve a weighted average F1 score of 0.1786 and an F1 score of 0.1882 on CodaLab.
[ "Wang, Zining", "Zhao, Yanchao", "Han, Guanghui", "Song, Yang" ]
QFNU_CS at SemEval-2024 Task 3: A Hybrid Pre-trained Model based Approach for Multimodal Emotion-Cause Pair Extraction Task
semeval-1.53
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.54.bib
https://aclanthology.org/2024.semeval-1.54/
@inproceedings{tran-tran-2024-newbieml, title = "{N}ewbie{ML} at {S}em{E}val-2024 Task 8: Ensemble Approach for Multidomain Machine-Generated Text Detection", author = "Tran, Bao and Tran, Nhi", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.54", doi = "10.18653/v1/2024.semeval-1.54", pages = "354--360", abstract = "Large Language Models (LLMs) are becoming popular and easily accessible, leading to a large growth of machine-generated content over various channels. Along with this popularity, the potential misuse is also a challenge for us. In this paper, we use SemEval 2024 task A monolingual dataset with comparative study between some machine learning model with feature extraction and develop an ensemble method for our system. Our system achieved 84.31{\%} accuracy score in the test set, ranked 36th of 137 participants. Our code is available at: https://github.com/baoivy/SemEval-Task8", }
Large Language Models (LLMs) are becoming popular and easily accessible, leading to a large growth of machine-generated content across various channels. Along with this popularity, their potential misuse is also a challenge. In this paper, we use the SemEval-2024 Task 8 Subtask A monolingual dataset, conduct a comparative study of several machine learning models with feature extraction, and develop an ensemble method for our system. Our system achieved an 84.31{\%} accuracy score on the test set, ranking 36th of 137 participants. Our code is available at: https://github.com/baoivy/SemEval-Task8 (A minimal soft-voting ensemble sketch follows this record.)
[ "Tran, Bao", "Tran, Nhi" ]
NewbieML at SemEval-2024 Task 8: Ensemble Approach for Multidomain Machine-Generated Text Detection
semeval-1.54
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
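The paper's exact feature set and model list are not given in this record; as a generic sketch of an ensemble over feature-extracted text, here is a TF-IDF soft-voting classifier in scikit-learn (toy data, hypothetical model choices):

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["a human wrote this", "as an AI language model ..."]  # toy data
labels = [0, 1]                                                # 1 = machine

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("nb", MultinomialNB()),
                    ("rf", RandomForestClassifier())],
        voting="soft"),   # average predicted probabilities across models
)
clf.fit(texts, labels)
print(clf.predict(["this text was generated by a model"]))
```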
https://aclanthology.org/2024.semeval-1.55.bib
https://aclanthology.org/2024.semeval-1.55/
@inproceedings{takahashi-2024-hidetsune, title = "Hidetsune at {S}em{E}val-2024 Task 3: A Simple Textual Approach to Emotion Classification and Emotion Cause Analysis in Conversations Using Machine Learning and Next Sentence Prediction", author = "Takahashi, Hidetsune", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.55", doi = "10.18653/v1/2024.semeval-1.55", pages = "361--364", abstract = "In this system paper for SemEval-2024 Task3 subtask 2, I present my simple textual approach to emotion classification and emotioncause analysis in conversations using machinelearning and next sentence prediction. I train aSpaCy model for emotion classification and usenext sentence prediction with BERT for emotion cause analysis. While speaker names andaudio-visual clips are given in addition to textof the conversations, my approach uses textualdata only to test my methodology to combinemachine learning with next sentence prediction.This paper reveals both strengths and weaknesses of my trial, suggesting a direction offuture studies to improve my introductory solution.", }
In this system paper for SemEval-2024 Task 3 subtask 2, I present my simple textual approach to emotion classification and emotion cause analysis in conversations using machine learning and next sentence prediction. I train a SpaCy model for emotion classification and use next sentence prediction with BERT for emotion cause analysis. While speaker names and audio-visual clips are given in addition to the text of the conversations, my approach uses textual data only to test my methodology to combine machine learning with next sentence prediction. This paper reveals both strengths and weaknesses of my trial, suggesting a direction of future studies to improve my introductory solution. (A minimal next-sentence-prediction sketch follows this record.)
[ "Takahashi, Hidetsune" ]
Hidetsune at SemEval-2024 Task 3: A Simple Textual Approach to Emotion Classification and Emotion Cause Analysis in Conversations Using Machine Learning and Next Sentence Prediction
semeval-1.55
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
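Next sentence prediction with BERT, as used above for cause analysis, can be sketched as scoring how plausibly the emotional utterance follows a candidate cause. The pairing scheme below is an assumption about how the method could be applied, not the author's exact setup.

```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

def continuation_prob(candidate_cause: str, emotional_utterance: str) -> float:
    """P(utterance follows candidate cause); label 0 means 'B follows A'."""
    enc = tok(candidate_cause, emotional_utterance, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return logits.softmax(-1)[0, 0].item()

print(continuation_prob("You broke my favourite mug.", "I am so angry!"))
```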
https://aclanthology.org/2024.semeval-1.56.bib
https://aclanthology.org/2024.semeval-1.56/
@inproceedings{vaidya-etal-2024-clteam1, title = "{CLT}eam1 at {S}em{E}val-2024 Task 10: Large Language Model based ensemble for Emotion Detection in {H}inglish", author = "Vaidya, Ankit and Gokhale, Aditya and Desai, Arnav and Shukla, Ishaan and Sonawane, Sheetal", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.56", doi = "10.18653/v1/2024.semeval-1.56", pages = "365--369", abstract = "This paper outlines our approach for the ERC subtask of the SemEval 2024 EdiREF Shared Task. In this sub-task, an emotion had to be assigned to an utterance which was the part of a dialogue. The utterance had to be classified into one of the following classes- disgust, contempt, anger, neutral, joy, sadness, fear, surprise. Our proposed system makes use of an ensemble of language specific RoBERTA and BERT models to tackle the problem. A weighted F1-score of 44{\%} was achieved by our system in this task. We conducted comprehensive ablations and suggested directions of future work. Our codebase is available publicly.", }
This paper outlines our approach for the ERC subtask of the SemEval-2024 EDiReF Shared Task. In this sub-task, an emotion had to be assigned to an utterance which was part of a dialogue. The utterance had to be classified into one of the following classes: disgust, contempt, anger, neutral, joy, sadness, fear, surprise. Our proposed system makes use of an ensemble of language-specific RoBERTa and BERT models to tackle the problem. A weighted F1-score of 44{\%} was achieved by our system in this task. We conducted comprehensive ablations and suggested directions of future work. Our codebase is available publicly.
[ "Vaidya, Ankit", "Gokhale, Aditya", "Desai, Arnav", "Shukla, Ishaan", "Sonawane, Sheetal" ]
CLTeam1 at SemEval-2024 Task 10: Large Language Model based ensemble for Emotion Detection in Hinglish
semeval-1.56
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.57.bib
https://aclanthology.org/2024.semeval-1.57/
@inproceedings{takahashi-2024-hidetsune-semeval, title = "Hidetsune at {S}em{E}val-2024 Task 4: An Application of Machine Learning to Multilingual Propagandistic Memes Identification Using Machine Translation", author = "Takahashi, Hidetsune", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.57", doi = "10.18653/v1/2024.semeval-1.57", pages = "370--373", abstract = "In this system paper for SemEval-2024 Task4 subtask 2b, I present my approach to identifying propagandistic memes in multiple languages. I firstly establish a baseline for Englishand then implement the model into other languages (Bulgarian, North Macedonian and Arabic) by using machine translation. Data fromother subtasks (subtask 1, subtask 2a) are alsoused in addition to data for this subtask, andadditional data from Kaggle are concatenatedto these in order to enhance the model. Theresults show a high reliability of my Englishbaseline and a room for improvement of itsimplementation.", }
In this system paper for SemEval-2024 Task 4 subtask 2b, I present my approach to identifying propagandistic memes in multiple languages. I first establish a baseline for English and then implement the model in other languages (Bulgarian, North Macedonian, and Arabic) by using machine translation. Data from other subtasks (subtask 1, subtask 2a) are also used in addition to data for this subtask, and additional data from Kaggle are concatenated to these in order to enhance the model. The results show a high reliability of my English baseline and room for improvement in its implementation. (A minimal translate-then-classify sketch follows this record.)
[ "Takahashi, Hidetsune" ]
Hidetsune at SemEval-2024 Task 4: An Application of Machine Learning to Multilingual Propagandistic Memes Identification Using Machine Translation
semeval-1.57
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
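A minimal sketch of the translate-then-classify idea, assuming a public OPUS-MT Bulgarian-English checkpoint (the paper does not state which MT system was used) and an arbitrary English-trained classifier:

```python
from transformers import pipeline

# Checkpoint name is an assumption for illustration, not the paper's choice.
translate_bg_en = pipeline("translation", model="Helsinki-NLP/opus-mt-bg-en")

def classify_meme_text(bulgarian_text: str, english_classifier) -> int:
    """Translate to English, then reuse the English-trained classifier."""
    english = translate_bg_en(bulgarian_text)[0]["translation_text"]
    return english_classifier(english)

# Dummy classifier standing in for the English baseline model:
print(classify_meme_text("Това е пример.", lambda text: 0))
```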
https://aclanthology.org/2024.semeval-1.58.bib
https://aclanthology.org/2024.semeval-1.58/
@inproceedings{takahashi-2024-hidetsune-semeval-2024, title = "Hidetsune at {S}em{E}val-2024 Task 10: An {E}nglish Based Approach to Emotion Recognition in {H}indi-{E}nglish code-mixed Conversations Using Machine Learning and Machine Translation", author = "Takahashi, Hidetsune", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.58", doi = "10.18653/v1/2024.semeval-1.58", pages = "374--378", abstract = "In this system paper for SemEval-2024 Task10 subtask 1 (ERC), I present my approach torecognizing emotions in Hindi-English codemixed conversations. I train a SpaCy modelwith English translated data and classify emotions behind Hindi-English code-mixed utterances by using the model and translating theminto English. I use machine translation to translate all the data in Hindi-English mixed language into English due to an easy access to existing data for emotion recognition in English.Some additional data in English are used to enhance my model. This English based approachdemonstrates a fundamental possibility and potential of simplifying code-mixed language intoone major language for emotion recognition.", }
In this system paper for SemEval-2024 Task 10 subtask 1 (ERC), I present my approach to recognizing emotions in Hindi-English code-mixed conversations. I train a SpaCy model with English-translated data and classify the emotions behind Hindi-English code-mixed utterances by translating them into English and applying the model. I use machine translation to translate all the data in the Hindi-English mixed language into English because of the easy access to existing data for emotion recognition in English. Some additional data in English are used to enhance my model. This English-based approach demonstrates a fundamental possibility and the potential of simplifying a code-mixed language into one major language for emotion recognition.
[ "Takahashi, Hidetsune" ]
Hidetsune at SemEval-2024 Task 10: An English Based Approach to Emotion Recognition in Hindi-English code-mixed Conversations Using Machine Learning and Machine Translation
semeval-1.58
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.59.bib
https://aclanthology.org/2024.semeval-1.59/
@inproceedings{siino-2024-mpnet, title = "All-Mpnet at {S}em{E}val-2024 Task 1: Application of Mpnet for Evaluating Semantic Textual Relatedness", author = "Siino, Marco", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.59", doi = "10.18653/v1/2024.semeval-1.59", pages = "379--384", abstract = "In this study, we tackle the task of automatically discerning the level of semantic relatedness between pairs of sentences. Specifically, Task 1 at SemEval-2024 involves predicting the Semantic Textual Relatedness (STR) of sentence pairs. Participants are tasked with ranking sentence pairs based on their proximity in meaning, quantified by their degree of semantic relatedness, across 14 different languages. Each sentence pair is assigned manually determined relatedness scores ranging from 0 (indicating complete lack of relation) to 1 (denoting maximum relatedness). In our submitted approach on the official test set, focusing on Task 1 (a supervised task in English and Spanish), we achieve a Spearman rank correlation coefficient of 0.808 for the English language and 0.611 for the Spanish language.", }
In this study, we tackle the task of automatically discerning the level of semantic relatedness between pairs of sentences. Specifically, Task 1 at SemEval-2024 involves predicting the Semantic Textual Relatedness (STR) of sentence pairs. Participants are tasked with ranking sentence pairs based on their proximity in meaning, quantified by their degree of semantic relatedness, across 14 different languages. Each sentence pair is assigned a manually determined relatedness score ranging from 0 (indicating complete lack of relation) to 1 (denoting maximum relatedness). In our submitted approach on the official test set, focusing on Task 1 (a supervised track, in English and Spanish), we achieve a Spearman rank correlation coefficient of 0.808 for English and 0.611 for Spanish. (A minimal encoding-and-evaluation sketch follows this record.)
[ "Siino, Marco" ]
All-Mpnet at SemEval-2024 Task 1: Application of Mpnet for Evaluating Semantic Textual Relatedness
semeval-1.59
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
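A minimal sketch of the approach implied by the team name: encode each pair with the all-mpnet-base-v2 sentence transformer, take cosine similarity, and evaluate with Spearman correlation. The data are toys, and the exact model variant is inferred from the title rather than confirmed.

```python
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")

pairs = [("A man is cooking.", "Someone prepares food."),
         ("A man is cooking.", "The stock market fell.")]
gold = [0.9, 0.1]  # toy relatedness labels

emb_a = model.encode([a for a, _ in pairs], convert_to_tensor=True)
emb_b = model.encode([b for _, b in pairs], convert_to_tensor=True)
pred = [util.cos_sim(x, y).item() for x, y in zip(emb_a, emb_b)]
print(spearmanr(pred, gold).correlation)  # rank agreement with gold scores
```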
https://aclanthology.org/2024.semeval-1.60.bib
https://aclanthology.org/2024.semeval-1.60/
@inproceedings{lu-kao-2024-0x-yuan, title = "0x.{Y}uan at {S}em{E}val-2024 Task 5: Enhancing Legal Argument Reasoning with Structured Prompts", author = "Lu, Yu-an and Kao, Hung-yu", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.60", doi = "10.18653/v1/2024.semeval-1.60", pages = "385--390", abstract = "The intersection of legal reasoning and Natural Language Processing (NLP) technologies, particularly Large Language Models (LLMs), offers groundbreaking potential for augmenting human capabilities in the legal domain. This paper presents our approach and findings from participating in SemEval-2024 Task 5, focusing on the effect of argument reasoning in civil procedures using legal reasoning prompts. We investigated the impact of structured legal reasoning methodologies, including TREACC, IRAC, IRAAC, and MIRAC, on guiding LLMs to analyze and evaluate legal arguments systematically. Our experimental setup involved crafting specific prompts based on these methodologies to instruct the LLM to dissect and scrutinize legal cases, aiming to discern the cogency of argumentative solutions within a zero-shot learning framework. The performance of our approach, as measured by F1 score and accuracy, demonstrated the efficacy of integrating structured legal reasoning into LLMs for legal analysis. The findings underscore the promise of LLMs, when equipped with legal reasoning prompts, in enhancing their ability to process and reason through complex legal texts, thus contributing to the broader application of AI in legal studies and practice.", }
The intersection of legal reasoning and Natural Language Processing (NLP) technologies, particularly Large Language Models (LLMs), offers groundbreaking potential for augmenting human capabilities in the legal domain. This paper presents our approach and findings from participating in SemEval-2024 Task 5, focusing on the effect of legal reasoning prompts on argument reasoning in civil procedure. We investigated the impact of structured legal reasoning methodologies, including TREACC, IRAC, IRAAC, and MIRAC, on guiding LLMs to analyze and evaluate legal arguments systematically. Our experimental setup involved crafting specific prompts based on these methodologies to instruct the LLM to dissect and scrutinize legal cases, aiming to discern the cogency of argumentative solutions within a zero-shot learning framework. The performance of our approach, as measured by F1 score and accuracy, demonstrated the efficacy of integrating structured legal reasoning into LLMs for legal analysis. The findings underscore the promise of LLMs, when equipped with legal reasoning prompts, in enhancing their ability to process and reason through complex legal texts, thus contributing to the broader application of AI in legal studies and practice. (An illustrative IRAC-style prompt template follows this record.)
[ "Lu, Yu-an", "Kao, Hung-yu" ]
0x.Yuan at SemEval-2024 Task 5: Enhancing Legal Argument Reasoning with Structured Prompts
semeval-1.60
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
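The paper's prompts are not reproduced in this record; the following is an illustrative sketch of what an IRAC-style scaffold could look like, with all wording invented for the example:

```python
def irac_prompt(case_text: str, candidate_answer: str) -> str:
    """Wrap a civil-procedure question in an IRAC-style scaffold.
    The phrasing is hypothetical, not the authors' actual prompt."""
    return (
        "Analyse the following civil-procedure question using IRAC.\n"
        f"Case:\n{case_text}\n\n"
        f"Candidate answer:\n{candidate_answer}\n\n"
        "Issue: state the legal issue.\n"
        "Rule: state the governing rule.\n"
        "Application: apply the rule to the facts.\n"
        "Conclusion: answer 'correct' or 'incorrect' for the candidate."
    )

print(irac_prompt("Plaintiff files in federal court ...",
                  "The motion is timely."))
```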
https://aclanthology.org/2024.semeval-1.61.bib
https://aclanthology.org/2024.semeval-1.61/
@inproceedings{brekhof-etal-2024-groningen, title = "{G}roningen team {D} at {S}em{E}val-2024 Task 8: Exploring data generation and a combined model for fine-tuning {LLM}s for Multidomain Machine-Generated Text Detection", author = "Brekhof, Thijs and Liu, Xuanyi and Ruitenbeek, Joris and Top, Niels and Zhou, Yuwen", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.61", doi = "10.18653/v1/2024.semeval-1.61", pages = "391--398", abstract = "In this system description, we describe our process and the systems that we created for the subtasks A monolingual, A multilingual, and B forthe SemEval-2024 Task 8: Multigenerator, Multidomain, and Multilingual Black-Box MachineGenerated Text Detection. This shared task aimsat detecting and differentiating between machinegenerated text and human-written text. SubtaskA is focused on detecting if a text is machinegenerated or human-written both in a monolingualand a multilingual setting. Subtask B is also focused on detecting if a text is human-written ormachine-generated, though it takes it one step further by also requiring the detection of the correct language model used for generating the text.For the monolingual aspects of this task, our approach is centered around fine-tuning a debertav3-large LM. For the multilingual setting, we created an ensemble model utilizing different monolingual models and a language identification toolto classify each text. We also experiment with thegeneration of extra training data. Our results showthat the generation of extra data aids our modelsand leads to an increase in accuracy.", }
In this system description, we describe our process and the systems that we created for subtasks A monolingual, A multilingual, and B of SemEval-2024 Task 8: Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection. This shared task aims at detecting and differentiating between machine-generated text and human-written text. Subtask A is focused on detecting whether a text is machine-generated or human-written, in both a monolingual and a multilingual setting. Subtask B is also focused on detecting whether a text is human-written or machine-generated, though it takes this one step further by also requiring the detection of the language model used for generating the text. For the monolingual aspects of this task, our approach is centered around fine-tuning a DeBERTa-v3-large LM. For the multilingual setting, we created an ensemble model utilizing different monolingual models and a language identification tool to classify each text. We also experiment with the generation of extra training data. Our results show that the generation of extra data aids our models and leads to an increase in accuracy. (A minimal language-routing sketch follows this record.)
[ "Brekhof, Thijs", "Liu, Xuanyi", "Ruitenbeek, Joris", "Top, Niels", "Zhou, Yuwen" ]
Groningen team D at SemEval-2024 Task 8: Exploring data generation and a combined model for fine-tuning LLMs for Multidomain Machine-Generated Text Detection
semeval-1.61
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
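A minimal sketch of routing texts to monolingual detectors with a language-identification tool. Here `langdetect` stands in for whichever tool the team used, and the per-language models are dummies:

```python
from langdetect import detect  # pip install langdetect

def route_and_classify(text: str, models: dict, default_lang: str = "en") -> int:
    """Send a text to the monolingual detector for its language,
    falling back to the English model (scheme is a sketch, not
    the team's exact configuration)."""
    try:
        lang = detect(text)
    except Exception:
        lang = default_lang
    model = models.get(lang, models[default_lang])
    return model(text)  # each model maps text -> {0: human, 1: machine}

models = {"en": lambda t: 1, "de": lambda t: 0}  # dummy per-language models
print(route_and_classify("Dies ist ein Beispieltext.", models))
```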
https://aclanthology.org/2024.semeval-1.62.bib
https://aclanthology.org/2024.semeval-1.62/
@inproceedings{cao-etal-2024-kathlalu, title = "Kathlalu at {S}em{E}val-2024 Task 8: A Comparative Analysis of Binary Classification Methods for Distinguishing Between Human and Machine-generated Text", author = "Cao, Lujia and Kilic, Ece Lara and Will, Katharina", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.62", doi = "10.18653/v1/2024.semeval-1.62", pages = "399--402", abstract = "This paper investigates two methods for constructing a binary classifier to distinguish between human-generated and machine-generated text. The main emphasis is on a straightforward approach based on Zipf{'}s law, which, despite its simplicity, achieves a moderate level of performance. Additionally, the paper briefly discusses experimentation with the utilization of unigram word counts.", }
This paper investigates two methods for constructing a binary classifier to distinguish between human-generated and machine-generated text. The main emphasis is on a straightforward approach based on Zipf{'}s law, which, despite its simplicity, achieves a moderate level of performance. Additionally, the paper briefly discusses experimentation with unigram word counts. (A minimal Zipf-slope sketch follows this record.)
[ "Cao, Lujia", "Kilic, Ece Lara", "Will, Katharina" ]
Kathlalu at SemEval-2024 Task 8: A Comparative Analysis of Binary Classification Methods for Distinguishing Between Human and Machine-generated Text
semeval-1.62
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
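A minimal sketch of the Zipf's-law approach: fit the slope of log frequency against log rank and compare it to a threshold (natural text tends toward a slope near -1). The threshold below is invented; the paper's actual decision rule may differ.

```python
import re
from collections import Counter
import numpy as np

def zipf_slope(text: str) -> float:
    """Fit log(freq) = s * log(rank) + a over the word-frequency spectrum."""
    freqs = sorted(Counter(re.findall(r"\w+", text.lower())).values(),
                   reverse=True)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), deg=1)
    return slope

def looks_machine_generated(text: str, threshold: float = -0.9) -> bool:
    # Hypothetical rule: a flatter curve than natural text is suspicious.
    return zipf_slope(text) > threshold

print(zipf_slope("the the the cat cat sat on on the mat"))
```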
https://aclanthology.org/2024.semeval-1.63.bib
https://aclanthology.org/2024.semeval-1.63/
@inproceedings{marchitan-etal-2024-team, title = "Team {U}nibuc - {NLP} at {S}em{E}val-2024 Task 8: Transformer and Hybrid Deep Learning Based Models for Machine-Generated Text Detection", author = "Marchitan, Teodor-george and Creanga, Claudiu and Dinu, Liviu P.", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.63", doi = "10.18653/v1/2024.semeval-1.63", pages = "403--411", abstract = "This paper describes the approach of the UniBuc - NLP team in tackling the SemEval 2024 Task 8: Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection. We explored transformer-based and hybrid deep learning architectures. For subtask B, our transformer-based model achieved a strong second-place out of 77 teams with an accuracy of 86.95{\%}, demonstrating the architecture{'}s suitability for this task. However, our models showed overfitting in subtask A which could potentially be fixed with less fine-tunning and increasing maximum sequence length. For subtask C (token-level classification), our hybrid model overfit during training, hindering its ability to detect transitions between human and machine-generated text.", }
This paper describes the approach of the UniBuc - NLP team in tackling the SemEval-2024 Task 8: Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection. We explored transformer-based and hybrid deep learning architectures. For subtask B, our transformer-based model achieved a strong second place out of 77 teams with an accuracy of 86.95{\%}, demonstrating the architecture{'}s suitability for this task. However, our models showed overfitting in subtask A, which could potentially be fixed with less fine-tuning and an increased maximum sequence length. For subtask C (token-level classification), our hybrid model overfit during training, hindering its ability to detect transitions between human and machine-generated text.
[ "Marchitan, Teodor-george", "Creanga, Claudiu", "Dinu, Liviu P." ]
Team Unibuc - NLP at SemEval-2024 Task 8: Transformer and Hybrid Deep Learning Based Models for Machine-Generated Text Detection
semeval-1.63
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.64.bib
https://aclanthology.org/2024.semeval-1.64/
@inproceedings{alexandru-etal-2024-linguistech, title = "{L}inguis{T}ech at {S}em{E}val-2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversation", author = "Alexandru, Mihaela and Ciocoiu, C{\u{a}}lina and M{\u{a}}niga, Ioana and Ungureanu, Octavian and G{\^\i}fu, Daniela and Trand{\u{a}}b{\u{a}}{\textcommabelow{t}}, Diana", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.64", doi = "10.18653/v1/2024.semeval-1.64", pages = "412--419", abstract = "The {``}Emotion Discovery and Reasoning Its Flip in Conversation{''} task at the SemEval 2024 competition focuses on the automatic recognition of emotion flips, triggered within multi-party textual conversations. This paper proposes a novel approach that draws a parallel between a mixed strategy and a comparative strategy, contrasting a Rule-Based Function with Named Entity Recognition (NER){---}an approach that shows promise in understanding speaker-specific emotional dynamics. Furthermore, this method surpasses the performance of both DistilBERT and RoBERTa models, demonstrating competitive effectiveness in detecting emotion flips triggered in multi-party textual conversations, achieving a 70{\%} F1-score. This system was ranked 6th in the SemEval 2024 competition for Subtask 3.", }
The {``}Emotion Discovery and Reasoning Its Flip in Conversation{''} task at the SemEval 2024 competition focuses on the automatic recognition of emotion flips, triggered within multi-party textual conversations. This paper proposes a novel approach that draws a parallel between a mixed strategy and a comparative strategy, contrasting a Rule-Based Function with Named Entity Recognition (NER){---}an approach that shows promise in understanding speaker-specific emotional dynamics. Furthermore, this method surpasses the performance of both DistilBERT and RoBERTa models, demonstrating competitive effectiveness in detecting emotion flips triggered in multi-party textual conversations, achieving a 70{\%} F1-score. This system was ranked 6th in the SemEval 2024 competition for Subtask 3.
[ "Alex", "ru, Mihaela", "Ciocoiu, C{\\u{a}}lina", "M{\\u{a}}niga, Ioana", "Ungureanu, Octavian", "G{\\^\\i}fu, Daniela", "Tr", "{\\u{a}}b{\\u{a}}{\\textcommabelow{t}}, Diana" ]
LinguisTech at SemEval-2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversation
semeval-1.64
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.65.bib
https://aclanthology.org/2024.semeval-1.65/
@inproceedings{keinan-2024-text, title = "Text Mining at {S}em{E}val-2024 Task 1: Evaluating Semantic Textual Relatedness in Low-resource Languages using Various Embedding Methods and Machine Learning Regression Models", author = "Keinan, Ron", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.65", doi = "10.18653/v1/2024.semeval-1.65", pages = "420--431", abstract = "In this paper, I describe my submission to the SemEval-2024 contest. I tackled subtask 1 - {``}Semantic Textual Relatedness for African and Asian Languages{''}. To find the semantic relatedness of sentence pairs, I tackled this task by creating models for nine different languages. I then vectorized the text data using a variety of embedding techniques including doc2vec, tf-idf, Sentence-Transformers, Bert, Roberta, and more, and used 11 traditional machine learning techniques of the regression type for analysis and evaluation.", }
In this paper, I describe my submission to the SemEval-2024 contest. I tackled subtask 1 - {``}Semantic Textual Relatedness for African and Asian Languages{''}. To find the semantic relatedness of sentence pairs, I created models for nine different languages. I vectorized the text data using a variety of embedding techniques, including doc2vec, TF-IDF, Sentence-Transformers, BERT, RoBERTa, and more, and used 11 traditional machine learning techniques of the regression type for analysis and evaluation. (A minimal embedding-regressor comparison sketch follows this record.)
[ "Keinan, Ron" ]
Text Mining at SemEval-2024 Task 1: Evaluating Semantic Textual Relatedness in Low-resource Languages using Various Embedding Methods and Machine Learning Regression Models
semeval-1.65
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
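A compact sketch of the embedding-times-regressor grid described above, using TF-IDF over joined sentence pairs (the joining scheme and model subset are assumptions) and Spearman correlation for evaluation:

```python
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

# Toy pairs joined with a separator; the paper's encoding may differ.
pairs = ["a cat sleeps [SEP] a kitten naps",
         "a cat sleeps [SEP] stocks fell",
         "rain is falling [SEP] it is raining"]
y = [0.9, 0.1, 0.95]  # toy relatedness scores

X = TfidfVectorizer().fit_transform(pairs)
for reg in (Ridge(), SVR(), RandomForestRegressor(random_state=0)):
    pred = reg.fit(X, y).predict(X)  # train-set fit, illustration only
    print(type(reg).__name__, spearmanr(pred, y).correlation)
```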
https://aclanthology.org/2024.semeval-1.66.bib
https://aclanthology.org/2024.semeval-1.66/
@inproceedings{fahfouh-etal-2024-usmba, title = "{USMBA}-{NLP} at {S}em{E}val-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials using Bert", author = "Fahfouh, Anass and Benlahbib, Abdessamad and Riffi, Jamal and Tairi, Hamid", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.66", doi = "10.18653/v1/2024.semeval-1.66", pages = "432--436", abstract = "This paper presents the application of BERT inSemEval 2024 Task 2, Safe Biomedical Natu-ral Language Inference for Clinical Trials. Themain objectives of this task were: First, to in-vestigate the consistency of BERT in its rep-resentation of semantic phenomena necessaryfor complex inference in clinical NLI settings.Second, to investigate the ability of BERT toperform faithful reasoning, i.e., make correctpredictions for the correct reasons. The submit-ted model is fine-tuned on the NLI4CT dataset,which is enhanced with a novel contrast set,using binary cross entropy loss.", }
This paper presents the application of BERT in SemEval-2024 Task 2, Safe Biomedical Natural Language Inference for Clinical Trials. The main objectives of this task were: first, to investigate the consistency of BERT in its representation of semantic phenomena necessary for complex inference in clinical NLI settings; second, to investigate the ability of BERT to perform faithful reasoning, i.e., make correct predictions for the correct reasons. The submitted model is fine-tuned on the NLI4CT dataset, which is enhanced with a novel contrast set, using binary cross-entropy loss.
[ "Fahfouh, Anass", "Benlahbib, Abdessamad", "Riffi, Jamal", "Tairi, Hamid" ]
USMBA-NLP at SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials using Bert
semeval-1.66
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.67.bib
https://aclanthology.org/2024.semeval-1.67/
@inproceedings{brutti-mairesse-verlingue-2024-crcl, title = "{CRCL} at {S}em{E}val-2024 Task 2: Simple prompt optimizations", author = "Brutti-mairesse, Clement and Verlingue, Loic", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.67", doi = "10.18653/v1/2024.semeval-1.67", pages = "437--442", abstract = "We present a baseline for the SemEval 2024 task 2 challenge, whose objective is to ascertain the inference relationship between pairs of clinical trial report sections and statements.We apply prompt optimization techniques with LLM Instruct models provided as a Language Model-as-a-Service (LMaaS).We observed, in line with recent findings, that synthetic CoT prompts significantly enhance manually crafted ones.The source code is available at this GitHub repository https://github.com/ClementBM-CLB/semeval-2024", }
We present a baseline for the SemEval-2024 Task 2 challenge, whose objective is to ascertain the inference relationship between pairs of clinical trial report sections and statements. We apply prompt optimization techniques with LLM Instruct models provided as a Language-Model-as-a-Service (LMaaS). We observed, in line with recent findings, that synthetic CoT prompts significantly improve on manually crafted ones. The source code is available at this GitHub repository: https://github.com/ClementBM-CLB/semeval-2024
[ "Brutti-mairesse, Clement", "Verlingue, Loic" ]
CRCL at SemEval-2024 Task 2: Simple prompt optimizations
semeval-1.67
Poster
2405.01942
[ "https://github.com/clementbm-clb/semeval-2024" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.68.bib
https://aclanthology.org/2024.semeval-1.68/
@inproceedings{anghelina-etal-2024-sutealbastre, title = "{S}ute{A}lbastre at {S}em{E}val-2024 Task 4: Predicting Propaganda Techniques in Multilingual Memes using Joint Text and Vision Transformers", author = "Anghelina, Ion and Bu{\textcommabelow{t}}{\u{a}}, Gabriel and Enache, Alexandru", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.68", doi = "10.18653/v1/2024.semeval-1.68", pages = "443--449", abstract = "The main goal of this year{'}s SemEval Task 4 isdetecting the presence of persuasion techniquesin various meme formats. While Subtask 1targets text-only posts, Subtask 2, subsectionsa and b tackle posts containing both imagesand captions. The first 2 subtasks consist ofmulti-class and multi-label classifications, inthe context of a hierarchical taxonomy of 22different persuasion techniques.This paper proposes a solution for persuasiondetection in both these scenarios and for vari-ous languages of the caption text. Our team{'}smain approach consists of a Multimodal Learn-ing Neural Network architecture, having Tex-tual and Vision Transformers as its backbone.The models that we have experimented with in-clude EfficientNet and ViT as visual encodersand BERT and GPT2 as textual encoders.", }
The main goal of this year{'}s SemEval Task 4 is detecting the presence of persuasion techniques in various meme formats. While Subtask 1 targets text-only posts, Subtask 2, subsections a and b, tackles posts containing both images and captions. The first 2 subtasks consist of multi-class and multi-label classifications, in the context of a hierarchical taxonomy of 22 different persuasion techniques. This paper proposes a solution for persuasion detection in both these scenarios and for various languages of the caption text. Our team{'}s main approach consists of a Multimodal Learning Neural Network architecture, having Textual and Vision Transformers as its backbone. The models that we have experimented with include EfficientNet and ViT as visual encoders and BERT and GPT-2 as textual encoders.
[ "Anghelina, Ion", "Bu{\\textcommabelow{t}}{\\u{a}}, Gabriel", "Enache, Alex", "ru" ]
SuteAlbastre at SemEval-2024 Task 4: Predicting Propaganda Techniques in Multilingual Memes using Joint Text and Vision Transformers
semeval-1.68
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.69.bib
https://aclanthology.org/2024.semeval-1.69/
@inproceedings{heydari-rad-etal-2024-rfbes, title = "{RFBES} at {S}em{E}val-2024 Task 8: Investigating Syntactic and Semantic Features for Distinguishing {AI}-Generated and Human-Written Texts", author = "Heydari Rad, Mohammad and Farsi, Farhan and Bali, Shayan and Etezadi, Romina and Shamsfard, Mehrnoush", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.69", doi = "10.18653/v1/2024.semeval-1.69", pages = "450--454", abstract = "Nowadays, the usage of Large Language Models (LLMs) has increased, and LLMs have been used to generate texts in different languages and for different tasks. Additionally, due to the participation of remarkable companies such as Google and OpenAI, LLMs are now more accessible, and people can easily use them. However, an important issue is how we can detect AI-generated texts from human-written ones. In this article, we have investigated the problem of AI-generated text detection from two different aspects: semantics and syntax. Finally, we presented an AI model that can distinguish AI-generated texts from human-written ones with high accuracy on both multilingual and monolingual tasks using the M4 dataset. According to our results, using a semantic approach would be more helpful for detection. However, there is a lot of room for improvement in the syntactic approach, and it would be a good approach for future work.", }
Nowadays, the usage of Large Language Models (LLMs) has increased, and LLMs have been used to generate texts in different languages and for different tasks. Additionally, due to the involvement of major companies such as Google and OpenAI, LLMs are now more accessible, and people can easily use them. However, an important issue is how we can detect AI-generated texts and distinguish them from human-written ones. In this article, we investigate the problem of AI-generated text detection from two different aspects: semantics and syntax. Finally, we present an AI model that can distinguish AI-generated texts from human-written ones with high accuracy on both multilingual and monolingual tasks, using the M4 dataset. According to our results, the semantic approach is more helpful for detection, while there is substantial room for improvement in the syntactic approach, making it a good direction for future work.
[ "Heydari Rad, Mohammad", "Farsi, Farhan", "Bali, Shayan", "Etezadi, Romina", "Shamsfard, Mehrnoush" ]
RFBES at SemEval-2024 Task 8: Investigating Syntactic and Semantic Features for Distinguishing AI-Generated and Human-Written Texts
semeval-1.69
Poster
2402.14838
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.70.bib
https://aclanthology.org/2024.semeval-1.70/
@inproceedings{vasconcelos-etal-2024-bambas, title = "{BAMBAS} at {S}em{E}val-2024 Task 4: How far can we get without looking at hierarchies?", author = "Vasconcelos, Arthur and De Melo, Luiz Felipe and Goncalves, Eduardo and Bezerra, Eduardo and Paes, Aline and Plastino, Alexandre", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.70", doi = "10.18653/v1/2024.semeval-1.70", pages = "455--462", abstract = "This paper describes the BAMBAS team{'}s participation in SemEval-2024 Task 4 Subtask 1, which focused on the multilabel classification of persuasion techniques in the textual content of Internet memes. We explored a lightweight approach that does not consider the hierarchy of labels. First, we get the text embeddings leveraging the multilingual tweets-based language model, Bernice. Next, we use those embeddings to train a separate binary classifier for each label, adopting independent oversampling strategies in each model in a binary-relevance style. We tested our approach over the English dataset, exceeding the baseline by 21 percentage points, while ranking in 23th in terms of hierarchical F1 and 11st in terms of hierarchical recall.", }
This paper describes the BAMBAS team{'}s participation in SemEval-2024 Task 4 Subtask 1, which focused on the multilabel classification of persuasion techniques in the textual content of Internet memes. We explored a lightweight approach that does not consider the hierarchy of labels. First, we obtain text embeddings from Bernice, a multilingual tweet-based language model. Next, we use those embeddings to train a separate binary classifier for each label, adopting independent oversampling strategies in each model in a binary-relevance style. We tested our approach on the English dataset, exceeding the baseline by 21 percentage points, while ranking 23rd in terms of hierarchical F1 and 11th in terms of hierarchical recall. (A minimal binary-relevance sketch follows this record.)
[ "Vasconcelos, Arthur", "De Melo, Luiz Felipe", "Goncalves, Eduardo", "Bezerra, Eduardo", "Paes, Aline", "Plastino, Alex", "re" ]
BAMBAS at SemEval-2024 Task 4: How far can we get without looking at hierarchies?
semeval-1.70
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
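A minimal sketch of binary relevance with independent oversampling: one classifier per persuasion label, each trained on its own rebalanced copy of the data. The features and label matrix below are random stand-ins for the Bernice embeddings and the real annotations.

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler  # pip install imbalanced-learn
from sklearn.linear_model import LogisticRegression

def fit_binary_relevance(X: np.ndarray, Y: np.ndarray) -> list:
    """Train one independently oversampled binary classifier per label."""
    models = []
    for j in range(Y.shape[1]):
        Xj, yj = RandomOverSampler(random_state=0).fit_resample(X, Y[:, j])
        models.append(LogisticRegression(max_iter=1000).fit(Xj, yj))
    return models

rng = np.random.default_rng(0)
X = rng.random((20, 8))            # stand-in for Bernice embeddings
Y = np.zeros((20, 3), dtype=int)   # three imbalanced persuasion labels
Y[:3, 0] = 1
Y[:5, 1] = 1
Y[:2, 2] = 1
models = fit_binary_relevance(X, Y)
print(np.column_stack([m.predict(X) for m in models]).shape)  # (20, 3)
```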
https://aclanthology.org/2024.semeval-1.71.bib
https://aclanthology.org/2024.semeval-1.71/
@inproceedings{xu-etal-2024-team, title = "Team {QUST} at {S}em{E}val-2024 Task 8: A Comprehensive Study of Monolingual and Multilingual Approaches for Detecting {AI}-generated Text", author = "Xu, Xiaoman and Li, Xiangrun and Wang, Taihang and Tian, Jianxiang and Jiang, Ye", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.71", doi = "10.18653/v1/2024.semeval-1.71", pages = "463--470", abstract = "This paper presents the participation of team QUST in Task 8 SemEval 2024. we first performed data augmentation and cleaning on the dataset to enhance model training efficiency and accuracy. In the monolingual task, we evaluated traditional deep-learning methods, multiscale positive-unlabeled framework (MPU), fine-tuning, adapters and ensemble methods. Then, we selected the top-performing models based on their accuracy from the monolingual models and evaluated them in subtasks A and B. The final model construction employed a stacking ensemble that combined fine-tuning with MPU. Our system achieved 6th (scored 6th in terms of accuracy, officially ranked 13th in order) place in the official test set in multilingual settings of subtask A. We release our system code at:https://github.com/warmth27/SemEval2024{\_}QUST", }
This paper presents the participation of team QUST in SemEval-2024 Task 8. We first performed data augmentation and cleaning on the dataset to enhance model training efficiency and accuracy. In the monolingual task, we evaluated traditional deep-learning methods, the multiscale positive-unlabeled (MPU) framework, fine-tuning, adapters, and ensemble methods. Then, we selected the top-performing models based on their accuracy among the monolingual models and evaluated them in subtasks A and B. The final model construction employed a stacking ensemble that combined fine-tuning with MPU. Our system achieved 6th place (6th in terms of accuracy; officially ranked 13th) on the official test set in the multilingual setting of subtask A. We release our system code at: https://github.com/warmth27/SemEval2024{\_}QUST
[ "Xu, Xiaoman", "Li, Xiangrun", "Wang, Taihang", "Tian, Jianxiang", "Jiang, Ye" ]
Team QUST at SemEval-2024 Task 8: A Comprehensive Study of Monolingual and Multilingual Approaches for Detecting AI-generated Text
semeval-1.71
Poster
2402.11934
[ "https://github.com/warmth27/semeval2024_qust" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.72.bib
https://aclanthology.org/2024.semeval-1.72/
@inproceedings{wang-etal-2024-ynu, title = "{YNU}-{HPCC} at {S}em{E}val-2024 Task 9: Using Pre-trained Language Models with {L}o{RA} for Multiple-choice Answering Tasks", author = "Wang, Jie and Wang, Jin and Zhang, Xuejie", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.72", doi = "10.18653/v1/2024.semeval-1.72", pages = "471--476", abstract = "This study describes the model built in Task 9: brainteaser in the SemEval-2024 competition, which is a multiple-choice task. As active participants in Task 9, our system strategically employs the decoding-enhanced BERT (DeBERTa) architecture enriched with disentangled attention mechanisms. Additionally, we fine-tuned our model using low-rank adaptation (LoRA) to optimize its performance further. Moreover, we integrate focal loss into our framework to address label imbalance issues. The systematic integration of these techniques has resulted in outstanding performance metrics. Upon evaluation using the provided test dataset, our system showcases commendable results, with a remarkable accuracy score of 0.9 for subtask 1, positioning us fifth among all participants. Similarly, for subtask 2, our system exhibits a substantial accuracy rate of 0.781, securing a commendable seventh-place ranking. The code for this paper is published at: https://github.com/123yunnandaxue/Semveal-2024{\_}task9.", }
This study describes the model built for Task 9 (BRAINTEASER), a multiple-choice task in the SemEval-2024 competition. Our system strategically employs the decoding-enhanced BERT (DeBERTa) architecture, enriched with disentangled attention mechanisms. Additionally, we fine-tuned our model using low-rank adaptation (LoRA) to further optimize its performance, and we integrate focal loss into our framework to address label-imbalance issues. The systematic integration of these techniques resulted in strong performance. Upon evaluation on the provided test dataset, our system achieves an accuracy of 0.9 on subtask 1, positioning us fifth among all participants, and an accuracy of 0.781 on subtask 2, securing a seventh-place ranking. The code for this paper is published at: https://github.com/123yunnandaxue/Semveal-2024{\_}task9.
[ "Wang, Jie", "Wang, Jin", "Zhang, Xuejie" ]
YNU-HPCC at SemEval-2024 Task 9: Using Pre-trained Language Models with LoRA for Multiple-choice Answering Tasks
semeval-1.72
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
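Two of the ingredients named in the YNU-HPCC record, LoRA adapters and focal loss, are compact enough to sketch. The snippet below is illustrative only: the backbone checkpoint, LoRA rank, and gamma are assumed values, and DeBERTa-v3's attention projections are targeted by their module names.

```python
import torch.nn.functional as F
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForMultipleChoice

base = AutoModelForMultipleChoice.from_pretrained("microsoft/deberta-v3-base")
# DeBERTa-v2/v3 name their attention projections query_proj/value_proj.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.1,
                  target_modules=["query_proj", "value_proj"])
model = get_peft_model(base, lora)   # only adapter weights are trainable

def focal_loss(logits, targets, gamma=2.0):
    """(1 - p_t)^gamma weighted cross-entropy; down-weights easy choices."""
    log_p = F.log_softmax(logits, dim=-1)                  # (B, n_choices)
    p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    ce = F.nll_loss(log_p, targets, reduction="none")
    return ((1.0 - p_t) ** gamma * ce).mean()
```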
https://aclanthology.org/2024.semeval-1.73.bib
https://aclanthology.org/2024.semeval-1.73/
@inproceedings{larson-tyers-2024-team, title = "Team jelarson at {S}em{E}val 2024 Task 8: Predicting Boundary Line Between Human and Machine Generated Text", author = "Larson, Joseph and Tyers, Francis", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.73", doi = "10.18653/v1/2024.semeval-1.73", pages = "477--484", abstract = "In this paper, we handle the task of building a system that, given a document written first by a human and then finished by an LLM, the system must determine the transition word i.e. where the machine begins to write. We built a system by examining the data for textual anomalies and combining a method of heuristic approaches with a linear regression model based on the text length of each document.", }
In this paper, we tackle the task of building a system that, given a document written first by a human and then finished by an LLM, determines the transition word, i.e., where the machine begins to write. We built our system by examining the data for textual anomalies and combining heuristic approaches with a linear regression model based on the text length of each document.
[ "Larson, Joseph", "Tyers, Francis" ]
Team jelarson at SemEval 2024 Task 8: Predicting Boundary Line Between Human and Machine Generated Text
semeval-1.73
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
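The jelarson system above reduces boundary prediction to a regression on document length. A minimal sketch under that reading follows; the word-level feature and the clamping rule are assumptions, since the abstract does not detail the heuristics.

```python
# Predict the human-to-machine transition index from document length.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_boundary_model(docs, boundary_indices):
    X = np.array([[len(d.split())] for d in docs])      # length in words
    return LinearRegression().fit(X, boundary_indices)

def predict_boundary(model, doc):
    n = len(doc.split())
    idx = int(round(model.predict([[n]])[0]))
    return min(max(idx, 0), n - 1)                      # clamp to a valid word
```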
https://aclanthology.org/2024.semeval-1.74.bib
https://aclanthology.org/2024.semeval-1.74/
@inproceedings{roy-dipta-shahriar-2024-hu, title = "{HU} at {S}em{E}val-2024 Task 8{A}: Can Contrastive Learning Learn Embeddings to Detect Machine-Generated Text?", author = "Roy Dipta, Shubhashis and Shahriar, Sadat", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.74", doi = "10.18653/v1/2024.semeval-1.74", pages = "485--491", abstract = "This paper describes our system developed for SemEval-2024 Task 8, {``}Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection.{''} Machine-generated texts have been one of the main concerns due to the use of large language models (LLM) in fake text generation, phishing, cheating in exams, or even plagiarizing copyright materials. A lot of systems have been developed to detect machine-generated text. Nonetheless, the majority of these systems rely on the text-generating model. This limitation is impractical in real-world scenarios, as it{'}s often impossible to know which specific model the user has used for text generation. In this work, we propose a single model based on contrastive learning, which uses {\textasciitilde}40{\%} of the baseline{'}s parameters (149M vs. 355M) but shows a comparable performance on the test dataset (21st out of 137 participants). Our key finding is that even without an ensemble of multiple models, a single base model can have comparable performance with the help of data augmentation and contrastive learning.", }
This paper describes our system developed for SemEval-2024 Task 8, {``}Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection.{''} Machine-generated texts have become a major concern due to the use of large language models (LLMs) in fake text generation, phishing, cheating on exams, or even plagiarizing copyrighted materials. Many systems have been developed to detect machine-generated text; nonetheless, the majority rely on knowledge of the text-generating model. This limitation is impractical in real-world scenarios, as it{'}s often impossible to know which specific model the user has used for text generation. In this work, we propose a single model based on contrastive learning, which uses {\textasciitilde}40{\%} of the baseline{'}s parameters (149M vs. 355M) yet shows comparable performance on the test dataset (21st out of 137 participants). Our key finding is that even without an ensemble of multiple models, a single base model can achieve comparable performance with the help of data augmentation and contrastive learning.
[ "Roy Dipta, Shubhashis", "Shahriar, Sadat" ]
HU at SemEval-2024 Task 8A: Can Contrastive Learning Learn Embeddings to Detect Machine-Generated Text?
semeval-1.74
Poster
2402.11815
[ "https://github.com/dipta007/semeval24-task8" ]
-1
-1
-1
-1
0
[]
[]
[]
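The HU record hinges on contrastive learning over text embeddings. One plausible instantiation is a supervised contrastive loss, sketched below; the temperature and pairing scheme are assumptions, as the abstract does not fix the exact objective.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Pulls same-label texts together, pushes different-label texts apart."""
    z = F.normalize(embeddings, dim=1)                      # (B, H)
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    logits = (z @ z.T / temperature).masked_fill(self_mask, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)         # avoid -inf * 0
    pos = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask).float()
    # Mean log-likelihood of each anchor's positives, averaged over anchors.
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```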
https://aclanthology.org/2024.semeval-1.75.bib
https://aclanthology.org/2024.semeval-1.75/
@inproceedings{wei-2024-team, title = "Team {AT} at {S}em{E}val-2024 Task 8: Machine-Generated Text Detection with Semantic Embeddings", author = "Wei, Yuchen", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.75", doi = "10.18653/v1/2024.semeval-1.75", pages = "492--496", abstract = "This study investigates the detection of machine-generated text using several semantic embedding techniques, a critical issue in the era of advanced language models. Different methodologies were examined: GloVe embeddings, N-gram embedding models, Sentence BERT, and a concatenated embedding approach, against a fine-tuned RoBERTa baseline. The research was conducted within the framework of SemEval-2024 Task 8, encompassing tasks for binary and multi-class classification of machine-generated text.", }
This study investigates the detection of machine-generated text using several semantic embedding techniques, a critical issue in the era of advanced language models. Several methodologies were examined against a fine-tuned RoBERTa baseline: GloVe embeddings, n-gram embedding models, Sentence-BERT, and a concatenated embedding approach. The research was conducted within the framework of SemEval-2024 Task 8, encompassing tasks for binary and multi-class classification of machine-generated text.
[ "Wei, Yuchen" ]
Team AT at SemEval-2024 Task 8: Machine-Generated Text Detection with Semantic Embeddings
semeval-1.75
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
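The concatenated-embedding approach in the Team AT record can be sketched by joining dense sentence vectors with sparse n-gram features before a linear classifier. The model id and TF-IDF settings below are illustrative assumptions.

```python
from scipy.sparse import hstack
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

sbert = SentenceTransformer("all-MiniLM-L6-v2")            # assumed checkpoint
ngrams = TfidfVectorizer(analyzer="char", ngram_range=(2, 4), max_features=20000)

def featurize(texts, fit=False):
    dense = sbert.encode(texts)                            # (N, 384) vectors
    sparse = ngrams.fit_transform(texts) if fit else ngrams.transform(texts)
    return hstack([sparse, dense]).tocsr()                 # concatenated view

def train(texts, labels):
    return LogisticRegression(max_iter=1000).fit(featurize(texts, fit=True), labels)
```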
https://aclanthology.org/2024.semeval-1.76.bib
https://aclanthology.org/2024.semeval-1.76/
@inproceedings{liu-etal-2024-jn666, title = "{JN}666 at {S}em{E}val-2024 Task 7: {N}um{E}val: Numeral-Aware Language Understanding and Generation", author = "Liu, Xinyi and Liu, Xintong and Lu, Hengyang", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.76", doi = "10.18653/v1/2024.semeval-1.76", pages = "497--502", abstract = "This paper is submitted for SemEval-2027 task 7: Enhancing the Model{'}s Understanding and Generation of Numerical Values. The dataset for this task is NQuAD, which requires us to select the most suitable option number from four numerical options to fill in the blank in a news article based on the context. Based on the BertForMultipleChoice model, we proposed two new models, MC BERT and SSC BERT, and improved the model{'}s numerical understanding ability by pre-training the model on numerical comparison tasks. Ultimately, our best-performing model achieved an accuracy rate of 79.40{\%}, which is 9.45{\%} higher than the accuracy rate of NEMo.", }
This paper is submitted for SemEval-2024 Task 7: enhancing models{'} understanding and generation of numerical values. The dataset for this task is NQuAD, which requires us to select the most suitable of four numerical options to fill in the blank in a news article, based on the context. Building on the BertForMultipleChoice model, we proposed two new models, MC BERT and SSC BERT, and improved the models{'} numerical understanding by pre-training them on numerical comparison tasks. Ultimately, our best-performing model achieved an accuracy rate of 79.40{\%}, which is 9.45{\%} higher than the accuracy rate of NEMo.
[ "Liu, Xinyi", "Liu, Xintong", "Lu, Hengyang" ]
JN666 at SemEval-2024 Task 7: NumEval: Numeral-Aware Language Understanding and Generation
semeval-1.76
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.77.bib
https://aclanthology.org/2024.semeval-1.77/
@inproceedings{mahmoud-nakov-2024-bertastic, title = "{BERT}astic at {S}em{E}val-2024 Task 4: State-of-the-Art Multilingual Propaganda Detection in Memes via Zero-Shot Learning with Vision-Language Models", author = "Mahmoud, Tarek and Nakov, Preslav", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.77", doi = "10.18653/v1/2024.semeval-1.77", pages = "503--510", abstract = "Analyzing propagandistic memes in a multilingual, multimodal dataset is a challenging problem due to the inherent complexity of memes{'} multimodal content, which combines images, text, and often, nuanced context. In this paper, we use a VLM in a zero-shot approach to detect propagandistic memes and achieve a state-of-the-art average macro F1 of 66.7{\%} over all languages. Notably, we outperform other systems on North Macedonian memes, and obtain competitive results on Bulgarian and Arabic memes. We also present our early fusion approach for identifying persuasion techniques in memes in a hierarchical multilabel classification setting. This approach outperforms all other approaches in average hierarchical precision with an average score of 77.66{\%}. The systems presented contribute to the evolving field of research on the detection of persuasion techniques in multimodal datasets by offering insights that could be of use in the development of more effective tools for combating online propaganda.", }
Analyzing propagandistic memes in a multilingual, multimodal dataset is a challenging problem due to the inherent complexity of memes{'} multimodal content, which combines images, text, and often, nuanced context. In this paper, we use a VLM in a zero-shot approach to detect propagandistic memes and achieve a state-of-the-art average macro F1 of 66.7{\%} over all languages. Notably, we outperform other systems on North Macedonian memes, and obtain competitive results on Bulgarian and Arabic memes. We also present our early fusion approach for identifying persuasion techniques in memes in a hierarchical multilabel classification setting. This approach outperforms all other approaches in average hierarchical precision with an average score of 77.66{\%}. The systems presented contribute to the evolving field of research on the detection of persuasion techniques in multimodal datasets by offering insights that could be of use in the development of more effective tools for combating online propaganda.
[ "Mahmoud, Tarek", "Nakov, Preslav" ]
BERTastic at SemEval-2024 Task 4: State-of-the-Art Multilingual Propaganda Detection in Memes via Zero-Shot Learning with Vision-Language Models
semeval-1.77
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.78.bib
https://aclanthology.org/2024.semeval-1.78/
@inproceedings{kadiyala-2024-rkadiyala, title = "{RK}adiyala at {S}em{E}val-2024 Task 8: Black-Box Word-Level Text Boundary Detection in Partially Machine Generated Texts", author = "Kadiyala, Ram Mohan Rao", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.78", doi = "10.18653/v1/2024.semeval-1.78", pages = "511--519", abstract = "With increasing usage of generative models for text generation and widespread use of machine generated texts in various domains, being able to distinguish between human written and machine generated texts is a significant challenge. While existing models and proprietary systems focus on identifying whether given text is entirely human written or entirely machine generated, only a few systems provide insights at sentence or paragraph level at likelihood of being machine generated at a non reliable accuracy level, working well only for a set of domains and generators. This paper introduces few reliable approaches for the novel task of identifying which part of a given text is machine generated at a word level while comparing results from different approaches and methods. We present a comparison with proprietary systems , performance of our model on unseen domains{'} and generators{'} texts. The findings reveal significant improvements in detection accuracy along with comparison on other aspects of detection capabilities. Finally we discuss potential avenues for improvement and implications of our work. The proposed model is also well suited for detecting which parts of a text are machine generated in outputs of Instruct variants of many LLMs.", }
With the increasing use of generative models for text generation and the widespread presence of machine-generated texts in various domains, distinguishing between human-written and machine-generated texts is a significant challenge. While existing models and proprietary systems focus on identifying whether a given text is entirely human-written or entirely machine-generated, only a few systems estimate, at the sentence or paragraph level, the likelihood of text being machine-generated, and they do so with unreliable accuracy, working well only for a limited set of domains and generators. This paper introduces several reliable approaches for the novel task of identifying which part of a given text is machine-generated at the word level, comparing results across different approaches and methods. We present a comparison with proprietary systems and report the performance of our model on texts from unseen domains and generators. The findings reveal significant improvements in detection accuracy, along with comparisons on other aspects of detection capability. Finally, we discuss potential avenues for improvement and the implications of our work. The proposed model is also well suited for detecting which parts of a text are machine-generated in the outputs of Instruct variants of many LLMs.
[ "Kadiyala, Ram Mohan Rao" ]
RKadiyala at SemEval-2024 Task 8: Black-Box Word-Level Text Boundary Detection in Partially Machine Generated Texts
semeval-1.78
Poster
2410.16659
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
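One plausible reading of the word-level detection task above is token classification: label every token human (0) or machine (1) and report the first machine-labelled word. The sketch below assumes a DeBERTa backbone and an already fine-tuned head; it is not the authors' system.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

name = "microsoft/deberta-v3-base"   # assumed backbone; head needs fine-tuning
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=2)

def boundary_word(text):
    """Index of the first word whose tokens are predicted machine-generated."""
    enc = tokenizer(text.split(), is_split_into_words=True,
                    truncation=True, return_tensors="pt")
    with torch.no_grad():
        pred = model(**enc).logits.argmax(-1)[0]           # (T,) token labels
    for tok_idx, word_idx in enumerate(enc.word_ids()):
        if word_idx is not None and pred[tok_idx] == 1:
            return word_idx
    return None                                            # fully human text
```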
https://aclanthology.org/2024.semeval-1.79.bib
https://aclanthology.org/2024.semeval-1.79/
@inproceedings{das-etal-2024-tldr, title = "{TLDR} at {S}em{E}val-2024 Task 2: T5-generated clinical-Language summaries for {D}e{BERT}a Report Analysis", author = "Das, Spandan and Samuel, Vinay and Noroozizadeh, Shahriar", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.79", doi = "10.18653/v1/2024.semeval-1.79", pages = "520--529", abstract = "This paper introduces novel methodologies for the Natural Language Inference for Clinical Trials (NLI4CT) task. We present TLDR (T5-generated clinical-Language summaries for DeBERTa Report Analysis) which incorporates T5-model generated premise summaries for improved entailment and contradiction analysis in clinical NLI tasks. This approach overcomes the challenges posed by small context windows and lengthy premises, leading to a substantial improvement in Macro F1 scores: a 0.184 increase over truncated premises. Our comprehensive experimental evaluation, including detailed error analysis and ablations, confirms the superiority of TLDR in achieving consistency and faithfulness in predictions against semantically altered inputs.", }
This paper introduces novel methodologies for the Natural Language Inference for Clinical Trials (NLI4CT) task. We present TLDR (T5-generated clinical-Language summaries for DeBERTa Report Analysis) which incorporates T5-model generated premise summaries for improved entailment and contradiction analysis in clinical NLI tasks. This approach overcomes the challenges posed by small context windows and lengthy premises, leading to a substantial improvement in Macro F1 scores: a 0.184 increase over truncated premises. Our comprehensive experimental evaluation, including detailed error analysis and ablations, confirms the superiority of TLDR in achieving consistency and faithfulness in predictions against semantically altered inputs.
[ "Das, Sp", "an", "Samuel, Vinay", "Noroozizadeh, Shahriar" ]
TLDR at SemEval-2024 Task 2: T5-generated clinical-Language summaries for DeBERTa Report Analysis
semeval-1.79
Poster
2404.09136
[ "https://github.com/shahriarnz14/tldr-t5-generated-clinical-language-for-deberta-report-analysis" ]
-1
-1
-1
-1
0
[]
[]
[]
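The TLDR pipeline above chains a T5 summarizer with a DeBERTa NLI classifier. A minimal sketch with `transformers` pipelines follows; both checkpoints are stand-ins rather than the authors' fine-tuned models.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")          # assumed size
nli = pipeline("text-classification", model="microsoft/deberta-base-mnli")

def classify(premise, hypothesis, max_len=150):
    """Summarize the long premise, then score the premise/hypothesis pair."""
    short = summarizer(premise, max_length=max_len, truncation=True)
    short = short[0]["summary_text"]
    return nli({"text": short, "text_pair": hypothesis})
```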
https://aclanthology.org/2024.semeval-1.80.bib
https://aclanthology.org/2024.semeval-1.80/
@inproceedings{sun-zhou-2024-ignore, title = "ignore at {S}em{E}val-2024 Task 5: A Legal Classification Model with Summary Generation and Contrastive Learning", author = "Sun, Binjie and Zhou, Xiaobing", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.80", doi = "10.18653/v1/2024.semeval-1.80", pages = "530--535", abstract = "This paper describes our work for SemEval-2024 Task 5: The Legal Argument Reasoning Task in Civil Procedure. After analyzing the task requirements and the training dataset, we used data augmentation, adopted the large model GPT for summary generation, and added supervised contrastive learning to the basic BERT model. Our system achieved an F1 score of 0.551, ranking 14th in the competition leaderboard. Our system achieves an F1 score improvement of 0.1241 over the official baseline model.", }
This paper describes our work for SemEval-2024 Task 5: The Legal Argument Reasoning Task in Civil Procedure. After analyzing the task requirements and the training dataset, we used data augmentation, adopted the large model GPT for summary generation, and added supervised contrastive learning to the basic BERT model. Our system achieved an F1 score of 0.551, ranking 14th on the competition leaderboard and improving on the official baseline model by 0.1241 F1.
[ "Sun, Binjie", "Zhou, Xiaobing" ]
ignore at SemEval-2024 Task 5: A Legal Classification Model with Summary Generation and Contrastive Learning
semeval-1.80
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.81.bib
https://aclanthology.org/2024.semeval-1.81/
@inproceedings{zhang-etal-2024-samsung, title = "{S}amsung Research {C}hina-{B}eijing at {S}em{E}val-2024 Task 3: A multi-stage framework for Emotion-Cause Pair Extraction in Conversations", author = "Zhang, Shen and Zhang, Haojie and Zhang, Jing and Zhang, Xudong and Zhuang, Yimeng and Wu, Jinting", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.81", doi = "10.18653/v1/2024.semeval-1.81", pages = "536--546", abstract = "In human-computer interaction, it is crucial for agents to respond to human by understanding their emotions. unraveling the causes of emotions is more challenging. A new task named Multimodal Emotion-Cause Pair Extraction in Conversations is responsible for recognizing emotion and identifying causal expressions. In this study, we propose a multi-stage framework to generate emotion and extract the emotion causal pairs given the target emotion. In the first stage, LLaMA2-based InstructERC is utilized to extract the emotion category of each utterance in a conversation. After emotion recognition, a two-stream attention model is employed to extract the emotion causal pairs given the target emotion for subtask 2 while MuTEC is employed to extract causal span for subtask 1. Our approach achieved first place for both of the two subtasks in the competition.", }
In human-computer interaction, it is crucial for agents to respond to humans by understanding their emotions; unraveling the causes of those emotions is even more challenging. A new task, Multimodal Emotion-Cause Pair Extraction in Conversations, targets recognizing emotions and identifying their causal expressions. In this study, we propose a multi-stage framework that recognizes emotions and extracts the emotion-cause pairs given the target emotion. In the first stage, the LLaMA2-based InstructERC is utilized to extract the emotion category of each utterance in a conversation. After emotion recognition, a two-stream attention model is employed to extract the emotion-cause pairs given the target emotion for subtask 2, while MuTEC is employed to extract causal spans for subtask 1. Our approach achieved first place in both subtasks of the competition.
[ "Zhang, Shen", "Zhang, Haojie", "Zhang, Jing", "Zhang, Xudong", "Zhuang, Yimeng", "Wu, Jinting" ]
Samsung Research China-Beijing at SemEval-2024 Task 3: A multi-stage framework for Emotion-Cause Pair Extraction in Conversations
semeval-1.81
Poster
2404.16905
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.82.bib
https://aclanthology.org/2024.semeval-1.82/
@inproceedings{wu-etal-2024-werkzeug, title = "Werkzeug at {S}em{E}val-2024 Task 8: {LLM}-Generated Text Detection via Gated Mixture-of-Experts Fine-Tuning", author = "Wu, Youlin and Wang, Kaichun and Ma, Kai and Yang, Liang and Lin, Hongfei", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.82", doi = "10.18653/v1/2024.semeval-1.82", pages = "547--552", abstract = "Recent advancements in Large Language Models (LLMs) have propelled text generation to unprecedented heights, approaching human-level quality. However, it poses a new challenge to distinguish LLM-generated text from human-written text. Presently, most methods address this issue through classification, achieved by fine-tuning on small language models. Unfortunately, small language models suffer from anisotropy issue, where encoded text embeddings become difficult to differentiate in the latent space. Moreover, LLMs possess the ability to alter language styles with versatility, further complicating the classification task. To tackle these challenges, we propose Gated Mixture-of-Experts Fine-tuning (GMoEF) to detect LLM-generated text. GMoEF leverages parametric whitening to normalize text embeddings, thereby mitigating the anisotropy problem. Additionally, GMoEF employs the mixture-of-experts framework equipped with gating router to capture features of LLM-generated text from multiple perspectives. Our GMoEF achieved an impressive ranking of {\#}8 out of 70 teams. The source code is available on https://gitlab.com/sigrs/gmoef.", }
Recent advancements in Large Language Models (LLMs) have propelled text generation to unprecedented heights, approaching human-level quality. However, this poses a new challenge: distinguishing LLM-generated text from human-written text. Presently, most methods address this issue through classification by fine-tuning small language models. Unfortunately, small language models suffer from the anisotropy issue, where encoded text embeddings become difficult to differentiate in the latent space. Moreover, LLMs can alter language styles with versatility, further complicating the classification task. To tackle these challenges, we propose Gated Mixture-of-Experts Fine-tuning (GMoEF) to detect LLM-generated text. GMoEF leverages parametric whitening to normalize text embeddings, thereby mitigating the anisotropy problem. Additionally, GMoEF employs a mixture-of-experts framework equipped with a gating router to capture features of LLM-generated text from multiple perspectives. Our GMoEF achieved a ranking of {\#}8 out of 70 teams. The source code is available at https://gitlab.com/sigrs/gmoef.
[ "Wu, Youlin", "Wang, Kaichun", "Ma, Kai", "Yang, Liang", "Lin, Hongfei" ]
Werkzeug at SemEval-2024 Task 8: LLM-Generated Text Detection via Gated Mixture-of-Experts Fine-Tuning
semeval-1.82
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
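Both mechanisms named in the GMoEF record, parametric whitening and a gated mixture-of-experts, have compact reference forms. The sketch below shows generic versions under assumed sizes; the paper's learned whitening and expert design may differ.

```python
import torch
import torch.nn as nn

def whiten(X, eps=1e-5):
    """Map embeddings to zero mean and (approximately) identity covariance."""
    mu = X.mean(0, keepdim=True)
    cov = torch.cov((X - mu).T)                          # (H, H)
    U, S, _ = torch.linalg.svd(cov)
    W = U @ torch.diag(1.0 / torch.sqrt(S + eps))
    return (X - mu) @ W

class GatedMoEHead(nn.Module):
    def __init__(self, dim, n_experts=4, n_classes=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(dim, n_classes) for _ in range(n_experts)])
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, x):
        weights = self.gate(x).softmax(-1)                   # (B, E)
        outs = torch.stack([e(x) for e in self.experts], 1)  # (B, E, C)
        return (weights.unsqueeze(-1) * outs).sum(1)         # gated mixture
```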
https://aclanthology.org/2024.semeval-1.83.bib
https://aclanthology.org/2024.semeval-1.83/
@inproceedings{rajesh-etal-2024-ssn, title = "{SSN}{\_}{S}emeval10 at {S}em{E}val-2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversations", author = "Rajesh, Antony and Abirami, Supriya and Chandrabose, Aravindan and Kumar, Senthil", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.83", doi = "10.18653/v1/2024.semeval-1.83", pages = "553--557", abstract = "This paper presents a transformer-based model for recognizing emotions in Hindi-English code-mixed conversations, adhering to the SemEval task constraints. Leveraging BERT-based transformers, we fine-tune pre-trained models on the dataset, incorporating tokenization and attention mechanisms. Our approach achieves competitive performance (weighted F1-score of 0.4), showcasing the effectiveness of BERT in nuanced emotion analysis tasks within code-mixed conversational contexts.", }
This paper presents a transformer-based model for recognizing emotions in Hindi-English code-mixed conversations, adhering to the SemEval task constraints. Leveraging BERT-based transformers, we fine-tune pre-trained models on the dataset, incorporating tokenization and attention mechanisms. Our approach achieves competitive performance (weighted F1-score of 0.4), showcasing the effectiveness of BERT in nuanced emotion analysis tasks within code-mixed conversational contexts.
[ "Rajesh, Antony", "Abirami, Supriya", "Ch", "rabose, Aravindan", "Kumar, Senthil" ]
SSN_Semeval10 at SemEval-2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversations
semeval-1.83
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.84.bib
https://aclanthology.org/2024.semeval-1.84/
@inproceedings{spiegel-macko-2024-kinit, title = "{KI}n{IT} at {S}em{E}val-2024 Task 8: Fine-tuned {LLM}s for Multilingual Machine-Generated Text Detection", author = "Spiegel, Michal and Macko, Dominik", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.84", doi = "10.18653/v1/2024.semeval-1.84", pages = "558--564", abstract = "SemEval-2024 Task 8 is focused on multigenerator, multidomain, and multilingual black-box machine-generated text detection. Such a detection is important for preventing a potential misuse of large language models (LLMs), the newest of which are very capable in generating multilingual human-like texts. We have coped with this task in multiple ways, utilizing language identification and parameter-efficient fine-tuning of smaller LLMs for text classification. We have further used the per-language classification-threshold calibration to uniquely combine fine-tuned models predictions with statistical detection metrics to improve generalization of the system detection performance. Our submitted method achieved competitive results, ranking at the fourth place, just under 1 percentage point behind the winner.", }
SemEval-2024 Task 8 focuses on multigenerator, multidomain, and multilingual black-box machine-generated text detection. Such detection is important for preventing potential misuse of large language models (LLMs), the newest of which are very capable of generating multilingual human-like texts. We approached this task in multiple ways, utilizing language identification and parameter-efficient fine-tuning of smaller LLMs for text classification. We further used per-language classification-threshold calibration to uniquely combine the fine-tuned models{'} predictions with statistical detection metrics, improving the generalization of the system{'}s detection performance. Our submitted method achieved competitive results, ranking fourth, just under 1 percentage point behind the winner.
[ "Spiegel, Michal", "Macko, Dominik" ]
KInIT at SemEval-2024 Task 8: Fine-tuned LLMs for Multilingual Machine-Generated Text Detection
semeval-1.84
Poster
2402.13671
[ "https://github.com/kinit-sk/semeval-2024-task-8-machine-text-detection" ]
-1
-1
-1
-1
0
[]
[]
[]
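The per-language threshold calibration in the KInIT record can be sketched as a grid search that, for each language, picks the decision threshold maximizing dev-set F1. The score and label inputs are assumed, and the fusion with statistical metrics is omitted.

```python
# Per-language decision-threshold calibration on a development set.
import numpy as np
from sklearn.metrics import f1_score

def calibrate(scores, labels, langs):
    thresholds = {}
    for lang in set(langs):
        idx = [i for i, l in enumerate(langs) if l == lang]
        s, y = np.asarray(scores)[idx], np.asarray(labels)[idx]
        grid = np.linspace(0.05, 0.95, 19)
        thresholds[lang] = max(grid, key=lambda t: f1_score(y, s >= t))
    return thresholds

def predict(score, lang, thresholds, default=0.5):
    return int(score >= thresholds.get(lang, default))
```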
https://aclanthology.org/2024.semeval-1.85.bib
https://aclanthology.org/2024.semeval-1.85/
@inproceedings{ebrahimi-etal-2024-sharif, title = "Sharif-{MGTD} at {S}em{E}val-2024 Task 8: A Transformer-Based Approach to Detect Machine Generated Text", author = "Ebrahimi, Seyedeh Fatemeh and Akhavan Azari, Karim and Iravani, Amirmasoud and Qazvini, Arian and Sadeghi, Pouya and Taghavi, Zeinab and Sameti, Hossein", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.85", doi = "10.18653/v1/2024.semeval-1.85", pages = "565--572", abstract = "In this paper, we delve into the realm of detecting machine-generated text (MGT) within Natural Language Processing (NLP). Our approach involves fine-tuning a RoBERTa-base Transformer, a robust neural architecture, to tackle MGT detection as a binary classification task. Specifically focusing on Subtask A (Monolingual - English) within the SemEval-2024 competition framework, our system achieves a 78.9{\%} accuracy on the test dataset, placing us 57th among participants. While our system demonstrates proficiency in identifying human-written texts, it faces challenges in accurately discerning MGTs.", }
In this paper, we delve into the realm of detecting machine-generated text (MGT) within Natural Language Processing (NLP). Our approach involves fine-tuning a RoBERTa-base Transformer, a robust neural architecture, to tackle MGT detection as a binary classification task. Specifically focusing on Subtask A (Monolingual - English) within the SemEval-2024 competition framework, our system achieves a 78.9{\%} accuracy on the test dataset, placing us 57th among participants. While our system demonstrates proficiency in identifying human-written texts, it faces challenges in accurately discerning MGTs.
[ "Ebrahimi, Seyedeh Fatemeh", "Akhavan Azari, Karim", "Iravani, Amirmasoud", "Qazvini, Arian", "Sadeghi, Pouya", "Taghavi, Zeinab", "Sameti, Hossein" ]
Sharif-MGTD at SemEval-2024 Task 8: A Transformer-Based Approach to Detect Machine Generated Text
semeval-1.85
Poster
2407.11774
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.86.bib
https://aclanthology.org/2024.semeval-1.86/
@inproceedings{bendahman-etal-2024-irit, title = "{IRIT}-Berger-Levrault at {S}em{E}val-2024: How Sensitive Sentence Embeddings are to Hallucinations?", author = "Bendahman, Nihed and Pinel-sauvagnat, Karen and Hubert, Gilles and Billami, Mokhtar", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.86", doi = "10.18653/v1/2024.semeval-1.86", pages = "573--578", abstract = "This article presents our participation to Task 6 of SemEval-2024, named SHROOM (a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes), which aims at detecting hallucinations. We propose two types of approaches for the task: the first one is based on sentence embeddings and cosine similarity metric, and the second one uses LLMs (Large Language Model). We found that LLMs fail to improve the performance achieved by embedding generation models. The latter outperform the baseline provided by the organizers, and our best system achieves 78{\%} accuracy.", }
This article presents our participation in Task 6 of SemEval-2024, named SHROOM (a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes), which aims at detecting hallucinations. We propose two types of approaches for the task: the first is based on sentence embeddings and the cosine similarity metric, and the second uses LLMs (Large Language Models). We found that LLMs fail to improve the performance achieved by embedding generation models. The latter outperform the baseline provided by the organizers, and our best system achieves 78{\%} accuracy.
[ "Bendahman, Nihed", "Pinel-sauvagnat, Karen", "Hubert, Gilles", "Billami, Mokhtar" ]
IRIT-Berger-Levrault at SemEval-2024: How Sensitive Sentence Embeddings are to Hallucinations?
semeval-1.86
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
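The first approach in the record above, sentence embeddings plus cosine similarity, fits in a few lines. The checkpoint and threshold below are illustrative assumptions, not the authors' tuned values.

```python
# Flag an output as a hallucination when its embedding drifts too far
# from the target text. Threshold and model id are assumed.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def is_hallucination(output_text, target_text, threshold=0.6):
    emb = model.encode([output_text, target_text], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item() < threshold
```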
https://aclanthology.org/2024.semeval-1.87.bib
https://aclanthology.org/2024.semeval-1.87/
@inproceedings{lau-wu-2024-cyut, title = "{CYUT} at {S}em{E}val-2024 Task 7: A Numerals Augmentation and Feature Enhancement Approach to Numeral Reading Comprehension", author = "Lau, Tsz-yeung and Wu, Shih-hung", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.87", doi = "10.18653/v1/2024.semeval-1.87", pages = "579--585", abstract = "This study explores Task 2 in NumEval-2024, which is SemEval-2024(Semantic Evaluation)Task 7 , focusing on the Reading Comprehension of Numerals in Text (Chinese). The datasetutilized in this study is the Numeral-related Question Answering Dataset (NQuAD), and the model employed is BERT. The data undergoes preprocessing, incorporating Numerals Augmentation and Feature Enhancement to numerical entities before model training. Additionally, fine-tuning will also be applied. The result was an accuracy rate of 77.09{\%}, representing a 7.14{\%} improvement compared to the initial NQuAD processing model, referred to as the Numeracy-Enhanced Model (NEMo).", }
This study explores Task 2 of NumEval-2024, i.e., SemEval-2024 (Semantic Evaluation) Task 7, focusing on the Reading Comprehension of Numerals in Text (Chinese). The dataset utilized in this study is the Numeral-related Question Answering Dataset (NQuAD), and the model employed is BERT. The data undergoes preprocessing that incorporates numerals augmentation and feature enhancement for numerical entities before model training, and fine-tuning is also applied. The result was an accuracy rate of 77.09{\%}, representing a 7.14{\%} improvement over the initial NQuAD processing model, referred to as the Numeracy-Enhanced Model (NEMo).
[ "Lau, Tsz-yeung", "Wu, Shih-hung" ]
CYUT at SemEval-2024 Task 7: A Numerals Augmentation and Feature Enhancement Approach to Numeral Reading Comprehension
semeval-1.87
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.88.bib
https://aclanthology.org/2024.semeval-1.88/
@inproceedings{micluta-campeanu-etal-2024-unibuc, title = "{U}ni{B}uc at {S}em{E}val-2024 Task 2: Tailored Prompting with Solar for Clinical {NLI}", author = "Micluta-Campeanu, Marius and Creanga, Claudiu and Bucur, Ana-maria and Uban, Ana Sabina and Dinu, Liviu P.", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.88", doi = "10.18653/v1/2024.semeval-1.88", pages = "586--595", abstract = "This paper describes the approach of the UniBuc team in tackling the SemEval 2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials. We used SOLAR Instruct, without any fine-tuning, while focusing on input manipulation and tailored prompting. By customizing prompts for individual CTR sections, in both zero-shot and few-shots settings, we managed to achieve a consistency score of 0.72, ranking 14th in the leaderboard. Our thorough error analysis revealed that our model has a tendency to take shortcuts and rely on simple heuristics, especially when dealing with semantic-preserving changes.", }
This paper describes the approach of the UniBuc team in tackling SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials. We used SOLAR Instruct, without any fine-tuning, focusing instead on input manipulation and tailored prompting. By customizing prompts for individual CTR sections, in both zero-shot and few-shot settings, we achieved a consistency score of 0.72, ranking 14th on the leaderboard. Our thorough error analysis revealed that our model has a tendency to take shortcuts and rely on simple heuristics, especially when dealing with semantic-preserving changes.
[ "Micluta-Campeanu, Marius", "Creanga, Claudiu", "Bucur, Ana-maria", "Uban, Ana Sabina", "Dinu, Liviu P." ]
UniBuc at SemEval-2024 Task 2: Tailored Prompting with Solar for Clinical NLI
semeval-1.88
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.89.bib
https://aclanthology.org/2024.semeval-1.89/
@inproceedings{laken-2024-fralak, title = "Fralak at {S}em{E}val-2024 Task 4: combining {RNN}-generated hierarchy paths with simple neural nets for hierarchical multilabel text classification in a multilingual zero-shot setting", author = "Laken, Katarina", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.89", doi = "10.18653/v1/2024.semeval-1.89", pages = "596--601", abstract = "This paper describes the submission of team fralak for subtask 1 of task 4 of the Semeval-2024 shared task: {`}Multilingual detection of persuasion techniques in memes{'}. The first subtask included only the textual content of the memes. We restructured the labels into strings that showed the full path through the hierarchy. The system includes an RNN module that is trained to generate these strings. This module was then incorporated in an ensemble model with 2 more models consisting of basic fully connected networks. Although our model did not perform particularly well on the English only setting, we found that it generalized better to other languages in a zero-shot context than most other models. Some additional experiments were performed to explain this. Findings suggest that the RNN generating the restructured labels generalized well across languages, but preprocessing did not seem to play a role. We conclude by giving suggestions for future improvements of our core idea.", }
This paper describes the submission of team fralak for subtask 1 of task 4 of the SemEval-2024 shared task: {`}Multilingual detection of persuasion techniques in memes{'}. The first subtask included only the textual content of the memes. We restructured the labels into strings that show the full path through the hierarchy. The system includes an RNN module trained to generate these strings, which was then incorporated into an ensemble with two additional models consisting of basic fully connected networks. Although our model did not perform particularly well in the English-only setting, we found that it generalized to other languages in a zero-shot context better than most other models. Additional experiments were performed to explain this. The findings suggest that the RNN generating the restructured labels generalized well across languages, while preprocessing did not seem to play a role. We conclude by giving suggestions for future improvements of our core idea.
[ "Laken, Katarina" ]
Fralak at SemEval-2024 Task 4: combining RNN-generated hierarchy paths with simple neural nets for hierarchical multilabel text classification in a multilingual zero-shot setting
semeval-1.89
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.90.bib
https://aclanthology.org/2024.semeval-1.90/
@inproceedings{wunderle-etal-2024-otterlyobsessedwithsemantics, title = "{O}tterly{O}bsessed{W}ith{S}emantics at {S}em{E}val-2024 Task 4: Developing a Hierarchical Multi-Label Classification Head for Large Language Models", author = "Wunderle, Julia and Schubert, Julian and Cacciatore, Antonella and Zehe, Albin and Pfister, Jan and Hotho, Andreas", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.90", doi = "10.18653/v1/2024.semeval-1.90", pages = "602--612", abstract = "For our submission for Subtask 1, we developed a custom classification head that is designed to be applied atop of a Large Language Model. We reconstructed the hierarchy across multiple fully connected layers, allowing us to incorporate previous foundational decisions in subsequent, more fine-grained layers. To find the best hyperparameters, we conducted a grid-search and to compete in the multilingual setting, we translated all documents to English.", }
For our submission for Subtask 1, we developed a custom classification head designed to be applied atop a Large Language Model. We reconstructed the hierarchy across multiple fully connected layers, allowing us to incorporate previous foundational decisions in subsequent, more fine-grained layers. To find the best hyperparameters, we conducted a grid search, and to compete in the multilingual setting, we translated all documents into English.
[ "Wunderle, Julia", "Schubert, Julian", "Cacciatore, Antonella", "Zehe, Albin", "Pfister, Jan", "Hotho, Andreas" ]
OtterlyObsessedWithSemantics at SemEval-2024 Task 4: Developing a Hierarchical Multi-Label Classification Head for Large Language Models
semeval-1.90
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
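The hierarchical head described above, where earlier decisions feed later, finer-grained layers, can be sketched as a chain of linear layers whose inputs concatenate the backbone features with the previous level's logits. The level sizes here are assumed for illustration.

```python
import torch
import torch.nn as nn

class HierarchicalHead(nn.Module):
    def __init__(self, hidden, level_sizes=(2, 6, 20)):
        super().__init__()
        self.levels = nn.ModuleList()
        in_dim = hidden
        for size in level_sizes:
            self.levels.append(nn.Linear(in_dim, size))
            in_dim = hidden + size        # next level also sees these logits

    def forward(self, x):
        logits, feats = [], x
        for layer in self.levels:
            out = layer(feats)
            logits.append(out)
            feats = torch.cat([x, out], dim=-1)   # condition on the decision
        return logits                             # one logit tensor per level
```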
https://aclanthology.org/2024.semeval-1.91.bib
https://aclanthology.org/2024.semeval-1.91/
@inproceedings{altinok-2024-nlp, title = "{D}-{NLP} at {S}em{E}val-2024 Task 2: Evaluating Clinical Inference Capabilities of Large Language Models", author = "Altinok, Duygu", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.91", doi = "10.18653/v1/2024.semeval-1.91", pages = "613--627", abstract = "Large language models (LLMs) have garnered significant attention and widespread usage due to their impressive performance in various tasks. However, they are not without their own set of challenges, including issues such as hallucinations, factual inconsistencies, and limitations in numerical-quantitative reasoning. Evaluating LLMs in miscellaneous reasoning tasks remains an active area of research. Prior to the breakthrough of LLMs, Transformers had already proven successful in the medical domain, effectively employed for various natural language understanding (NLU) tasks. Following this trend, LLMs have also been trained and utilized in the medical domain, raising concerns regarding factual accuracy, adherence tosafety protocols, and inherent limitations. In this paper, we focus on evaluating the natural language inference capabilities of popular open-source and closed-source LLMs using clinical trial reports as the dataset. We present the performance results of each LLM and further analyze their performance on a development set, particularly focusing on challenging instances that involve medical abbreviations and require numerical-quantitative reasoning. Gemini, our leading LLM, achieved a test set F1-score of 0.748, securing the ninth position on the task scoreboard. Our work is the first of its kind, offering a thorough examination of the inference capabilities of LLMs within the medical domain.", }
Large language models (LLMs) have garnered significant attention and widespread usage due to their impressive performance in various tasks. However, they are not without their own set of challenges, including issues such as hallucinations, factual inconsistencies, and limitations in numerical-quantitative reasoning. Evaluating LLMs on miscellaneous reasoning tasks remains an active area of research. Prior to the breakthrough of LLMs, Transformers had already proven successful in the medical domain, effectively employed for various natural language understanding (NLU) tasks. Following this trend, LLMs have also been trained and utilized in the medical domain, raising concerns regarding factual accuracy, adherence to safety protocols, and inherent limitations. In this paper, we focus on evaluating the natural language inference capabilities of popular open-source and closed-source LLMs using clinical trial reports as the dataset. We present the performance results of each LLM and further analyze their performance on a development set, particularly focusing on challenging instances that involve medical abbreviations and require numerical-quantitative reasoning. Gemini, our leading LLM, achieved a test set F1-score of 0.748, securing the ninth position on the task scoreboard. Our work is the first of its kind, offering a thorough examination of the inference capabilities of LLMs within the medical domain.
[ "Altinok, Duygu" ]
D-NLP at SemEval-2024 Task 2: Evaluating Clinical Inference Capabilities of Large Language Models
semeval-1.91
Poster
2405.04170
[ "https://github.com/duygua/semeval2024_nli4ct" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.92.bib
https://aclanthology.org/2024.semeval-1.92/
@inproceedings{li-etal-2024-lmeme, title = "{LMEME} at {S}em{E}val-2024 Task 4: Teacher Student Fusion - Integrating {CLIP} with {LLM}s for Enhanced Persuasion Detection", author = "Li, Shiyi and Wang, Yike and Yang, Liang and Zhang, Shaowu and Lin, Hongfei", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.92", doi = "10.18653/v1/2024.semeval-1.92", pages = "628--633", abstract = "This paper describes our system used in the SemEval-2024 Task 4 Multilingual Detection of Persuasion Techniques in Memes. Our team proposes a detection system that employs a Teacher Student Fusion framework. Initially, a Large Language Model serves as the teacher, engaging in abductive reasoning on multimodal inputs to generate background knowledge on persuasion techniques, assisting in the training of a smaller downstream model. The student model adopts CLIP as an encoder for text and image features, and we incorporate an attention mechanism for modality alignment. Ultimately, our proposed system achieves a Macro-F1 score of 0.8103, ranking 1st out of 20 on the leaderboard of Subtask 2b in English. In Bulgarian, Macedonian and Arabic, our detection capabilities are ranked 1/15, 3/15 and 14/15.", }
This paper describes our system used in SemEval-2024 Task 4, Multilingual Detection of Persuasion Techniques in Memes. Our team proposes a detection system that employs a Teacher-Student Fusion framework. Initially, a Large Language Model serves as the teacher, engaging in abductive reasoning on multimodal inputs to generate background knowledge on persuasion techniques, assisting in the training of a smaller downstream model. The student model adopts CLIP as an encoder for text and image features, and we incorporate an attention mechanism for modality alignment. Ultimately, our proposed system achieves a Macro-F1 score of 0.8103, ranking 1st out of 20 on the leaderboard of Subtask 2b in English. In Bulgarian, Macedonian, and Arabic, our system ranks 1st of 15, 3rd of 15, and 14th of 15, respectively.
[ "Li, Shiyi", "Wang, Yike", "Yang, Liang", "Zhang, Shaowu", "Lin, Hongfei" ]
LMEME at SemEval-2024 Task 4: Teacher Student Fusion - Integrating CLIP with LLMs for Enhanced Persuasion Detection
semeval-1.92
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
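The student model's encoder side in the LMEME record, CLIP features for meme text and image, can be sketched with the `transformers` CLIP API. The attention-based alignment module is left abstract, and the checkpoint choice is an assumption.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")   # assumed
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def encode_meme(image_path, text):
    """Return CLIP text and image features for a single meme."""
    inputs = processor(text=[text], images=Image.open(image_path),
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        text_feat = clip.get_text_features(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"])
        image_feat = clip.get_image_features(pixel_values=inputs["pixel_values"])
    return text_feat, image_feat   # to be fused by an attention module
```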
https://aclanthology.org/2024.semeval-1.93.bib
https://aclanthology.org/2024.semeval-1.93/
@inproceedings{shanbhag-etal-2024-innovators, title = "Innovators at {S}em{E}val-2024 Task 10: Revolutionizing Emotion Recognition and Flip Analysis in Code-Mixed Texts", author = "Shanbhag, Abhay and Jadhav, Suramya and Rathi, Shashank and Pande, Siddhesh and Kadam, Dipali", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.93", doi = "10.18653/v1/2024.semeval-1.93", pages = "634--641", abstract = "In this paper, we introduce our system for all three tracks of the SemEval 2024 EDiReF Shared Task 10, which focuses on Emotion Recognition in Conversation (ERC) and Emotion Flip Reasoning (EFR) within the domain of conversational analysis. Task-Track 1 (ERC) aims to assign an emotion to each utterance in the Hinglish language from a predefined set of possible emotions. Tracks 2 (EFR) and 3 (EFR) aim to identify the trigger utterance(s) for an emotion flip in a multi-party conversation dialogue in Hinglish and English text, respectively. For Track 1, our study spans both traditional machine learning ensemble techniques, including Decision Trees, SVM, Logistic Regression, and Multinomial NB models, as well as advanced transformer-based models like XLM-Roberta (XLMR), DistilRoberta, and T5 from Hugging Face{'}s transformer library. In the EFR competition, we developed and proposed two innovative algorithms to tackle the challenges presented in Tracks 2 and 3. Specifically, our team, Innovators, developed a standout algorithm that propelled us to secure the 2nd rank in Track 2, achieving an impressive F1 score of 0.79, and the 7th rank in Track 3, with an F1 score of 0.68.", }
In this paper, we introduce our system for all three tracks of the SemEval 2024 EDiReF Shared Task 10, which focuses on Emotion Recognition in Conversation (ERC) and Emotion Flip Reasoning (EFR) within the domain of conversational analysis. Task-Track 1 (ERC) aims to assign an emotion to each utterance in the Hinglish language from a predefined set of possible emotions. Tracks 2 (EFR) and 3 (EFR) aim to identify the trigger utterance(s) for an emotion flip in a multi-party conversation dialogue in Hinglish and English text, respectively. For Track 1, our study spans both traditional machine learning ensemble techniques, including Decision Trees, SVM, Logistic Regression, and Multinomial NB models, as well as advanced transformer-based models like XLM-Roberta (XLMR), DistilRoberta, and T5 from Hugging Face{'}s transformer library. In the EFR competition, we developed and proposed two innovative algorithms to tackle the challenges presented in Tracks 2 and 3. Specifically, our team, Innovators, developed a standout algorithm that propelled us to secure the 2nd rank in Track 2, achieving an impressive F1 score of 0.79, and the 7th rank in Track 3, with an F1 score of 0.68.
[ "Shanbhag, Abhay", "Jadhav, Suramya", "Rathi, Shashank", "P", "e, Siddhesh", "Kadam, Dipali" ]
Innovators at SemEval-2024 Task 10: Revolutionizing Emotion Recognition and Flip Analysis in Code-Mixed Texts
semeval-1.93
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
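For the Track 1 (ERC) side of the Innovators system above, here is a minimal sketch of a TF-IDF ensemble over the four classical learners the abstract names; the hard-voting setup, toy data, and hyperparameters are assumptions for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: code-mixed utterances paired with emotion labels.
texts = ["yaar this is so funny", "I am really angry right now"]
labels = ["joy", "anger"]

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=20)),
        ("svm", LinearSVC()),
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", MultinomialNB()),
    ],
    voting="hard",  # LinearSVC has no predict_proba, so hard voting
)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), ensemble)
model.fit(texts, labels)
print(model.predict(["why are you shouting"]))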
https://aclanthology.org/2024.semeval-1.94.bib
https://aclanthology.org/2024.semeval-1.94/
@inproceedings{yu-etal-2024-dutir938, title = "{DUTIR}938 at {S}em{E}val-2024 Task 4: Semi-Supervised Learning and Model Ensemble for Persuasion Techniques Detection in Memes", author = "Yu, Erchen and Wang, Junlong and Qiao, Xuening and Qi, Jiewei and Li, Zhaoqing and Lin, Hongfei and Zong, Linlin and Xu, Bo", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.94", doi = "10.18653/v1/2024.semeval-1.94", pages = "642--648", abstract = "The development of social platforms has facilitated the proliferation of disinformation, with memes becoming one of the most popular types of propaganda for disseminating disinformation on the internet. Effectively detecting the persuasion techniques hidden within memes is helpful in understanding user-generated content and further promoting the detection of disinformation on the internet. This paper demonstrates the approach proposed by Team DUTIR938 in Subtask 2b of SemEval-2024 Task 4. We propose a dual-channel model based on semi-supervised learning and model ensemble. We utilize CLIP to extract image features, and employ various pretrained language models under task-adaptive pretraining for text feature extraction. To enhance the detection and generalization capabilities of the model, we implement sample data augmentation using semi-supervised pseudo-labeling methods, introduce adversarial training strategies, and design a two-stage global model ensemble strategy. Our proposed method surpasses the provided baseline method, with Macro/Micro F1 values of 0.80910/0.83667 in the English leaderboard. Our submission ranks 3rd/19 in terms of Macro F1 and 1st/19 in terms of Micro F1.", }
The development of social platforms has facilitated the proliferation of disinformation, with memes becoming one of the most popular types of propaganda for disseminating disinformation on the internet. Effectively detecting the persuasion techniques hidden within memes is helpful in understanding user-generated content and further promoting the detection of disinformation on the internet. This paper demonstrates the approach proposed by Team DUTIR938 in Subtask 2b of SemEval-2024 Task 4. We propose a dual-channel model based on semi-supervised learning and model ensemble. We utilize CLIP to extract image features, and employ various pretrained language models under task-adaptive pretraining for text feature extraction. To enhance the detection and generalization capabilities of the model, we implement sample data augmentation using semi-supervised pseudo-labeling methods, introduce adversarial training strategies, and design a two-stage global model ensemble strategy. Our proposed method surpasses the provided baseline method, with Macro/Micro F1 values of 0.80910/0.83667 in the English leaderboard. Our submission ranks 3rd/19 in terms of Macro F1 and 1st/19 in terms of Micro F1.
[ "Yu, Erchen", "Wang, Junlong", "Qiao, Xuening", "Qi, Jiewei", "Li, Zhaoqing", "Lin, Hongfei", "Zong, Linlin", "Xu, Bo" ]
DUTIR938 at SemEval-2024 Task 4: Semi-Supervised Learning and Model Ensemble for Persuasion Techniques Detection in Memes
semeval-1.94
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
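A minimal sketch of the pseudo-labeling step in the DUTIR938 abstract above: a model trained on labeled data labels the unlabeled pool, and only high-confidence predictions are added back for retraining. The threshold, toy features, and logistic-regression stand-in (in place of their CLIP/transformer channels) are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label(model, X_unlabeled, threshold=0.9):
    """Return (X, y) pairs whose top predicted probability clears the bar."""
    proba = model.predict_proba(X_unlabeled)
    keep = proba.max(axis=1) >= threshold
    return X_unlabeled[keep], proba[keep].argmax(axis=1)

# Hypothetical toy features standing in for CLIP/text embeddings.
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(40, 8)), rng.integers(0, 2, size=40)
X_unlab = rng.normal(size=(100, 8))

model = LogisticRegression().fit(X_lab, y_lab)
X_new, y_new = pseudo_label(model, X_unlab)
# Augment the training pool and retrain (one self-training round).
model.fit(np.vstack([X_lab, X_new]), np.concatenate([y_lab, y_new]))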
https://aclanthology.org/2024.semeval-1.95.bib
https://aclanthology.org/2024.semeval-1.95/
@inproceedings{creanga-dinu-2024-isds, title = "{ISDS}-{NLP} at {S}em{E}val-2024 Task 10: Transformer based neural networks for emotion recognition in conversations", author = "Creanga, Claudiu and Dinu, Liviu P.", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.95", doi = "10.18653/v1/2024.semeval-1.95", pages = "649--654", abstract = "This paper outlines the approach of the ISDS-NLP team in the SemEval 2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF). For Subtask 1 we obtained a weighted F1 score of 0.43 and placed 12 in the leaderboard. We investigate two distinct approaches: Masked Language Modeling (MLM) and Causal Language Modeling (CLM). For MLM, we employ pre-trained BERT-like models in a multilingual setting, fine-tuning them with a classifier to predict emotions. Experiments with varying input lengths, classifier architectures, and fine-tuning strategies demonstrate the effectiveness of this approach. Additionally, we utilize Mistral 7B Instruct V0.2, a state-of-the-art model, applying zero-shot and few-shot prompting techniques. Our findings indicate that while Mistral shows promise, MLMs currently outperform them in sentence-level emotion classification.", }
This paper outlines the approach of the ISDS-NLP team in the SemEval 2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF). For Subtask 1 we obtained a weighted F1 score of 0.43 and placed 12th on the leaderboard. We investigate two distinct approaches: Masked Language Modeling (MLM) and Causal Language Modeling (CLM). For MLM, we employ pre-trained BERT-like models in a multilingual setting, fine-tuning them with a classifier to predict emotions. Experiments with varying input lengths, classifier architectures, and fine-tuning strategies demonstrate the effectiveness of this approach. Additionally, we utilize Mistral 7B Instruct V0.2, a state-of-the-art model, applying zero-shot and few-shot prompting techniques. Our findings indicate that while Mistral shows promise, MLMs currently outperform it in sentence-level emotion classification.
[ "Creanga, Claudiu", "Dinu, Liviu P." ]
ISDS-NLP at SemEval-2024 Task 10: Transformer based neural networks for emotion recognition in conversations
semeval-1.95
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
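A minimal sketch of the MLM branch of the ISDS-NLP entry above: a multilingual BERT-like encoder fine-tuned with a classification head to predict per-utterance emotions. The checkpoint, label count, and single training step shown are illustrative assumptions.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"  # assumed; any BERT-like encoder works
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=7)

batch = tok(["kya baat hai, I am so happy!"], return_tensors="pt",
            truncation=True, padding=True)
labels = torch.tensor([3])  # hypothetical emotion id

model.train()
out = model(**batch, labels=labels)  # cross-entropy loss built in
out.loss.backward()                  # one fine-tuning step (optimizer omitted)
print(out.logits.softmax(-1))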
https://aclanthology.org/2024.semeval-1.96.bib
https://aclanthology.org/2024.semeval-1.96/
@inproceedings{pan-etal-2024-umuteam, title = "{UMUT}eam at {S}em{E}val-2024 Task 4: Multimodal Identification of Persuasive Techniques in Memes through Large Language Models", author = "Pan, Ronghao and Garc{\'\i}a-d{\'\i}az, Jos{\'e} Antonio and Valencia-garc{\'\i}a, Rafael", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.96", doi = "10.18653/v1/2024.semeval-1.96", pages = "655--666", abstract = "In this manuscript we describe the UMUTeam{'}s participation in SemEval-2024 Task 4, a shared task to identify different persuasion techniques in memes. The task is divided into three subtasks. One is a multimodal subtask of identifying whether a meme contains persuasion or not. The others are hierarchical multi-label classifications that consider textual content alone or a multimodal setting of text and visual content. This is a multilingual task, and we participated in all three subtasks but we focus only on the English dataset. Our approach is based on a fine-tuning approach with the pre-trained RoBERTa-large model. In addition, for multimodal cases with both textual and visual content, we used the LMM called LlaVa to extract image descriptions and combine them with the meme text. Our system performed well in three subtasks, achieving the tenth best result with an Hierarchical F1 of 64.774{\%}, the fourth best in Subtask 2a with an Hierarchical F1 of 69.003{\%}, and the eighth best in Subtask 2b with a Macro F1 of 78.660{\%}.", }
In this manuscript we describe the UMUTeam{'}s participation in SemEval-2024 Task 4, a shared task to identify different persuasion techniques in memes. The task is divided into three subtasks. One is a multimodal subtask of identifying whether a meme contains persuasion or not. The others are hierarchical multi-label classifications that consider textual content alone or a multimodal setting of text and visual content. This is a multilingual task; we participated in all three subtasks but focus only on the English dataset. Our approach is based on fine-tuning the pre-trained RoBERTa-large model. In addition, for multimodal cases with both textual and visual content, we used the LMM called LLaVA to extract image descriptions and combine them with the meme text. Our system performed well in the three subtasks, achieving the tenth best result with a Hierarchical F1 of 64.774{\%}, the fourth best in Subtask 2a with a Hierarchical F1 of 69.003{\%}, and the eighth best in Subtask 2b with a Macro F1 of 78.660{\%}.
[ "Pan, Ronghao", "Garc{\\'\\i}a-d{\\'\\i}az, Jos{\\'e} Antonio", "Valencia-garc{\\'\\i}a, Rafael" ]
UMUTeam at SemEval-2024 Task 4: Multimodal Identification of Persuasive Techniques in Memes through Large Language Models
semeval-1.96
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
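A minimal sketch of the multimodal fusion described in the UMUTeam abstract above: an LMM-produced image description is paired with the meme text and fed to RoBERTa-large. The `describe_image` helper is a hypothetical stand-in for the actual LLaVA call.

from transformers import AutoTokenizer, AutoModelForSequenceClassification

def describe_image(image_path: str) -> str:
    """Placeholder for the LLaVA call; the real system prompts an LMM here."""
    return "a crowd of people holding protest signs"  # hypothetical output

tok = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=2)  # Subtask 2b is binary

meme_text = "THEY don't want you to know the truth"
caption = describe_image("meme_001.png")
# Fuse modalities by encoding text and image description as a sentence pair.
batch = tok(meme_text, caption, return_tensors="pt", truncation=True)
logits = model(**batch).logits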
https://aclanthology.org/2024.semeval-1.97.bib
https://aclanthology.org/2024.semeval-1.97/
@inproceedings{cheng-etal-2024-mips, title = "{MIPS} at {S}em{E}val-2024 Task 3: Multimodal Emotion-Cause Pair Extraction in Conversations with Multimodal Language Models", author = "Cheng, Zebang and Niu, Fuqiang and Lin, Yuxiang and Cheng, Zhi-qi and Peng, Xiaojiang and Zhang, Bowen", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.97", doi = "10.18653/v1/2024.semeval-1.97", pages = "667--674", abstract = "This paper presents our winning submission to Subtask 2 of SemEval 2024 Task 3 on multimodal emotion cause analysis in conversations. We propose a novel Multimodal Emotion Recognition and Multimodal Emotion Cause Extraction (MER-MCE) framework that integrates text, audio, and visual modalities using specialized emotion encoders. Our approach sets itself apart from top-performing teams by leveraging modality-specific features for enhanced emotion understanding and causality inference. Experimental evaluation demonstrates the advantages of our multimodal approach, with our submission achieving a competitive weighted F1 score of 0.3435, ranking third with a margin of only 0.0339 behind the 1st team and 0.0025 behind the 2nd team.", }
This paper presents our winning submission to Subtask 2 of SemEval 2024 Task 3 on multimodal emotion cause analysis in conversations. We propose a novel Multimodal Emotion Recognition and Multimodal Emotion Cause Extraction (MER-MCE) framework that integrates text, audio, and visual modalities using specialized emotion encoders. Our approach sets itself apart from top-performing teams by leveraging modality-specific features for enhanced emotion understanding and causality inference. Experimental evaluation demonstrates the advantages of our multimodal approach, with our submission achieving a competitive weighted F1 score of 0.3435, ranking third with a margin of only 0.0339 behind the 1st team and 0.0025 behind the 2nd team.
[ "Cheng, Zebang", "Niu, Fuqiang", "Lin, Yuxiang", "Cheng, Zhi-qi", "Peng, Xiaojiang", "Zhang, Bowen" ]
MIPS at SemEval-2024 Task 3: Multimodal Emotion-Cause Pair Extraction in Conversations with Multimodal Language Models
semeval-1.97
Poster
2404.00511
[ "https://github.com/mips-colt/mer-mce" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.98.bib
https://aclanthology.org/2024.semeval-1.98/
@inproceedings{pan-etal-2024-umuteam-semeval, title = "{UMUT}eam at {S}em{E}val-2024 Task 6: Leveraging Zero-Shot Learning for Detecting Hallucinations and Related Observable Overgeneration Mistakes", author = "Pan, Ronghao and Garc{\'\i}a-d{\'\i}az, Jos{\'e} Antonio and Bernal-beltr{\'a}n, Tom{\'a}s and Valencia-garc{\'\i}a, Rafael", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.98", doi = "10.18653/v1/2024.semeval-1.98", pages = "675--681", abstract = "In these working notes we describe the UMUTeam{'}s participation in SemEval-2024 shared task 6, which aims at detecting grammatically correct output of Natural Language Generation with incorrect semantic information in two different setups: model-aware and model-agnostic tracks. The task is consists of three subtasks with different model setups. Our approach is based on exploiting the zero-shot classification capability of the Large Language Models LLaMa-2, Tulu and Mistral, through prompt engineering. Our system ranked eighteenth in the model-aware setup with an accuracy of 78.4{\%} and 29th in the model-agnostic setup with an accuracy of 76.9333{\%}.", }
In these working notes we describe the UMUTeam{'}s participation in SemEval-2024 shared task 6, which aims at detecting grammatically correct output of Natural Language Generation with incorrect semantic information in two different setups: model-aware and model-agnostic tracks. The task consists of three subtasks with different model setups. Our approach is based on exploiting the zero-shot classification capability of the Large Language Models LLaMa-2, Tulu and Mistral, through prompt engineering. Our system ranked eighteenth in the model-aware setup with an accuracy of 78.4{\%} and 29th in the model-agnostic setup with an accuracy of 76.9333{\%}.
[ "Pan, Ronghao", "Garc{\\'\\i}a-d{\\'\\i}az, Jos{\\'e} Antonio", "Bernal-beltr{\\'a}n, Tom{\\'a}s", "Valencia-garc{\\'\\i}a, Rafael" ]
UMUTeam at SemEval-2024 Task 6: Leveraging Zero-Shot Learning for Detecting Hallucinations and Related Observable Overgeneration Mistakes
semeval-1.98
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
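A minimal sketch of the zero-shot prompting approach in the UMUTeam abstract above; the `llm` function is a hypothetical stand-in for a call to LLaMa-2, Tulu, or Mistral, and the prompt wording is an assumption.

def llm(prompt: str) -> str:
    """Hypothetical call to LLaMa-2 / Tulu / Mistral (e.g. via an API
    or a local transformers pipeline)."""
    raise NotImplementedError

def detect_hallucination(source: str, hypothesis: str) -> bool:
    prompt = (
        "You are checking a natural language generation output.\n"
        f"Source: {source}\n"
        f"Output: {hypothesis}\n"
        "Does the output contain information not supported by the source? "
        "Answer strictly 'Hallucination' or 'Not Hallucination'."
    )
    answer = llm(prompt)
    # Zero-shot classification: map the free-form answer onto the two labels.
    return answer.strip().lower().startswith("hallucination")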
https://aclanthology.org/2024.semeval-1.99.bib
https://aclanthology.org/2024.semeval-1.99/
@inproceedings{verma-raithel-2024-dfki, title = "{DFKI}-{NLP} at {S}em{E}val-2024 Task 2: Towards Robust {LLM}s Using Data Perturbations and {M}in{M}ax Training", author = "Verma, Bhuvanesh and Raithel, Lisa", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.99", doi = "10.18653/v1/2024.semeval-1.99", pages = "682--696", abstract = "The NLI4CT task at SemEval-2024 emphasizes the development of robust models for Natural Language Inference on Clinical Trial Reports (CTRs) using large language models (LLMs). This edition introduces interventions specifically targeting the numerical, vocabulary, and semantic aspects of CTRs. Our proposed system harnesses the capabilities of the state-of-the-art Mistral model (Jiang et al., 2023), complemented by an auxiliary model, to focus on the intricate input space of the NLI4CT dataset. Through the incorporation of numerical and acronym-based perturbations to the data, we train a robust system capable of handling both semantic-altering and numerical contradiction interventions. Our analysis on the dataset sheds light on the challenging sections of the CTRs for reasoning.", }
The NLI4CT task at SemEval-2024 emphasizes the development of robust models for Natural Language Inference on Clinical Trial Reports (CTRs) using large language models (LLMs). This edition introduces interventions specifically targeting the numerical, vocabulary, and semantic aspects of CTRs. Our proposed system harnesses the capabilities of the state-of-the-art Mistral model (Jiang et al., 2023), complemented by an auxiliary model, to focus on the intricate input space of the NLI4CT dataset. Through the incorporation of numerical and acronym-based perturbations to the data, we train a robust system capable of handling both semantic-altering and numerical contradiction interventions. Our analysis on the dataset sheds light on the challenging sections of the CTRs for reasoning.
[ "Verma, Bhuvanesh", "Raithel, Lisa" ]
DFKI-NLP at SemEval-2024 Task 2: Towards Robust LLMs Using Data Perturbations and MinMax Training
semeval-1.99
Poster
2405.00321
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
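A minimal sketch of a numerical perturbation in the spirit of the DFKI-NLP abstract above: rescaling the numbers in a clinical-trial statement so that numeric claims flip. The specific perturbation rule is a simplified assumption; the paper also uses acronym-based perturbations.

import re
import random

def perturb_numbers(text: str, scale_range=(1.5, 3.0)) -> str:
    """Multiply each number in the text by a random factor, turning an
    entailed numeric claim (e.g. '50 patients') into a contradictory one."""
    def repl(m):
        value = float(m.group())
        scaled = value * random.uniform(*scale_range)
        return str(int(scaled)) if m.group().isdigit() else f"{scaled:.1f}"
    return re.sub(r"\d+(?:\.\d+)?", repl, text)

random.seed(42)
stmt = "The intervention arm enrolled 50 patients over 6.5 months."
print(perturb_numbers(stmt))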
https://aclanthology.org/2024.semeval-1.100.bib
https://aclanthology.org/2024.semeval-1.100/
@inproceedings{pan-etal-2024-umuteam-semeval-2024, title = "{UMUT}eam at {S}em{E}val-2024 Task 8: Combining Transformers and Syntax Features for Machine-Generated Text Detection", author = "Pan, Ronghao and Garc{\'\i}a-d{\'\i}az, Jos{\'e} Antonio and Vivancos-vicente, Pedro Jos{\'e} and Valencia-garc{\'\i}a, Rafael", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.100", doi = "10.18653/v1/2024.semeval-1.100", pages = "697--702", abstract = "These working notes describe the UMUTeam{'}s participation in Task 8 of SemEval-2024 entitled {``}Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection{''}. This shared task aims at identifying machine-generated text in order to mitigate its potential misuse. This shared task is divided into three subtasks: Subtask A, a binary classification task to determine whether a given full-text was written by a human or generated by a machine; Subtask B, a multi-class classification problem to determine, given a full-text, who generated it. It can be written by a human or generated by a specific language model; and Subtask C, mixed human-machine text recognition. We participated in Subtask B, using an approach based on fine-tuning a pre-trained model, such as RoBERTa, combined with syntactic features of the texts. Our system placed 23rd out of a total of 77 participants, with a score of 75.350{\%}, outperforming the baseline.", }
These working notes describe the UMUTeam{'}s participation in Task 8 of SemEval-2024 entitled {``}Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection{''}. This shared task aims at identifying machine-generated text in order to mitigate its potential misuse. The task is divided into three subtasks: Subtask A, a binary classification task to determine whether a given full-text was written by a human or generated by a machine; Subtask B, a multi-class classification problem to determine, given a full-text, who generated it, whether a human or a specific language model; and Subtask C, mixed human-machine text recognition. We participated in Subtask B, using an approach based on fine-tuning a pre-trained model, such as RoBERTa, combined with syntactic features of the texts. Our system placed 23rd out of a total of 77 participants, with a score of 75.350{\%}, outperforming the baseline.
[ "Pan, Ronghao", "Garc{\\'\\i}a-d{\\'\\i}az, Jos{\\'e} Antonio", "Vivancos-vicente, Pedro Jos{\\'e}", "Valencia-garc{\\'\\i}a, Rafael" ]
UMUTeam at SemEval-2024 Task 8: Combining Transformers and Syntax Features for Machine-Generated Text Detection
semeval-1.100
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
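The abstract above does not specify which syntactic features were used; as one plausible instantiation, this sketch concatenates a RoBERTa sentence embedding with a normalized coarse POS histogram, for a downstream classifier to consume. All feature choices here are assumptions.

import numpy as np
import torch
import nltk  # needs: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")
POS_PREFIXES = ["NN", "VB", "JJ", "RB", "PRP", "IN", "DT", "CC"]

def features(text: str) -> np.ndarray:
    """Transformer [CLS] embedding concatenated with syntax features."""
    with torch.no_grad():
        enc = tok(text, return_tensors="pt", truncation=True)
        cls = encoder(**enc).last_hidden_state[0, 0].numpy()
    tags = [t for _, t in nltk.pos_tag(nltk.word_tokenize(text))]
    syntax = np.array([sum(t.startswith(p) for t in tags)
                       for p in POS_PREFIXES], dtype=float)
    syntax /= max(len(tags), 1)  # normalized coarse POS histogram
    return np.concatenate([cls, syntax])

vec = features("This essay was definitely written by a tired human.")
print(vec.shape)  # 768 transformer dims + 8 syntax dims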
https://aclanthology.org/2024.semeval-1.101.bib
https://aclanthology.org/2024.semeval-1.101/
@inproceedings{pan-etal-2024-umuteam-semeval-2024-task, title = "{UMUT}eam at {S}em{E}val-2024 Task 10: Discovering and Reasoning about Emotions in Conversation using Transformers", author = "Pan, Ronghao and Garc{\'\i}a-d{\'\i}az, Jos{\'e} Antonio and Rold{\'a}n, Diego and Valencia-garc{\'\i}a, Rafael", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.101", doi = "10.18653/v1/2024.semeval-1.101", pages = "703--709", abstract = "These notes describe the participation of the UMUTeam in EDiReF, the 10th shared task of SemEval 2024. The goal is to develop systems for detecting and inferring emotional changes in the conversation. The task was divided into three related subtasks: (i) Emotion Recognition in Conversation (ERC) in Hindi-English code-mixed conversations, (ii) Emotion Flip Reasoning (EFR) in Hindi-English code-mixed conversations, and (iii) EFR in English conversations. We were involved in all three and our approach is based on a fine-tuning approach with different pre-trained models. After evaluation, we found BERT to be the best model for ERC and EFR and with this model we achieved the thirteenth best result with an F1 score of 43{\%} in Subtask 1, the sixth best in Subtask 2 with an F1 score of 26{\%} and the fifteenth best in Subtask 3 with an F1 score of 22{\%}.", }
These notes describe the participation of the UMUTeam in EDiReF, the 10th shared task of SemEval 2024. The goal is to develop systems for detecting and inferring emotional changes in conversations. The task was divided into three related subtasks: (i) Emotion Recognition in Conversation (ERC) in Hindi-English code-mixed conversations, (ii) Emotion Flip Reasoning (EFR) in Hindi-English code-mixed conversations, and (iii) EFR in English conversations. We were involved in all three, and our approach is based on fine-tuning different pre-trained models. After evaluation, we found BERT to be the best model for ERC and EFR; with this model we achieved the thirteenth best result with an F1 score of 43{\%} in Subtask 1, the sixth best in Subtask 2 with an F1 score of 26{\%}, and the fifteenth best in Subtask 3 with an F1 score of 22{\%}.
[ "Pan, Ronghao", "Garc{\\'\\i}a-d{\\'\\i}az, Jos{\\'e} Antonio", "Rold{\\'a}n, Diego", "Valencia-garc{\\'\\i}a, Rafael" ]
UMUTeam at SemEval-2024 Task 10: Discovering and Reasoning about Emotions in Conversation using Transformers
semeval-1.101
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.102.bib
https://aclanthology.org/2024.semeval-1.102/
@inproceedings{qu-meng-2024-tm, title = "{TM}-{TREK} at {S}em{E}val-2024 Task 8: Towards {LLM}-Based Automatic Boundary Detection for Human-Machine Mixed Text", author = "Qu, Xiaoyan and Meng, Xiangfeng", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.102", doi = "10.18653/v1/2024.semeval-1.102", pages = "710--715", abstract = "With the increasing prevalence of text gener- ated by large language models (LLMs), there is a growing concern about distinguishing be- tween LLM-generated and human-written texts in order to prevent the misuse of LLMs, such as the dissemination of misleading information and academic dishonesty. Previous research has primarily focused on classifying text as ei- ther entirely human-written or LLM-generated, neglecting the detection of mixed texts that con- tain both types of content. This paper explores LLMs{'} ability to identify boundaries in human- written and machine-generated mixed texts. We approach this task by transforming it into a to- ken classification problem and regard the label turning point as the boundary. Notably, our ensemble model of LLMs achieved first place in the {`}Human-Machine Mixed Text Detection{'} sub-task of the SemEval{'}24 Competition Task 8. Additionally, we investigate factors that in- fluence the capability of LLMs in detecting boundaries within mixed texts, including the incorporation of extra layers on top of LLMs, combination of segmentation loss, and the im- pact of pretraining. Our findings aim to provide valuable insights for future research in this area.", }
With the increasing prevalence of text generated by large language models (LLMs), there is a growing concern about distinguishing between LLM-generated and human-written texts in order to prevent the misuse of LLMs, such as the dissemination of misleading information and academic dishonesty. Previous research has primarily focused on classifying text as either entirely human-written or LLM-generated, neglecting the detection of mixed texts that contain both types of content. This paper explores LLMs{'} ability to identify boundaries in human-written and machine-generated mixed texts. We approach this task by transforming it into a token classification problem and regard the label turning point as the boundary. Notably, our ensemble model of LLMs achieved first place in the {`}Human-Machine Mixed Text Detection{'} sub-task of the SemEval{'}24 Competition Task 8. Additionally, we investigate factors that influence the capability of LLMs in detecting boundaries within mixed texts, including the incorporation of extra layers on top of LLMs, combination of segmentation loss, and the impact of pretraining. Our findings aim to provide valuable insights for future research in this area.
[ "Qu, Xiaoyan", "Meng, Xiangfeng" ]
TM-TREK at SemEval-2024 Task 8: Towards LLM-Based Automatic Boundary Detection for Human-Machine Mixed Text
semeval-1.102
Poster
2404.00899
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
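A minimal sketch of the token-classification framing in the TM-TREK abstract above: label every token human (0) or machine (1) and take the first predicted flip as the boundary. The backbone shown is an assumption, and the paper's extra layers, segmentation loss, and LLM ensemble are omitted.

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

name = "roberta-base"  # assumed backbone; the paper ensembles LLMs
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=2)

text = "I wrote this part myself. The remainder was produced by a model."
enc = tok(text, return_tensors="pt")
with torch.no_grad():
    labels = model(**enc).logits.argmax(-1)[0]  # 0 = human, 1 = machine

# Boundary = first token predicted as machine-generated (the turning point).
machine_positions = (labels == 1).nonzero(as_tuple=True)[0]
boundary = int(machine_positions[0]) if len(machine_positions) else len(labels)
print("predicted boundary token index:", boundary)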
https://aclanthology.org/2024.semeval-1.103.bib
https://aclanthology.org/2024.semeval-1.103/
@inproceedings{rajpoot-chukamphaeng-2024-team, title = "Team {NP}{\_}{PROBLEM} at {S}em{E}val-2024 Task 7: Numerical Reasoning in Headline Generation with Preference Optimization", author = "Rajpoot, Pawan and Chukamphaeng, Nut", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.103", doi = "10.18653/v1/2024.semeval-1.103", pages = "716--720", abstract = "While large language models (LLMs) exhibit impressive linguistic abilities, their numerical reasoning skills within real-world contexts re- main under-explored. This paper describes our participation in a headline-generation challenge by Numeval at Semeval 2024, which focused on numerical reasoning. Our system achieved an overall top numerical accuracy of 73.49{\%} on the task. We explore the system{'}s design choices contributing to this result and analyze common error patterns. Our findings highlight the potential and ongoing challenges of integrat- ing numerical reasoning within large language model-based headline generation.", }
While large language models (LLMs) exhibit impressive linguistic abilities, their numerical reasoning skills within real-world contexts remain under-explored. This paper describes our participation in a headline-generation challenge by Numeval at Semeval 2024, which focused on numerical reasoning. Our system achieved an overall top numerical accuracy of 73.49{\%} on the task. We explore the system{'}s design choices contributing to this result and analyze common error patterns. Our findings highlight the potential and ongoing challenges of integrating numerical reasoning within large language model-based headline generation.
[ "Rajpoot, Pawan", "Chukamphaeng, Nut" ]
Team NP_PROBLEM at SemEval-2024 Task 7: Numerical Reasoning in Headline Generation with Preference Optimization
semeval-1.103
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
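The abstract above does not name the preference-optimization objective used; as one plausible instantiation, here is a minimal DPO-style loss that pushes a policy to prefer the numerically correct headline over a flawed one, relative to a frozen reference model. All tensors are random stand-ins.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization: increase the policy's margin for the
    preferred headline beyond the reference model's margin."""
    policy_margin = policy_chosen_logp - policy_rejected_logp
    ref_margin = ref_chosen_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Hypothetical per-example sequence log-probs for a batch of 4 headline pairs.
pc, pr = torch.randn(4) - 1.0, torch.randn(4) - 2.0
rc, rr = torch.randn(4) - 1.5, torch.randn(4) - 1.5
print(dpo_loss(pc, pr, rc, rr))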
https://aclanthology.org/2024.semeval-1.104.bib
https://aclanthology.org/2024.semeval-1.104/
@inproceedings{chen-etal-2024-opdai, title = "{OPDAI} at {S}em{E}val-2024 Task 6: Small {LLM}s can Accelerate Hallucination Detection with Weakly Supervised Data", author = "Chen, Ze and Wei, Chengcheng and Fang, Songtan and He, Jiarong and Gao, Max", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.104", doi = "10.18653/v1/2024.semeval-1.104", pages = "721--729", abstract = "This paper mainly describes a unified system for hallucination detection of LLMs, which wins the second prize in the model-agnostic track of the SemEval-2024 Task 6, and also achieves considerable results in the model-aware track. This task aims to detect hallucination with LLMs for three different text-generation tasks without labeled training data. We utilize prompt engineering and few-shot learning to verify the performance of different LLMs on the validation data. Then we select the LLMs with better performance to generate high-quality weakly supervised training data, which not only satisfies the consistency of different LLMs, but also satisfies the consistency of the optimal LLM with different sampling parameters. Furthermore, we finetune different LLMs by using the constructed training data, and finding that a relatively small LLM can achieve a competitive level of performance in hallucination detection, when compared to the large LLMs and the prompt-based approaches using GPT-4.", }
This paper mainly describes a unified system for hallucination detection of LLMs, which wins the second prize in the model-agnostic track of the SemEval-2024 Task 6, and also achieves considerable results in the model-aware track. This task aims to detect hallucination with LLMs for three different text-generation tasks without labeled training data. We utilize prompt engineering and few-shot learning to verify the performance of different LLMs on the validation data. Then we select the LLMs with better performance to generate high-quality weakly supervised training data, which not only satisfies the consistency of different LLMs, but also satisfies the consistency of the optimal LLM with different sampling parameters. Furthermore, we finetune different LLMs using the constructed training data, and find that a relatively small LLM can achieve a competitive level of performance in hallucination detection, when compared to the large LLMs and the prompt-based approaches using GPT-4.
[ "Chen, Ze", "Wei, Chengcheng", "Fang, Songtan", "He, Jiarong", "Gao, Max" ]
OPDAI at SemEval-2024 Task 6: Small LLMs can Accelerate Hallucination Detection with Weakly Supervised Data
semeval-1.104
Poster
2402.12913
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
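A minimal sketch of the consistency filter implied by the OPDAI abstract above: an example keeps its weak label only when all judges (different LLMs, or the best LLM under different sampling parameters) agree. The `judges` callables are hypothetical stand-ins for real model calls (Python 3.10+ for the type hint).

from collections import Counter
from typing import Callable, List

def consistent_label(example: str,
                     judges: List[Callable[[str], str]],
                     samples_per_judge: int = 3) -> str | None:
    """Return the label if every vote agrees, else None (discard example)."""
    votes = [judge(example)
             for judge in judges
             for _ in range(samples_per_judge)]
    (label, count), = Counter(votes).most_common(1)
    return label if count == len(votes) else None

# Hypothetical judges; real ones would prompt different LLMs / temperatures.
always_hallucination = lambda x: "Hallucination"
weak_training_set = [
    (ex, lab) for ex in ["output A", "output B"]
    if (lab := consistent_label(ex, [always_hallucination])) is not None
]
print(weak_training_set)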
https://aclanthology.org/2024.semeval-1.105.bib
https://aclanthology.org/2024.semeval-1.105/
@inproceedings{arumugam-etal-2024-ssn, title = "{SSN}{\_}{ARMM} at {S}em{E}val-2024 Task 10: Emotion Detection in Multilingual Code-Mixed Conversations using {L}inear{SVC} and {TF}-{IDF}", author = "Arumugam, Rohith and Deborah, Angel and Sivanaiah, Rajalakshmi and R S, Milton and Thankanadar, Mirnalinee", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.105", doi = "10.18653/v1/2024.semeval-1.105", pages = "730--736", abstract = "Our paper explores a task involving the analysis of emotions and triggers within dialogues. We annotate each utterance with an emotion and identify triggers, focusing on binary labeling. We emphasize clear guidelines for replicability and conduct thorough analyses, including multiple system runs and experiments to highlight effective techniques. By simplifying the complexities and detailing clear methodologies, our study contributes to advancing emotion analysis and trigger identification within dialogue systems.", }
Our paper explores a task involving the analysis of emotions and triggers within dialogues. We annotate each utterance with an emotion and identify triggers, focusing on binary labeling. We emphasize clear guidelines for replicability and conduct thorough analyses, including multiple system runs and experiments to highlight effective techniques. By simplifying the complexities and detailing clear methodologies, our study contributes to advancing emotion analysis and trigger identification within dialogue systems.
[ "Arumugam, Rohith", "Deborah, Angel", "Sivanaiah, Rajalakshmi", "R S, Milton", "Thankanadar, Mirnalinee" ]
SSN_ARMM at SemEval-2024 Task 10: Emotion Detection in Multilingual Code-Mixed Conversations using LinearSVC and TF-IDF
semeval-1.105
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
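A minimal sketch of the LinearSVC-plus-TF-IDF classifier named in the SSN_ARMM title and abstract above; the toy code-mixed utterances and hyperparameters are assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline

# Hypothetical code-mixed utterances with emotion labels.
X = ["arre wah, great news!", "mujhe bahut gussa aa raha hai",
     "this is so sad yaar", "kitna funny scene tha"]
y = ["joy", "anger", "sadness", "joy"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True)),
    ("svc", LinearSVC(C=1.0)),
])
clf.fit(X, y)
print(clf.predict(["kya mast joke hai"]))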
https://aclanthology.org/2024.semeval-1.106.bib
https://aclanthology.org/2024.semeval-1.106/
@inproceedings{smilga-alabiad-2024-tuduo, title = {{T}{\"u}{D}uo at {S}em{E}val-2024 Task 2: Flan-T5 and Data Augmentation for Biomedical {NLI}}, author = "Smilga, Veronika and Alabiad, Hazem", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.106", doi = "10.18653/v1/2024.semeval-1.106", pages = "737--744", abstract = "This paper explores using data augmentation with smaller language models under 3 billion parameters for the SemEval-2024 Task 2 on Biomedical Natural Language Inference for Clinical Trials. We fine-tune models from the Flan-T5 family with and without using augmented data automatically generated by GPT-3.5-Turbo and find that data augmentation through techniques like synonym replacement, syntactic changes, adding random facts, and meaning reversion improves model faithfulness (ability to change predictions for semantically different inputs) and consistency (ability to give same predictions for semantic preserving changes). However, data augmentation tends to decrease performance on the original dataset distribution, as measured by F1 score. Our best system is the Flan-T5 XL model fine-tuned on the original training data combined with over 6,000 augmented examples. The system ranks in the top 10 for all three metrics.", }
This paper explores using data augmentation with smaller language models under 3 billion parameters for the SemEval-2024 Task 2 on Biomedical Natural Language Inference for Clinical Trials. We fine-tune models from the Flan-T5 family with and without using augmented data automatically generated by GPT-3.5-Turbo and find that data augmentation through techniques like synonym replacement, syntactic changes, adding random facts, and meaning reversion improves model faithfulness (ability to change predictions for semantically different inputs) and consistency (ability to give same predictions for semantic preserving changes). However, data augmentation tends to decrease performance on the original dataset distribution, as measured by F1 score. Our best system is the Flan-T5 XL model fine-tuned on the original training data combined with over 6,000 augmented examples. The system ranks in the top 10 for all three metrics.
[ "Smilga, Veronika", "Alabiad, Hazem" ]
TüDuo at SemEval-2024 Task 2: Flan-T5 and Data Augmentation for Biomedical NLI
semeval-1.106
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
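A minimal sketch of one augmentation family from the TüDuo abstract above, synonym replacement; the paper generated its augmentations with GPT-3.5-Turbo, so this WordNet-based variant is a cheaper stand-in, not the authors' pipeline (requires nltk.download("wordnet")).

import random
from nltk.corpus import wordnet

def synonym_replace(sentence: str, p: float = 0.3) -> str:
    """Swap each word for a random WordNet synonym with probability p."""
    out = []
    for word in sentence.split():
        lemmas = {l.name().replace("_", " ")
                  for s in wordnet.synsets(word) for l in s.lemmas()}
        lemmas.discard(word)
        if lemmas and random.random() < p:
            out.append(random.choice(sorted(lemmas)))
        else:
            out.append(word)
    return " ".join(out)

random.seed(7)
print(synonym_replace("the primary outcome improved significantly"))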
https://aclanthology.org/2024.semeval-1.107.bib
https://aclanthology.org/2024.semeval-1.107/
@inproceedings{shaik-etal-2024-feedforward, title = "{F}eed{F}orward at {S}em{E}val-2024 Task 10: Trigger and sentext-height enriched emotion analysis in multi-party conversations", author = "Shaik, Zuhair Hasan and Prasanna, Dhivya and Jahnavi, Enduri and Thippireddy, Rishi and Madhav, Vamsi and Saumya, Sunil and Biradar, Shankar", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.107", doi = "10.18653/v1/2024.semeval-1.107", pages = "745--756", abstract = "This paper reports on an innovative approach to Emotion Recognition in Conversation and Emotion Flip Reasoning for the SemEval-2024 competition with a specific focus on analyzing Hindi-English code-mixed language. By integrating Large Language Models (LLMs) with Instruction-based Fine-tuning and Quantized Low-Rank Adaptation (QLoRA), this study introduces innovative techniques like Sentext-height and advanced prompting strategies to navigate the intricacies of emotional analysis in code-mixed conversational data. The results of the proposed work effectively demonstrate its ability to overcome label bias and the complexities of code-mixed languages. Our team achieved ranks of 5, 3, and 3 in tasks 1, 2, and 3 respectively. This study contributes valuable insights and methods for enhancing emotion recognition models, underscoring the importance of continuous research in this field.", }
This paper reports on an innovative approach to Emotion Recognition in Conversation and Emotion Flip Reasoning for the SemEval-2024 competition with a specific focus on analyzing Hindi-English code-mixed language. By integrating Large Language Models (LLMs) with Instruction-based Fine-tuning and Quantized Low-Rank Adaptation (QLoRA), this study introduces innovative techniques like Sentext-height and advanced prompting strategies to navigate the intricacies of emotional analysis in code-mixed conversational data. The results of the proposed work effectively demonstrate its ability to overcome label bias and the complexities of code-mixed languages. Our team achieved ranks of 5, 3, and 3 in tasks 1, 2, and 3 respectively. This study contributes valuable insights and methods for enhancing emotion recognition models, underscoring the importance of continuous research in this field.
[ "Shaik, Zuhair Hasan", "Prasanna, Dhivya", "Jahnavi, Enduri", "Thippireddy, Rishi", "Madhav, Vamsi", "Saumya, Sunil", "Biradar, Shankar" ]
FeedForward at SemEval-2024 Task 10: Trigger and sentext-height enriched emotion analysis in multi-party conversations
semeval-1.107
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
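A minimal sketch of the QLoRA setup named in the FeedForward abstract above: a 4-bit quantized base LLM with low-rank adapters, ready for instruction fine-tuning. The base checkpoint, ranks, and target modules are assumptions, and the training loop is omitted.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

name = "meta-llama/Llama-2-7b-hf"  # assumed base LLM
bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(name, quantization_config=bnb,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapters are trainable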
https://aclanthology.org/2024.semeval-1.108.bib
https://aclanthology.org/2024.semeval-1.108/
@inproceedings{shi-etal-2024-ynu, title = "{YNU}-{HPCC} at {S}em{E}val-2024 Task 5: Regularized Legal-{BERT} for Legal Argument Reasoning Task in Civil Procedure", author = "Shi, Peng and Wang, Jin and Zhang, Xuejie", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.108", doi = "10.18653/v1/2024.semeval-1.108", pages = "757--762", abstract = "This paper describes the submission of team YNU-HPCC to SemEval-2024 for Task 5: The Legal Argument Reasoning Task in Civil Procedure. The task asks candidates the topic, questions, and answers, classifying whether a given candidate{'}s answer is correct (True) or incorrect (False). To make a sound judgment, we propose a system. This system is based on fine-tuning the Legal-BERT model that specializes in solving legal problems. Meanwhile,Regularized Dropout (R-Drop) and focal Loss were used in the model. R-Drop is used for data augmentation, and focal loss addresses data imbalances. Our system achieved relatively good results on the competition{'}s official leaderboard. The code of this paper is available at https://github.com/YNU-PengShi/SemEval-2024-Task5.", }
This paper describes the submission of team YNU-HPCC to SemEval-2024 for Task 5: The Legal Argument Reasoning Task in Civil Procedure. The task provides the topic, question, and a candidate answer, and requires classifying whether the candidate{'}s answer is correct (True) or incorrect (False). To make a sound judgment, we propose a system based on fine-tuning the Legal-BERT model, which specializes in solving legal problems. Meanwhile, Regularized Dropout (R-Drop) and focal loss were used in the model. R-Drop is used for data augmentation, and focal loss addresses data imbalances. Our system achieved relatively good results on the competition{'}s official leaderboard. The code of this paper is available at https://github.com/YNU-PengShi/SemEval-2024-Task5.
[ "Shi, Peng", "Wang, Jin", "Zhang, Xuejie" ]
YNU-HPCC at SemEval-2024 Task 5: Regularized Legal-BERT for Legal Argument Reasoning Task in Civil Procedure
semeval-1.108
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
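A minimal sketch of the two training tricks in the YNU-HPCC abstract above: focal loss for class imbalance, and R-Drop, which forwards the same batch twice through the dropout-enabled model and penalizes the KL divergence between the two predictions. A tiny random model stands in for Legal-BERT.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Down-weight easy examples: (1 - p_t)^gamma * cross-entropy."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)
    return ((1 - p_t) ** gamma * ce).mean()

def rdrop_loss(logits1, logits2, targets, alpha=4.0):
    """Focal loss on both passes + symmetric KL between them (R-Drop)."""
    task = focal_loss(logits1, targets) + focal_loss(logits2, targets)
    p, q = F.log_softmax(logits1, -1), F.log_softmax(logits2, -1)
    kl = 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                + F.kl_div(q, p, log_target=True, reduction="batchmean"))
    return task + alpha * kl

model = torch.nn.Sequential(torch.nn.Dropout(0.1), torch.nn.Linear(8, 2))
x, y = torch.randn(4, 8), torch.tensor([0, 1, 1, 0])
loss = rdrop_loss(model(x), model(x), y)  # two stochastic forward passes
loss.backward()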
https://aclanthology.org/2024.semeval-1.109.bib
https://aclanthology.org/2024.semeval-1.109/
@inproceedings{v-etal-2024-techssn, title = "{TECHSSN} at {S}em{E}val-2024 Task 10: {LSTM}-based Approach for Emotion Detection in Multilingual Code-Mixed Conversations", author = "V, Ravindran and Babu G, Shreejith and Jetti, Aashika and Sivanaiah, Rajalakshmi and Deborah, Angel and Thankanadar, Mirnalinee and R S, Milton", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.109", doi = "10.18653/v1/2024.semeval-1.109", pages = "763--769", abstract = "Emotion Recognition in Conversation (ERC) in the context of code-mixed Hindi-English interactions is a subtask addressed in SemEval-2024 as Task 10. We made our maiden attempt to solve the problem using natural language processing, machine learning and deep learning techniques, that perform well in properly assigning emotions to individual utterances from a predefined collection. The use of well-proven classifier such as Long Short Term Memory networks improve the model{'}s efficacy than the BERT and Glove based models. How-ever, difficulties develop in the subtle arena of emotion-flip reasoning in multi-party discussions, emphasizing the importance of specialized methodologies. Our findings shed light on the intricacies of emotion dynamics in code-mixed languages, pointing to potential areas for further research and refinement in multilingual understanding.", }
Emotion Recognition in Conversation (ERC) in the context of code-mixed Hindi-English interactions is a subtask addressed in SemEval-2024 as Task 10. We made our maiden attempt to solve the problem using natural language processing, machine learning and deep learning techniques that perform well in properly assigning emotions to individual utterances from a predefined collection. The use of well-proven classifiers such as Long Short-Term Memory networks improves the model{'}s efficacy compared to the BERT- and GloVe-based models. However, difficulties develop in the subtle arena of emotion-flip reasoning in multi-party discussions, emphasizing the importance of specialized methodologies. Our findings shed light on the intricacies of emotion dynamics in code-mixed languages, pointing to potential areas for further research and refinement in multilingual understanding.
[ "V, Ravindran", "Babu G, Shreejith", "Jetti, Aashika", "Sivanaiah, Rajalakshmi", "Deborah, Angel", "Thankanadar, Mirnalinee", "R S, Milton" ]
TECHSSN at SemEval-2024 Task 10: LSTM-based Approach for Emotion Detection in Multilingual Code-Mixed Conversations
semeval-1.109
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
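A minimal sketch of the LSTM-based classifier named in the TECHSSN title above: embed token ids, run a BiLSTM, and classify from the final time step. Vocabulary size, dimensions, and label count are assumptions.

import torch
import torch.nn as nn

class LSTMEmotionClassifier(nn.Module):
    """Embed tokens, run a BiLSTM, classify from the last time step."""
    def __init__(self, vocab_size=5000, emb=128, hidden=256, num_labels=7):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_labels)

    def forward(self, token_ids):
        out, _ = self.lstm(self.emb(token_ids))
        return self.head(out[:, -1])  # last time step of the BiLSTM

model = LSTMEmotionClassifier()
fake_batch = torch.randint(1, 5000, (2, 20))  # 2 utterances, 20 token ids
print(model(fake_batch).shape)  # torch.Size([2, 7])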
https://aclanthology.org/2024.semeval-1.110.bib
https://aclanthology.org/2024.semeval-1.110/
@inproceedings{guo-etal-2024-uir, title = "{UIR}-{ISC} at {S}em{E}val-2024 Task 3: Textual Emotion-Cause Pair Extraction in Conversations", author = "Guo, Hongyu and Zhang, Xueyao and Chen, Yiyang and Deng, Lin and Li, Binyang", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.110", doi = "10.18653/v1/2024.semeval-1.110", pages = "770--776", abstract = "The goal of Emotion Cause Pair Extraction (ECPE) is to explore the causes of emotion changes and what causes a certain emotion. This paper proposes a three-step learning approach for the task of Textual Emotion-Cause Pair Extraction in Conversations in SemEval-2024 Task 3, named ECSP. We firstly perform data preprocessing operations on the original dataset to construct negative samples. Secondly, we use a pre-trained model to construct token sequence representations with contextual information to obtain emotion prediction. Thirdly, we regard the textual emotion-cause pair extraction task as a machine reading comprehension task, and fine-tune two pre-trained models, RoBERTa and SpanBERT. Our results have achieved good results in the official rankings, ranking 3rd under the strict match with the Strict F1-score of 15.18{\%}, which further shows that our system has a robust performance.", }
The goal of Emotion Cause Pair Extraction (ECPE) is to explore the causes of emotion changes and what causes a certain emotion. This paper proposes a three-step learning approach for the task of Textual Emotion-Cause Pair Extraction in Conversations in SemEval-2024 Task 3, named ECSP. We firstly perform data preprocessing operations on the original dataset to construct negative samples. Secondly, we use a pre-trained model to construct token sequence representations with contextual information to obtain emotion prediction. Thirdly, we regard the textual emotion-cause pair extraction task as a machine reading comprehension task, and fine-tune two pre-trained models, RoBERTa and SpanBERT. Our results have achieved good results in the official rankings, ranking 3rd under the strict match with the Strict F1-score of 15.18{\%}, which further shows that our system has a robust performance.
[ "Guo, Hongyu", "Zhang, Xueyao", "Chen, Yiyang", "Deng, Lin", "Li, Binyang" ]
UIR-ISC at SemEval-2024 Task 3: Textual Emotion-Cause Pair Extraction in Conversations
semeval-1.110
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.111.bib
https://aclanthology.org/2024.semeval-1.111/
@inproceedings{liang-etal-2024-ynu, title = "{YNU}-{HPCC} at {S}em{E}val-2024 Task10: Pre-trained Language Model for Emotion Discovery and Reasoning its Flip in Conversation", author = "Liang, Chenyi and Wang, Jin and Zhang, Xuejie", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.111", doi = "10.18653/v1/2024.semeval-1.111", pages = "777--784", abstract = "This paper describes the application of fine-tuning pre-trained models for SemEval-2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF), which requires the prediction of emotions for each utterance in a conversation and the identification of sentences where an emotional flip occurs. This model is built on the DeBERTa transformer model and enhanced for emotion detection and flip reasoning in conversations. It employs specific separators for utterance processing and utilizes specific padding to handle variable-length inputs. Methods such as R-drop, back translation, and focalloss are also employed in the training of my model. The model achieved specific results on the competition{'}s official leaderboard. The code of this paper is available athttps://github.com/jiaowoobjiuhao/SemEval-2024-task10.", }
This paper describes the application of fine-tuning pre-trained models for SemEval-2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF), which requires the prediction of emotions for each utterance in a conversation and the identification of sentences where an emotional flip occurs. The model is built on the DeBERTa transformer model and enhanced for emotion detection and flip reasoning in conversations. It employs specific separators for utterance processing and utilizes specific padding to handle variable-length inputs. Methods such as R-Drop, back-translation, and focal loss are also employed in the training of our model. The model{'}s results are reported on the competition{'}s official leaderboard. The code of this paper is available at https://github.com/jiaowoobjiuhao/SemEval-2024-task10.
[ "Liang, Chenyi", "Wang, Jin", "Zhang, Xuejie" ]
YNU-HPCC at SemEval-2024 Task10: Pre-trained Language Model for Emotion Discovery and Reasoning its Flip in Conversation
semeval-1.111
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.112.bib
https://aclanthology.org/2024.semeval-1.112/
@inproceedings{zhang-etal-2024-ynu, title = "{YNU}-{HPCC} at {S}em{E}val-2024 Task 2: Applying {D}e{BERT}a-v3-large to Safe Biomedical Natural Language Inference for Clinical Trials", author = "Zhang, Rengui and Wang, Jin and Zhang, Xuejie", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.112", doi = "10.18653/v1/2024.semeval-1.112", pages = "785--791", abstract = "This paper describes the system for the YNU-HPCC team for SemEval2024 Task 2, focusing on Safe Biomedical Natural Language Inference for Clinical Trials. The core challenge of this task lies in discerning the textual entailment relationship between Clinical Trial Reports (CTR) and statements annotated by expert annotators, including the necessity to infer the relationships in texts subjected to semantic interventions accurately. Our approach leverages a fine-tuned DeBERTa-v3-large model augmented with supervised contrastive learning and back-translation techniques. Supervised contrastive learning aims to bolster classification ac-curacy while back-translation enriches the diversity and quality of our training corpus. Our method achieves a decent F1 score. However, the results also indicate a need for further en-hancements in the system{'}s capacity for deep semantic comprehension, highlighting areas for future refinement. The code of this paper is available at:https://github.com/RGTnuw/RG{\_}YNU-HPCC-at-Semeval2024-Task2.", }
This paper describes the system for the YNU-HPCC team for SemEval2024 Task 2, focusing on Safe Biomedical Natural Language Inference for Clinical Trials. The core challenge of this task lies in discerning the textual entailment relationship between Clinical Trial Reports (CTR) and statements annotated by expert annotators, including the necessity to infer the relationships in texts subjected to semantic interventions accurately. Our approach leverages a fine-tuned DeBERTa-v3-large model augmented with supervised contrastive learning and back-translation techniques. Supervised contrastive learning aims to bolster classification accuracy while back-translation enriches the diversity and quality of our training corpus. Our method achieves a decent F1 score. However, the results also indicate a need for further enhancements in the system{'}s capacity for deep semantic comprehension, highlighting areas for future refinement. The code of this paper is available at: https://github.com/RGTnuw/RG{\_}YNU-HPCC-at-Semeval2024-Task2.
[ "Zhang, Rengui", "Wang, Jin", "Zhang, Xuejie" ]
YNU-HPCC at SemEval-2024 Task 2: Applying DeBERTa-v3-large to Safe Biomedical Natural Language Inference for Clinical Trials
semeval-1.112
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
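A minimal sketch of the supervised contrastive term mentioned in the abstract above, in the standard SupCon formulation over pooled sentence embeddings: same-label pairs are pulled together, all others pushed apart. Batch construction and temperature are assumptions.

import torch
import torch.nn.functional as F

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss (Khosla et al., 2020) over one view."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.T / temperature
    eye = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(eye, float("-inf"))   # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(eye, 0.0)   # avoid -inf * 0 below
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    has_pos = pos.any(dim=1)  # only anchors with at least one positive
    loss = -(log_prob * pos)[has_pos].sum(1) / pos[has_pos].sum(1)
    return loss.mean()

z = torch.randn(8, 16, requires_grad=True)  # pooled sentence embeddings
y = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])  # entailment/contradiction ids
supcon_loss(z, y).backward()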
https://aclanthology.org/2024.semeval-1.113.bib
https://aclanthology.org/2024.semeval-1.113/
@inproceedings{li-etal-2024-ynu, title = "{YNU}-{HPCC} at {S}em{E}val-2024 Task 1: Self-Instruction Learning with Black-box Optimization for Semantic Textual Relatedness", author = "Li, Weijie and Wang, Jin and Zhang, Xuejie", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.113", doi = "10.18653/v1/2024.semeval-1.113", pages = "792--799", abstract = "This paper introduces a system designed for SemEval-2024 Task 1 that focuses on assessing Semantic Textual Relatedness (STR) between sentence pairs, including its multilingual version. STR, which evaluates the coherence of sentences, is distinct from Semantic Textual Similarity (STS). However, Large Language Models (LLMs) such as ERNIE-Bot-turbo, typically trained on STS data, often struggle to differentiate between the two concepts. To address this, we developed a self-instruction method that enhances their performance in distinguishing STR, particularly in cases with high STS but low STR. Beginning with a task description, the system generates new task instructions refined through human feedback. It then iteratively enhances these instructions by comparing them to the original and evaluating the differences. Utilizing the natural language comprehension abilities of LLMs, the system aims to produce progressively optimized instructions based on the resulting scores. Through our optimized instructions, ERNIE-Bot-turbo exceeds the performance of conventional models, achieving a score enhancement of 4 to 7{\%} on multilingual development datasets.", }
This paper introduces a system designed for SemEval-2024 Task 1 that focuses on assessing Semantic Textual Relatedness (STR) between sentence pairs, including its multilingual version. STR, which evaluates the coherence of sentences, is distinct from Semantic Textual Similarity (STS). However, Large Language Models (LLMs) such as ERNIE-Bot-turbo, typically trained on STS data, often struggle to differentiate between the two concepts. To address this, we developed a self-instruction method that enhances their performance in distinguishing STR, particularly in cases with high STS but low STR. Beginning with a task description, the system generates new task instructions refined through human feedback. It then iteratively enhances these instructions by comparing them to the original and evaluating the differences. Utilizing the natural language comprehension abilities of LLMs, the system aims to produce progressively optimized instructions based on the resulting scores. Through our optimized instructions, ERNIE-Bot-turbo exceeds the performance of conventional models, achieving a score enhancement of 4 to 7{\%} on multilingual development datasets.
[ "Li, Weijie", "Wang, Jin", "Zhang, Xuejie" ]
YNU-HPCC at SemEval-2024 Task 1: Self-Instruction Learning with Black-box Optimization for Semantic Textual Relatedness
semeval-1.113
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
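The self-instruction procedure described above treats the LLM as a black box whose task instruction is iteratively rewritten and re-scored. A schematic sketch of such a loop follows; `query_llm` and `score_on_dev` are hypothetical stand-ins for the ERNIE-Bot-turbo call and the dev-set Spearman evaluation, and the greedy hill-climbing strategy is an assumption rather than the paper's exact procedure.

```python
from typing import Callable, Tuple

def optimize_instruction(seed: str,
                         query_llm: Callable[[str], str],
                         score_on_dev: Callable[[str], float],
                         rounds: int = 5,
                         candidates_per_round: int = 4) -> Tuple[str, float]:
    """Black-box instruction search: ask the LLM to rewrite the current
    best instruction, keep whichever candidate scores best on the dev set."""
    best, best_score = seed, score_on_dev(seed)
    for _ in range(rounds):
        for _ in range(candidates_per_round):
            prompt = ("Rewrite this instruction so a model scores semantic "
                      "textual RELATEDNESS (not mere similarity) of sentence "
                      "pairs on a 0-1 scale:\n" + best)
            candidate = query_llm(prompt)
            score = score_on_dev(candidate)   # e.g., Spearman on the dev split
            if score > best_score:            # greedy hill climbing
                best, best_score = candidate, score
    return best, best_score
```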
https://aclanthology.org/2024.semeval-1.114.bib
https://aclanthology.org/2024.semeval-1.114/
@inproceedings{zhang-etal-2024-aadam, title = "{AA}da{M} at {S}em{E}val-2024 Task 1: Augmentation and Adaptation for Multilingual Semantic Textual Relatedness", author = "Zhang, Miaoran and Wang, Mingyang and Alabi, Jesujoba and Klakow, Dietrich", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.114", doi = "10.18653/v1/2024.semeval-1.114", pages = "800--810", abstract = "This paper presents our system developed for the SemEval-2024 Task 1: Semantic Textual Relatedness for African and Asian Languages. The shared task aims at measuring the semantic textual relatedness between pairs of sentences, with a focus on a range of under-represented languages. In this work, we propose using machine translation for data augmentation to address the low-resource challenge of limited training data. Moreover, we apply task-adaptive pre-training on unlabeled task data to bridge the gap between pre-training and task adaptation. For model training, we investigate both full fine-tuning and adapter-based tuning, and adopt the adapter framework for effective zero-shot cross-lingual transfer. We achieve competitive results in the shared task: our system performs the best among all ranked teams in both subtask A (supervised learning) and subtask C (cross-lingual transfer).", }
This paper presents our system developed for the SemEval-2024 Task 1: Semantic Textual Relatedness for African and Asian Languages. The shared task aims at measuring the semantic textual relatedness between pairs of sentences, with a focus on a range of under-represented languages. In this work, we propose using machine translation for data augmentation to address the low-resource challenge of limited training data. Moreover, we apply task-adaptive pre-training on unlabeled task data to bridge the gap between pre-training and task adaptation. For model training, we investigate both full fine-tuning and adapter-based tuning, and adopt the adapter framework for effective zero-shot cross-lingual transfer. We achieve competitive results in the shared task: our system performs the best among all ranked teams in both subtask A (supervised learning) and subtask C (cross-lingual transfer).
[ "Zhang, Miaoran", "Wang, Mingyang", "Alabi, Jesujoba", "Klakow, Dietrich" ]
AAdaM at SemEval-2024 Task 1: Augmentation and Adaptation for Multilingual Semantic Textual Relatedness
semeval-1.114
Poster
2404.01490
[ "https://github.com/uds-lsv/aadam" ]
-1
-1
-1
-1
0
[]
[]
[]
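The AAdaM record above mentions task-adaptive pre-training on unlabeled task data before fine-tuning. A minimal sketch of that step with Hugging Face's masked-language-modeling utilities follows; the checkpoint name, hyperparameters, and placeholder text are assumptions, not the team's actual configuration.

```python
import datasets
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

checkpoint = "xlm-roberta-base"  # placeholder multilingual backbone
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

task_sentences = ["..."]  # unlabeled task-domain sentences go here
ds = datasets.Dataset.from_dict({"text": task_sentences})
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
            batched=True, remove_columns=["text"])

# Random 15% masking turns the raw text into an MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tapt", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=ds,
    data_collator=collator,
)
trainer.train()  # adapt the encoder to the task domain before fine-tuning
```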
https://aclanthology.org/2024.semeval-1.115.bib
https://aclanthology.org/2024.semeval-1.115/
@inproceedings{venkatesh-etal-2024-bits, title = "{BITS} Pilani at {S}em{E}val-2024 Task 10: Fine-tuning {BERT} and Llama 2 for Emotion Recognition in Conversation", author = "Venkatesh, Dilip and Prasanjith, Pasunti and Sharma, Yashvardhan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.115", doi = "10.18653/v1/2024.semeval-1.115", pages = "811--815", abstract = "Emotion Recognition in Conversation (ERC) aims to assign an emotion to a dialogue in a conversation between people. The first subtask of the EDiReF shared task aims to assign emotions to a Hindi-English code-mixed conversation. For this, our team proposes a system to identify the emotion based on fine-tuning large language models on the MaSaC dataset. For our study, we have fine-tuned two LLMs, BERT and Llama 2, to perform sequence classification to identify the emotion of the text.", }
Emotion Recognition in Conversation (ERC) aims to assign an emotion to a dialogue in a conversation between people. The first subtask of the EDiReF shared task aims to assign emotions to a Hindi-English code-mixed conversation. For this, our team proposes a system to identify the emotion based on fine-tuning large language models on the MaSaC dataset. For our study, we have fine-tuned two LLMs, BERT and Llama 2, to perform sequence classification to identify the emotion of the text.
[ "Venkatesh, Dilip", "Prasanjith, Pasunti", "Sharma, Yashvardhan" ]
BITS Pilani at SemEval-2024 Task 10: Fine-tuning BERT and Llama 2 for Emotion Recognition in Conversation
semeval-1.115
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
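Fine-tuning an encoder for emotion recognition, as in the BITS Pilani Task 10 record above, reduces to sequence classification over utterances. A minimal sketch follows; the label set, checkpoint, and example utterance are illustrative assumptions, and the untrained head here will predict nonsense until fine-tuned on MaSaC-style data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed emotion inventory; the shared task's actual label set may differ.
emotions = ["anger", "disgust", "fear", "joy", "neutral", "sadness", "surprise"]

checkpoint = "bert-base-multilingual-cased"  # placeholder encoder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=len(emotions))

utterance = "yeh toh bahut funny hai yaar"  # Hindi-English code-mixed input
inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(emotions[logits.argmax(dim=-1).item()])  # predicted emotion class
```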
https://aclanthology.org/2024.semeval-1.116.bib
https://aclanthology.org/2024.semeval-1.116/
@inproceedings{venkatesh-sharma-2024-bits, title = "{BITS} Pilani at {S}em{E}val-2024 Task 9: Prompt Engineering with {GPT}-4 for Solving Brainteasers", author = "Venkatesh, Dilip and Sharma, Yashvardhan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.116", doi = "10.18653/v1/2024.semeval-1.116", pages = "816--820", abstract = "Solving brainteasers is a task that requires complex reasoning prowess. The increase in research in natural language processing has led to the development of massive large language models with billions (or trillions) of parameters that are able to solve difficult questions due to their advanced reasoning capabilities. The SemEval BRAINTEASER shared task consists of sentence and word puzzles along with options containing the answer for each puzzle. Our team uses OpenAI{'}s GPT-4 model along with prompt engineering to solve these brainteasers.", }
Solving brainteasers is a task that requires complex reasoning prowess. The increase in research in natural language processing has led to the development of massive large language models with billions (or trillions) of parameters that are able to solve difficult questions due to their advanced reasoning capabilities. The SemEval BRAINTEASER shared task consists of sentence and word puzzles along with options containing the answer for each puzzle. Our team uses OpenAI{'}s GPT-4 model along with prompt engineering to solve these brainteasers.
[ "Venkatesh, Dilip", "Sharma, Yashvardhan" ]
BITS Pilani at SemEval-2024 Task 9: Prompt Engineering with GPT-4 for Solving Brainteasers
semeval-1.116
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.117.bib
https://aclanthology.org/2024.semeval-1.117/
@inproceedings{r-etal-2024-bridging, title = "Bridging Numerical Reasoning and Headline Generation for Enhanced Language Models", author = "R, Vaishnavi and T, Srimathi and S, Aarthi and V, Harini", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.117", doi = "10.18653/v1/2024.semeval-1.117", pages = "821--828", abstract = "Headline generation has become a vital tool in the dynamic world of digital media, combining creativity and scientific rigor to engage readers while maintaining accuracy. However, accuracy is currently hampered by numerical integration problems, which affect both abstractive and extractive approaches. Sentences extracted from the original material are typically too short to accurately represent complex information. Our research introduces an innovative two-step training technique to tackle these problems, emphasizing the significance of enhanced numerical reasoning in headline development. Promising advances are achieved by utilizing the text-to-text processing capabilities of the T5 model and advanced NLP approaches like BERT and RoBERTa. With the help of external resources and our dataset, our Flan-T5 model has been improved to demonstrate how these methods may be used to overcome numerical integration issues and improve the accuracy of headline production.", }
Headline generation has become a vital tool in the dynamic world of digital media, combining creativity and scientific rigor to engage readers while maintaining accuracy. However, accuracy is currently hampered by numerical integration problems, which affect both abstractive and extractive approaches. Sentences extracted from the original material are typically too short to accurately represent complex information. Our research introduces an innovative two-step training technique to tackle these problems, emphasizing the significance of enhanced numerical reasoning in headline development. Promising advances are achieved by utilizing the text-to-text processing capabilities of the T5 model and advanced NLP approaches like BERT and RoBERTa. With the help of external resources and our dataset, our Flan-T5 model has been improved to demonstrate how these methods may be used to overcome numerical integration issues and improve the accuracy of headline production.
[ "R, Vaishnavi", "T, Srimathi", "S, Aarthi", "V, Harini" ]
Bridging Numerical Reasoning and Headline Generation for Enhanced Language Models
semeval-1.117
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.118.bib
https://aclanthology.org/2024.semeval-1.118/
@inproceedings{pickard-do-2024-tuesents, title = "{T}ue{S}ents at {S}em{E}val-2024 Task 8: Predicting the Shift from Human Authorship to Machine-generated Output in a Mixed Text", author = "Pickard, Valentin and Do, Hoa", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.118", doi = "10.18653/v1/2024.semeval-1.118", pages = "829--832", abstract = "This paper describes our approach and results for the SemEval-2024 task of identifying the token index in a mixed text where a switch from human authorship to machine-generated text occurs. We explore two BiLSTMs: one over sentence feature vectors to predict the index of the sentence containing such a change, and another over character embeddings of the text. As sentence features, we compute token count, mean token length, standard deviation of token length, counts of punctuation and space characters, various readability scores, and word frequency class and word part-of-speech class counts for each sentence. The evaluation is performed on mean absolute error (MAE) between the predicted and actual boundary token index. While our competition results were notably below the baseline, there may still be useful aspects to our approach.", }
This paper describes our approach and results for the SemEval-2024 task of identifying the token index in a mixed text where a switch from human authorship to machine-generated text occurs. We explore two BiLSTMs: one over sentence feature vectors to predict the index of the sentence containing such a change, and another over character embeddings of the text. As sentence features, we compute token count, mean token length, standard deviation of token length, counts of punctuation and space characters, various readability scores, and word frequency class and word part-of-speech class counts for each sentence. The evaluation is performed on mean absolute error (MAE) between the predicted and actual boundary token index. While our competition results were notably below the baseline, there may still be useful aspects to our approach.
[ "Pickard, Valentin", "Do, Hoa" ]
TueSents at SemEval-2024 Task 8: Predicting the Shift from Human Authorship to Machine-generated Output in a Mixed Text
semeval-1.118
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
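The TueSents record above lists the hand-crafted sentence features fed to the first BiLSTM. A minimal sketch of computing a few of them follows; the exact tokenization and feature order in the paper may differ.

```python
import re
import statistics

def sentence_features(sentence):
    """A few of the per-sentence features named in the abstract: token count,
    mean and standard deviation of token length, punctuation and space counts."""
    tokens = sentence.split()
    lengths = [len(t) for t in tokens] or [0]
    return [
        float(len(tokens)),
        statistics.mean(lengths),
        statistics.pstdev(lengths),
        float(len(re.findall(r"[^\w\s]", sentence))),  # punctuation characters
        float(sentence.count(" ")),                    # space characters
    ]

# One feature vector per sentence; the sequence would feed the boundary BiLSTM.
vectors = [sentence_features(s) for s in
           ["A human wrote this opening.", "Then a model continued it!"]]
```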
https://aclanthology.org/2024.semeval-1.119.bib
https://aclanthology.org/2024.semeval-1.119/
@inproceedings{yenumulapalli-etal-2024-techssn1, title = "{TECHSSN}1 at {S}em{E}val-2024 Task 10: Emotion Classification in {H}indi-{E}nglish Code-Mixed Dialogue using Transformer-based Models", author = "Yenumulapalli, Venkatasai Ojus and Premnath, Pooja and Mohankumar, Parthiban and Sivanaiah, Rajalakshmi and Deborah, Angel", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.119", doi = "10.18653/v1/2024.semeval-1.119", pages = "833--838", abstract = "The increase in the popularity of code-mixed languages has resulted in the need to engineer language models for them. Unlike pure languages, code-mixed languages lack clear grammatical structures, leading to ambiguous sentence constructions. This ambiguity presents significant challenges for natural language processing tasks, including syntactic parsing, word sense disambiguation, and language identification. This paper focuses on emotion recognition of conversations in Hinglish, a mix of Hindi and English, as part of Task 10 of SemEval 2024. The proposed approach explores the usage of standard machine learning models like SVM, MNB and RF, as well as BERT-based models for Hindi-English code-mixed data, namely HingBERT, Hing mBERT and HingRoBERTa, for subtask A.", }
The increase in the popularity of code-mixed languages has resulted in the need to engineer language models for them. Unlike pure languages, code-mixed languages lack clear grammatical structures, leading to ambiguous sentence constructions. This ambiguity presents significant challenges for natural language processing tasks, including syntactic parsing, word sense disambiguation, and language identification. This paper focuses on emotion recognition of conversations in Hinglish, a mix of Hindi and English, as part of Task 10 of SemEval 2024. The proposed approach explores the usage of standard machine learning models like SVM, MNB and RF, as well as BERT-based models for Hindi-English code-mixed data, namely HingBERT, Hing mBERT and HingRoBERTa, for subtask A.
[ "Yenumulapalli, Venkatasai Ojus", "Premnath, Pooja", "Mohankumar, Parthiban", "Sivanaiah, Rajalakshmi", "Deborah, Angel" ]
TECHSSN1 at SemEval-2024 Task 10: Emotion Classification in Hindi-English Code-Mixed Dialogue using Transformer-based Models
semeval-1.119
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.120.bib
https://aclanthology.org/2024.semeval-1.120/
@inproceedings{allen-etal-2024-shroom, title = "{SHROOM}-{INDE}lab at {S}em{E}val-2024 Task 6: Zero- and Few-Shot {LLM}-Based Classification for Hallucination Detection", author = "Allen, Bradley and Polat, Fina and Groth, Paul", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.120", doi = "10.18653/v1/2024.semeval-1.120", pages = "839--844", abstract = "We describe the University of Amsterdam Intelligent Data Engineering Lab team{'}s entry for the SemEval-2024 Task 6 competition. The SHROOM-INDElab system builds on previous work on using prompt programming and in-context learning with large language models (LLMs) to build classifiers for hallucination detection, and extends that work through the incorporation of context-specific definitions of task, role, and target concept, and automated generation of examples for use in a few-shot prompting approach. The resulting system achieved fourth-best and sixth-best performance in the model-agnostic and model-aware tracks for Task 6, respectively, and evaluation using the validation sets showed that the system{'}s classification decisions were consistent with those of the crowdsourced human labelers. We further found that a zero-shot approach provided better accuracy than a few-shot approach using automatically generated examples. Code for the system described in this paper is available on GitHub.", }
We describe the University of Amsterdam Intelligent Data Engineering Lab team{'}s entry for the SemEval-2024 Task 6 competition. The SHROOM-INDElab system builds on previous work on using prompt programming and in-context learning with large language models (LLMs) to build classifiers for hallucination detection, and extends that work through the incorporation of context-specific definitions of task, role, and target concept, and automated generation of examples for use in a few-shot prompting approach. The resulting system achieved fourth-best and sixth-best performance in the model-agnostic and model-aware tracks for Task 6, respectively, and evaluation using the validation sets showed that the system{'}s classification decisions were consistent with those of the crowdsourced human labelers. We further found that a zero-shot approach provided better accuracy than a few-shot approach using automatically generated examples. Code for the system described in this paper is available on GitHub.
[ "Allen, Bradley", "Polat, Fina", "Groth, Paul" ]
SHROOM-INDElab at SemEval-2024 Task 6: Zero- and Few-Shot LLM-Based Classification for Hallucination Detection
semeval-1.120
Poster
2404.03732
[ "https://github.com/bradleypallen/shroom" ]
-1
-1
-1
-1
0
[]
[]
[]
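The zero-shot classification that worked best for SHROOM-INDElab can be sketched as a single judgement prompt per example. The following is illustrative only: the model name, prompt wording, and label strings are assumptions, not the team's actual prompts (their real implementation is on GitHub).

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(source: str, generated: str) -> str:
    """Zero-shot verdict on whether `generated` hallucinates w.r.t. `source`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice
        temperature=0,
        messages=[
            {"role": "system",
             "content": ("You check model output against its source text. "
                         "Reply with exactly 'Hallucination' or "
                         "'Not Hallucination'.")},
            {"role": "user",
             "content": f"Source: {source}\nOutput: {generated}\nVerdict:"},
        ],
    )
    return response.choices[0].message.content.strip()

print(judge("The capital of France is Paris.", "France's capital is Lyon."))
```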
https://aclanthology.org/2024.semeval-1.121.bib
https://aclanthology.org/2024.semeval-1.121/
@inproceedings{rodero-pena-etal-2024-i2c, title = "{I}2{C}-{H}uelva at {S}em{E}val-2024 Task 8: Boosting {AI}-Generated Text Detection with Multimodal Models and Optimized Ensembles", author = "Rodero Pe{\~n}a, Alberto and Mata Vazquez, Jacinto and Pach{\'o}n {\'A}lvarez, Victoria", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.121", doi = "10.18653/v1/2024.semeval-1.121", pages = "845--852", abstract = "With the rise of AI-based text generators, the need for effective detection mechanisms has become paramount. This paper presents new techniques for building adaptable models and optimizing training aspects for identifying synthetically produced texts across multiple generators and domains. The study, divided into binary and multilabel classification tasks, avoids overfitting through strategic training data limitation. A key innovation is the incorporation of multimodal models that blend numerical text features with conventional NLP approaches. The work also delves into optimizing ensemble model combinations via various voting methods, focusing on accuracy as the official metric. The optimized ensemble strategy demonstrates significant efficacy in both subtasks, highlighting the potential of multimodal and ensemble methods in enhancing the robustness of detection systems against emerging text generators.", }
With the rise of AI-based text generators, the need for effective detection mechanisms has become paramount. This paper presents new techniques for building adaptable models and optimizing training aspects for identifying synthetically produced texts across multiple generators and domains. The study, divided into binary and multilabel classification tasks, avoids overfitting through strategic training data limitation. A key innovation is the incorporation of multimodal models that blend numerical text features with conventional NLP approaches. The work also delves into optimizing ensemble model combinations via various voting methods, focusing on accuracy as the official metric. The optimized ensemble strategy demonstrates significant efficacy in both subtasks, highlighting the potential of multimodal and ensemble methods in enhancing the robustness of detection systems against emerging text generators.
[ "Rodero Pe{\\~n}a, Alberto", "Mata Vazquez, Jacinto", "Pach{\\'o}n {\\'A}lvarez, Victoria" ]
I2C-Huelva at SemEval-2024 Task 8: Boosting AI-Generated Text Detection with Multimodal Models and Optimized Ensembles
semeval-1.121
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
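Ensembling by voting, as in the I2C-Huelva record above, can be sketched as a weighted average of per-model class probabilities (soft voting). The weights and toy numbers below are illustrative assumptions, not the team's tuned configuration.

```python
import numpy as np

def soft_vote(prob_matrices, weights=None):
    """Average class-probability matrices from several detectors.

    prob_matrices: list of (n_samples, n_classes) arrays, one per model.
    weights:       optional per-model weights; uniform if omitted.
    """
    stacked = np.stack(prob_matrices)        # (n_models, n_samples, n_classes)
    w = np.ones(len(prob_matrices)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    avg = np.tensordot(w, stacked, axes=1)   # weighted mean over models
    return avg.argmax(axis=1)                # predicted class per sample

# Two toy detectors disagreeing on the second sample:
preds = soft_vote([np.array([[0.9, 0.1], [0.4, 0.6]]),
                   np.array([[0.8, 0.2], [0.7, 0.3]])])
```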
https://aclanthology.org/2024.semeval-1.122.bib
https://aclanthology.org/2024.semeval-1.122/
@inproceedings{zedda-etal-2024-snarci, title = "Snarci at {S}em{E}val-2024 Task 4: Themis Model for Binary Classification of Memes", author = "Zedda, Luca and Perniciano, Alessandra and Loddo, Andrea and Di Ruberto, Cecilia and Sanguinetti, Manuela and Atzori, Maurizio", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.122", doi = "10.18653/v1/2024.semeval-1.122", pages = "853--858", abstract = "This paper introduces an approach developed for multimodal meme analysis, specifically targeting the identification of persuasion techniques embedded within memes. Our methodology integrates Large Language Models (LLMs) and contrastive learning image encoders to discern the presence of persuasive elements in memes across diverse platforms. By capitalizing on the contextual understanding facilitated by LLMs and the discriminative power of contrastive learning for image encoding, our framework provides a robust solution for detecting and classifying memes with persuasion techniques. The system was used in Task 4 of SemEval-2024, specifically for Subtask 2b (binary classification of the presence of persuasion techniques). It showed promising results overall, achieving a Macro-F1 of 0.7986 on the English test data (i.e., the language the system was trained on) and Macro-F1 scores of 0.66777, 0.47917, and 0.5554, respectively, on the other three {``}surprise{''} languages proposed by the task organizers, i.e., Bulgarian, North Macedonian and Arabic. The paper provides an overview of the system, along with a discussion of the results obtained and its main limitations.", }
This paper introduces an approach developed for multimodal meme analysis, specifically targeting the identification of persuasion techniques embedded within memes. Our methodology integrates Large Language Models (LLMs) and contrastive learning image encoders to discern the presence of persuasive elements in memes across diverse platforms. By capitalizing on the contextual understanding facilitated by LLMs and the discriminative power of contrastive learning for image encoding, our framework provides a robust solution for detecting and classifying memes with persuasion techniques. The system was used in Task 4 of SemEval-2024, specifically for Subtask 2b (binary classification of the presence of persuasion techniques). It showed promising results overall, achieving a Macro-F1 of 0.7986 on the English test data (i.e., the language the system was trained on) and Macro-F1 scores of 0.66777, 0.47917, and 0.5554, respectively, on the other three {``}surprise{''} languages proposed by the task organizers, i.e., Bulgarian, North Macedonian and Arabic. The paper provides an overview of the system, along with a discussion of the results obtained and its main limitations.
[ "Zedda, Luca", "Perniciano, Aless", "ra", "Loddo, Andrea", "Di Ruberto, Cecilia", "Sanguinetti, Manuela", "Atzori, Maurizio" ]
Snarci at SemEval-2024 Task 4: Themis Model for Binary Classification of Memes
semeval-1.122
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.123.bib
https://aclanthology.org/2024.semeval-1.123/
@inproceedings{shanto-etal-2024-fired, title = "{F}ired{\_}from{\_}{NLP} at {S}em{E}val-2024 Task 1: Towards Developing Semantic Textual Relatedness Predictor - A Transformer-based Approach", author = "Shanto, Anik and Chowdhury, Md. Sajid Alam and Chowdhury, Mostak and Das, Udoy and Murad, Hasan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.123", doi = "10.18653/v1/2024.semeval-1.123", pages = "859--864", abstract = "Predicting semantic textual relatedness (STR) is one of the most challenging tasks in the field of natural language processing. Semantic relatedness prediction has real-life practical applications while developing search engines and modern text generation systems. A shared task on semantic textual relatedness has been organized by SemEval 2024, where the organizer has proposed a dataset on semantic textual relatedness in the English language under Shared Task 1 (Track A3). In this work, we have developed models to predict semantic textual relatedness between pairs of English sentences by training and evaluating various transformer-based model architectures, deep learning, and machine learning methods using the shared dataset. Moreover, we have utilized existing semantic textual relatedness datasets such as the stsb multilingual benchmark dataset, the SemEval 2014 Task 1 dataset, and the SemEval 2015 Task 2 dataset. Our findings show that in the SemEval 2024 Shared Task 1 (Track A3), the fine-tuned-STS-BERT model performed the best, scoring 0.8103 on the test set and placing 25th out of all participants.", }
Predicting semantic textual relatedness (STR) is one of the most challenging tasks in the field of natural language processing. Semantic relatedness prediction has real-life practical applications while developing search engines and modern text generation systems. A shared task on semantic textual relatedness has been organized by SemEval 2024, where the organizer has proposed a dataset on semantic textual relatedness in the English language under Shared Task 1 (Track A3). In this work, we have developed models to predict semantic textual relatedness between pairs of English sentences by training and evaluating various transformer-based model architectures, deep learning, and machine learning methods using the shared dataset. Moreover, we have utilized existing semantic textual relatedness datasets such as the stsb multilingual benchmark dataset, the SemEval 2014 Task 1 dataset, and the SemEval 2015 Task 2 dataset. Our findings show that in the SemEval 2024 Shared Task 1 (Track A3), the fine-tuned-STS-BERT model performed the best, scoring 0.8103 on the test set and placing 25th out of all participants.
[ "Shanto, Anik", "Chowdhury, Md. Sajid Alam", "Chowdhury, Mostak", "Das, Udoy", "Murad, Hasan" ]
Fired_from_NLP at SemEval-2024 Task 1: Towards Developing Semantic Textual Relatedness Predictor - A Transformer-based Approach
semeval-1.123
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.124.bib
https://aclanthology.org/2024.semeval-1.124/
@inproceedings{venkatesh-raman-2024-bits, title = "{BITS} Pilani at {S}em{E}val-2024 Task 1: Using text-embedding-3-large and {L}a{BSE} embeddings for Semantic Textual Relatedness", author = "Venkatesh, Dilip and Raman, Sundaresan", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.124", doi = "10.18653/v1/2024.semeval-1.124", pages = "865--868", abstract = "Semantic relatedness of a pair of texts (sentences or words) is the degree to which their meanings are close. Track A of the Semantic Textual Relatedness shared task aims to find the semantic relatedness for the English language, along with multiple other low-resource languages, using pretrained language models. We propose a system to predict the relatedness score of a textual pair, evaluated with the Spearman coefficient, using pretrained embedding models like text-embedding-3-large and LaBSE.", }
Semantic relatedness of a pair of texts (sentences or words) is the degree to which their meanings are close. Track A of the Semantic Textual Relatedness shared task aims to find the semantic relatedness for the English language, along with multiple other low-resource languages, using pretrained language models. We propose a system to predict the relatedness score of a textual pair, evaluated with the Spearman coefficient, using pretrained embedding models like text-embedding-3-large and LaBSE.
[ "Venkatesh, Dilip", "Raman, Sundaresan" ]
BITS Pilani at SemEval-2024 Task 1: Using text-embedding-3-large and LaBSE embeddings for Semantic Textual Relatedness
semeval-1.124
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
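The BITS Pilani Task 1 record above scores relatedness with off-the-shelf embeddings. A minimal sketch using the public LaBSE checkpoint follows; the example pairs and gold labels are made up, and text-embedding-3-large would be queried through OpenAI's embeddings API instead.

```python
import numpy as np
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")

pairs = [("A man is playing a guitar.", "Someone strums an instrument."),
         ("The sky is clear today.", "Stock prices fell sharply.")]
a = model.encode([p[0] for p in pairs])
b = model.encode([p[1] for p in pairs])

# Cosine similarity per pair serves as the relatedness prediction.
scores = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) *
                                  np.linalg.norm(b, axis=1))

gold = [0.8, 0.1]  # hypothetical gold relatedness labels
print(spearmanr(scores, gold).correlation)  # the shared task's ranking metric
```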
https://aclanthology.org/2024.semeval-1.125.bib
https://aclanthology.org/2024.semeval-1.125/
@inproceedings{rykov-etal-2024-smurfcat, title = "{S}murf{C}at at {S}em{E}val-2024 Task 6: Leveraging Synthetic Data for Hallucination Detection", author = "Rykov, Elisei and Shishkina, Yana and Petrushina, Ksenia and Titova, Ksenia and Petrakov, Sergey and Panchenko, Alexander", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.125", doi = "10.18653/v1/2024.semeval-1.125", pages = "869--880", abstract = "In this paper, we present our novel systems developed for the SemEval-2024 hallucination detection task. Our investigation spans a range of strategies to compare model predictions with reference standards, encompassing diverse baselines, the refinement of pre-trained encoders through supervised learning, and ensemble approaches utilizing several high-performing models. Through these explorations, we introduce three distinct methods that exhibit strong performance metrics. To amplify our training data, we generate additional training samples from the unlabelled training subset. Furthermore, we provide a detailed comparative analysis of our approaches. Notably, our premier method achieved a commendable 9th place in the competition{'}s model-agnostic track and 20th place in the model-aware track, highlighting its effectiveness and potential.", }
In this paper, we present our novel systems developed for the SemEval-2024 hallucination detection task. Our investigation spans a range of strategies to compare model predictions with reference standards, encompassing diverse baselines, the refinement of pre-trained encoders through supervised learning, and ensemble approaches utilizing several high-performing models. Through these explorations, we introduce three distinct methods that exhibit strong performance metrics. To amplify our training data, we generate additional training samples from the unlabelled training subset. Furthermore, we provide a detailed comparative analysis of our approaches. Notably, our premier method achieved a commendable 9th place in the competition{'}s model-agnostic track and 20th place in the model-aware track, highlighting its effectiveness and potential.
[ "Rykov, Elisei", "Shishkina, Yana", "Petrushina, Ksenia", "Titova, Ksenia", "Petrakov, Sergey", "Panchenko, Alex", "er" ]
SmurfCat at SemEval-2024 Task 6: Leveraging Synthetic Data for Hallucination Detection
semeval-1.125
Poster
2404.06137
[ "https://github.com/s-nlp/shroom" ]
https://huggingface.co/papers/2404.06137
1
0
0
6
1
[]
[]
[]
https://aclanthology.org/2024.semeval-1.126.bib
https://aclanthology.org/2024.semeval-1.126/
@inproceedings{li-etal-2024-ustcctsu, title = "{USTCCTSU} at {S}em{E}val-2024 Task 1: Reducing Anisotropy for Cross-lingual Semantic Textual Relatedness Task", author = "Li, Jianjian and Liang, Shengwei and Liao, Yong and Deng, Hongping and Yu, Haiyang", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.126", doi = "10.18653/v1/2024.semeval-1.126", pages = "881--887", abstract = "The cross-lingual semantic textual relatedness task is an important research task that addresses challenges in cross-lingual communication and text understanding. It helps establish semantic connections between different languages, which is crucial for downstream tasks like machine translation, multilingual information retrieval, and cross-lingual text understanding. Based on extensive comparative experiments, we choose XLM-R-base as our base model and use pre-trained sentence representations based on whitening to reduce anisotropy. Additionally, for the given training data, we design a delicate data filtering method to alleviate the curse of multilingualism. With our approach, we achieve the 2nd-best score in Spanish, the 3rd-best in Indonesian, and multiple entries in the top ten results in the competition{'}s Track C. We further conduct a comprehensive analysis to inspire future research aimed at improving performance on cross-lingual tasks.", }
The cross-lingual semantic textual relatedness task is an important research task that addresses challenges in cross-lingual communication and text understanding. It helps establish semantic connections between different languages, which is crucial for downstream tasks like machine translation, multilingual information retrieval, and cross-lingual text understanding. Based on extensive comparative experiments, we choose XLM-R-base as our base model and use pre-trained sentence representations based on whitening to reduce anisotropy. Additionally, for the given training data, we design a delicate data filtering method to alleviate the curse of multilingualism. With our approach, we achieve the 2nd-best score in Spanish, the 3rd-best in Indonesian, and multiple entries in the top ten results in the competition{'}s Track C. We further conduct a comprehensive analysis to inspire future research aimed at improving performance on cross-lingual tasks.
[ "Li, Jianjian", "Liang, Shengwei", "Liao, Yong", "Deng, Hongping", "Yu, Haiyang" ]
USTCCTSU at SemEval-2024 Task 1: Reducing Anisotropy for Cross-lingual Semantic Textual Relatedness Task
semeval-1.126
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
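The whitening mentioned in the USTCCTSU record above is, presumably, the standard transform that maps sentence embeddings to zero mean and identity covariance (as in Su et al., 2021, "Whitening Sentence Representations"). A NumPy sketch follows; the retained dimensionality k and the toy data are assumptions.

```python
import numpy as np

def whiten(embeddings, k=None):
    """Map embeddings to zero mean and identity covariance:
    x -> (x - mu) @ W, with W = U diag(1/sqrt(S)) from the SVD of cov."""
    mu = embeddings.mean(axis=0, keepdims=True)
    cov = np.cov((embeddings - mu).T)
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S))
    if k is not None:
        W = W[:, :k]               # optional dimensionality reduction
    return (embeddings - mu) @ W, mu, W

vecs = np.random.randn(2000, 64)   # stand-in for XLM-R sentence embeddings
white, mu, W = whiten(vecs, k=32)
# Cosine similarities on `white` are no longer dominated by a few
# high-variance (anisotropic) directions of the embedding space.
```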
https://aclanthology.org/2024.semeval-1.127.bib
https://aclanthology.org/2024.semeval-1.127/
@inproceedings{roll-graham-2024-greybox, title = "{G}rey{B}ox at {S}em{E}val-2024 Task 4: Progressive Fine-tuning (for Multilingual Detection of Propaganda Techniques)", author = "Roll, Nathan and Graham, Calbert", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.127", doi = "10.18653/v1/2024.semeval-1.127", pages = "888--893", abstract = "We introduce a novel fine-tuning approach that effectively primes transformer-based language models to detect rhetorical and psychological techniques within internet memes. Our end-to-end system retains multilingual and task-general capacities from pretraining stages while adapting to domain intricacies using an increasingly targeted set of examples{--} achieving competitive rankings across English, Bulgarian, and North Macedonian. We find that our monolingual post-training regimen is sufficient to improve task performance in 17 language varieties beyond equivalent zero-shot capabilities despite English-only data. To promote further research, we release our code publicly on GitHub.", }
We introduce a novel fine-tuning approach that effectively primes transformer-based language models to detect rhetorical and psychological techniques within internet memes. Our end-to-end system retains multilingual and task-general capacities from pretraining stages while adapting to domain intricacies using an increasingly targeted set of examples{--} achieving competitive rankings across English, Bulgarian, and North Macedonian. We find that our monolingual post-training regimen is sufficient to improve task performance in 17 language varieties beyond equivalent zero-shot capabilities despite English-only data. To promote further research, we release our code publicly on GitHub.
[ "Roll, Nathan", "Graham, Calbert" ]
GreyBox at SemEval-2024 Task 4: Progressive Fine-tuning (for Multilingual Detection of Propaganda Techniques)
semeval-1.127
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.128.bib
https://aclanthology.org/2024.semeval-1.128/
@inproceedings{malaysha-etal-2024-nlu, title = "{NLU}-{STR} at {S}em{E}val-2024 Task 1: Generative-based Augmentation and Encoder-based Scoring for Semantic Textual Relatedness", author = "Malaysha, Sanad and Jarrar, Mustafa and Khalilia, Mohammed", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.128", doi = "10.18653/v1/2024.semeval-1.128", pages = "894--901", abstract = "Semantic textual relatedness is a broader concept of semantic similarity. It measures the extent to which two chunks of text convey similar meaning or topics, or share related concepts or contexts. This notion of relatedness can be applied in various applications, such as document clustering and summarizing. SemRel-2024, a shared task in SemEval-2024, aims at reducing the gap in the semantic relatedness task by providing datasets for fourteen languages and dialects including Arabic. This paper reports on our participation in Track A (Algerian and Moroccan dialects) and Track B (Modern Standard Arabic). A BERT-based model is augmented and fine-tuned for regression scoring in supervised track (A), while BERT-based cosine similarity is employed for unsupervised track (B). Our system ranked 1st in SemRel-2024 for MSA with a Spearman correlation score of 0.49. We ranked 5th for Moroccan and 12th for Algerian with scores of 0.83 and 0.53, respectively.", }
Semantic textual relatedness is a broader concept of semantic similarity. It measures the extent to which two chunks of text convey similar meaning or topics, or share related concepts or contexts. This notion of relatedness can be applied in various applications, such as document clustering and summarizing. SemRel-2024, a shared task in SemEval-2024, aims at reducing the gap in the semantic relatedness task by providing datasets for fourteen languages and dialects including Arabic. This paper reports on our participation in Track A (Algerian and Moroccan dialects) and Track B (Modern Standard Arabic). A BERT-based model is augmented and fine-tuned for regression scoring in supervised track (A), while BERT-based cosine similarity is employed for unsupervised track (B). Our system ranked 1st in SemRel-2024 for MSA with a Spearman correlation score of 0.49. We ranked 5th for Moroccan and 12th for Algerian with scores of 0.83 and 0.53, respectively.
[ "Malaysha, Sanad", "Jarrar, Mustafa", "Khalilia, Mohammed" ]
NLU-STR at SemEval-2024 Task 1: Generative-based Augmentation and Encoder-based Scoring for Semantic Textual Relatedness
semeval-1.128
Poster
2405.00659
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.semeval-1.129.bib
https://aclanthology.org/2024.semeval-1.129/
@inproceedings{kumar-kumar-2024-scalar, title = "sca{LAR} {S}em{E}val-2024 Task 1: Semantic Textual Relatedness for {E}nglish", author = "Kumar, Anand and Kumar, Hemanth", editor = {Ojha, Atul Kr. and Do{\u{g}}ru{\"o}z, A. Seza and Tayyar Madabushi, Harish and Da San Martino, Giovanni and Rosenthal, Sara and Ros{\'a}, Aiala}, booktitle = "Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.semeval-1.129", doi = "10.18653/v1/2024.semeval-1.129", pages = "902--906", abstract = "This study investigates Semantic Textual Relatedness (STR) within Natural Language Processing (NLP) through experiments conducted on a dataset from the SemEval-2024 STR task. The dataset comprises train instances with three features (PairID, Text, and Score) and test instances with two features (PairID and Text), where sentence pairs are separated by '\n{'} in the Text column. Using BERT (sentence transformers pipeline), we explore two approaches: one with fine-tuning (Track A: Supervised) and another without fine-tuning (Track B: Unsupervised). Fine-tuning the BERT pipeline yielded a Spearman correlation coefficient of 0.803, while without fine-tuning, a coefficient of 0.693 was attained using cosine similarity. The study concludes by emphasizing the significance of STR in NLP tasks, highlighting the role of pre-trained language models like BERT and Sentence Transformers in enhancing semantic relatedness assessments.", }
This study investigates Semantic Textual Relatedness (STR) within Natural Language Processing (NLP) through experiments conducted on a dataset from the SemEval-2024 STR task. The dataset comprises train instances with three features (PairID, Text, and Score) and test instances with two features (PairID and Text), where sentence pairs are separated by '\n{'} in the Text column. Using BERT (sentence transformers pipeline), we explore two approaches: one with fine-tuning (Track A: Supervised) and another without fine-tuning (Track B: Unsupervised). Fine-tuning the BERT pipeline yielded a Spearman correlation coefficient of 0.803, while without fine-tuning, a coefficient of 0.693 was attained using cosine similarity. The study concludes by emphasizing the significance of STR in NLP tasks, highlighting the role of pre-trained language models like BERT and Sentence Transformers in enhancing semantic relatedness assessments.
[ "Kumar, An", "", "Kumar, Hemanth" ]
scaLAR SemEval-2024 Task 1: Semantic Textual Relatedness for English
semeval-1.129
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
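The supervised track of the scaLAR record above fine-tunes a sentence-transformers pipeline on gold relatedness scores. A minimal sketch with CosineSimilarityLoss follows; the checkpoint and the toy pairs are placeholders, not the team's actual setup.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder BERT-based encoder

# Each training item: a sentence pair plus its gold relatedness score in [0, 1].
train = [
    InputExample(texts=["It was a sunny day.", "The weather was bright."], label=0.9),
    InputExample(texts=["He bought a car.", "She sings opera."], label=0.1),
]

loader = DataLoader(train, shuffle=True, batch_size=2)
loss = losses.CosineSimilarityLoss(model)  # regress embedding cosine onto labels
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)

# After fine-tuning, the relatedness prediction for a new pair is simply
# the cosine similarity of the two sentence embeddings, as in Track B.
```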