bibtex_url | proceedings | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | paper_page_exists_pre_conf | Models | Datasets | Spaces |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2024.findings-naacl.139.bib | https://aclanthology.org/2024.findings-naacl.139/ | @inproceedings{yang-etal-2024-plug,
title = "Plug-in Language Model: Controlling Text Generation with a Simple Regression Model",
author = "Yang, Nai-Chi and
Ma, Wei-Yun and
Cheng, Pu-Jen",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.139",
doi = "10.18653/v1/2024.findings-naacl.139",
pages = "2165--2181",
abstract = "Large-scale pre-trained language models have displayed unrivaled capacity in generating text that closely resembles human-written text. Nevertheless, generating texts adhering to specific conditions without fine-tuning or adding new parameters can be challenging. Contemporary approaches commonly rely on either prompts or auxiliary models to avoid modifying the language models. These auxiliary models are designed to assess whether a generated token contributes to meeting the desired requirements. These approaches adjust the distribution of the next token during the inference phase by leveraging the prediction score of the desired attribute to calculate gradients. However, these auxiliary models typically require the language model{'}s latent states. This prerequisite challenges integrating various existing black box attribute models or tools. We present the Plug-in Language Model (PiLM) as a solution to address the limitations. PiLM leverages reinforcement learning to utilize black box tools directly, adjusting the latent state to control text generation. However, performing backpropagation during the inference phase is time-consuming for PiLM. By replacing backpropagation with a simple regression model, PiLM can achieve an inference time comparable to that of the original LLM. Experiment results show that our approaches in this paper outperform existing state-of-the-art methods that rely on gradient-based, weighted decoding, or prompt-based methodologies.",
}
| Large-scale pre-trained language models have displayed unrivaled capacity in generating text that closely resembles human-written text. Nevertheless, generating texts adhering to specific conditions without fine-tuning or adding new parameters can be challenging. Contemporary approaches commonly rely on either prompts or auxiliary models to avoid modifying the language models. These auxiliary models are designed to assess whether a generated token contributes to meeting the desired requirements. These approaches adjust the distribution of the next token during the inference phase by leveraging the prediction score of the desired attribute to calculate gradients. However, these auxiliary models typically require the language model's latent states. This prerequisite complicates integrating various existing black-box attribute models or tools. We present the Plug-in Language Model (PiLM) as a solution to address these limitations. PiLM leverages reinforcement learning to utilize black-box tools directly, adjusting the latent state to control text generation. However, performing backpropagation during the inference phase is time-consuming for PiLM. By replacing backpropagation with a simple regression model, PiLM can achieve an inference time comparable to that of the original LLM. Experimental results show that our approaches in this paper outperform existing state-of-the-art methods that rely on gradient-based, weighted decoding, or prompt-based methodologies. | [
"Yang, Nai-Chi",
"Ma, Wei-Yun",
"Cheng, Pu-Jen"
] | Plug-in Language Model: Controlling Text Generation with a Simple Regression Model | findings-naacl.139 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.140.bib | https://aclanthology.org/2024.findings-naacl.140/ | @inproceedings{honghaofu-etal-2024-signer,
title = "Signer Diversity-driven Data Augmentation for Signer-Independent Sign Language Translation",
author = "Honghaofu, Honghaofu and
Zhang, Liang and
Fu, Biao and
Zhao, Rui and
Su, Jinsong and
Shi, Xiaodong and
Chen, Yidong",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.140",
doi = "10.18653/v1/2024.findings-naacl.140",
pages = "2182--2193",
abstract = "The primary objective of sign language translation (SLT) is to transform sign language videos into natural sentences.A crucial challenge in this field is developing signer-independent SLT systems which requires models to generalize effectively to signers not encountered during training.This challenge is exacerbated by the limited diversity of signers in existing SLT datasets, which often results in suboptimal generalization capabilities of current models.Achieving robustness to unseen signers is essential for signer-independent SLT.However, most existing method relies on signer identity labels, which is often impractical and costly in real-world applications.To address this issue, we propose the Signer Diversity-driven Data Augmentation (SDDA) method that can achieve good generalization without relying on signer identity labels. SDDA comprises two data augmentation schemes. The first is data augmentation based on adversarial training, which aims to utilize the gradients of the model to generate adversarial examples. The second is data augmentation based on diffusion model, which focuses on using the advanced diffusion-based text guided image editing method to modify the appearances of the signer in images. The combination of the two strategies significantly enriches the diversity of signers in the training process.Moreover, we introduce a consistency loss and a discrimination loss to enhance the learning of signer-independent features.Our experimental results demonstrate our model significantly enhances the performance of SLT in the signer-independent setting, achieving state-of-the-art results without relying on signer identity labels.",
}
| The primary objective of sign language translation (SLT) is to transform sign language videos into natural sentences. A crucial challenge in this field is developing signer-independent SLT systems, which require models to generalize effectively to signers not encountered during training. This challenge is exacerbated by the limited diversity of signers in existing SLT datasets, which often results in suboptimal generalization capabilities of current models. Achieving robustness to unseen signers is essential for signer-independent SLT. However, most existing methods rely on signer identity labels, which is often impractical and costly in real-world applications. To address this issue, we propose the Signer Diversity-driven Data Augmentation (SDDA) method, which can achieve good generalization without relying on signer identity labels. SDDA comprises two data augmentation schemes. The first is data augmentation based on adversarial training, which uses the gradients of the model to generate adversarial examples. The second is data augmentation based on a diffusion model, which uses an advanced diffusion-based text-guided image editing method to modify the appearance of the signer in images. The combination of the two strategies significantly enriches the diversity of signers in the training process. Moreover, we introduce a consistency loss and a discrimination loss to enhance the learning of signer-independent features. Our experimental results demonstrate that our model significantly enhances the performance of SLT in the signer-independent setting, achieving state-of-the-art results without relying on signer identity labels. | [
"Honghaofu, Honghaofu",
"Zhang, Liang",
"Fu, Biao",
"Zhao, Rui",
"Su, Jinsong",
"Shi, Xiaodong",
"Chen, Yidong"
] | Signer Diversity-driven Data Augmentation for Signer-Independent Sign Language Translation | findings-naacl.140 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.141.bib | https://aclanthology.org/2024.findings-naacl.141/ | @inproceedings{meyer-buys-2024-systematic,
title = "A Systematic Analysis of Subwords and Cross-Lingual Transfer in Multilingual Translation",
author = "Meyer, Francois and
Buys, Jan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.141",
doi = "10.18653/v1/2024.findings-naacl.141",
pages = "2194--2200",
abstract = "Multilingual modelling can improve machine translation for low-resource languages, partly through shared subword representations. This paper studies the role of subword segmentation in cross-lingual transfer. We systematically compare the efficacy of several subword methods in promoting synergy and preventing interference across different linguistic typologies. Our findings show that subword regularisation boosts synergy in multilingual modelling, whereas BPE more effectively facilitates transfer during cross-lingual fine-tuning. Notably, our results suggest that differences in orthographic word boundary conventions (the morphological granularity of written words) may impede cross-lingual transfer more significantly than linguistic unrelatedness. Our study confirms that decisions around subword modelling can be key to optimising the benefits of multilingual modelling.",
}
| Multilingual modelling can improve machine translation for low-resource languages, partly through shared subword representations. This paper studies the role of subword segmentation in cross-lingual transfer. We systematically compare the efficacy of several subword methods in promoting synergy and preventing interference across different linguistic typologies. Our findings show that subword regularisation boosts synergy in multilingual modelling, whereas BPE more effectively facilitates transfer during cross-lingual fine-tuning. Notably, our results suggest that differences in orthographic word boundary conventions (the morphological granularity of written words) may impede cross-lingual transfer more significantly than linguistic unrelatedness. Our study confirms that decisions around subword modelling can be key to optimising the benefits of multilingual modelling. | [
"Meyer, Francois",
"Buys, Jan"
] | A Systematic Analysis of Subwords and Cross-Lingual Transfer in Multilingual Translation | findings-naacl.141 | Poster | 2403.20157 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.142.bib | https://aclanthology.org/2024.findings-naacl.142/ | @inproceedings{choi-etal-2024-multi,
title = "Multi-Granularity Guided Fusion-in-Decoder",
author = "Choi, Eunseong and
Lee, Hyeri and
Lee, Jongwuk",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.142",
doi = "10.18653/v1/2024.findings-naacl.142",
pages = "2201--2212",
abstract = "In Open-domain Question Answering (ODQA), it is essential to discern relevant contexts as evidence and avoid spurious ones among retrieved results. The model architecture that uses concatenated multiple contexts in the decoding phase, *i.e.*, Fusion-in-Decoder, demonstrates promising performance but generates incorrect outputs from seemingly plausible contexts. To address this problem, we propose the ***M**ulti-**G**ranularity guided **F**usion-**i**n-**D**ecoder (**MGFiD**)*, discerning evidence across multiple levels of granularity. Based on multi-task learning, MGFiD harmonizes passage re-ranking with sentence classification. It aggregates evident sentences into an *anchor vector* that instructs the decoder. Additionally, it improves decoding efficiency by reusing the results of passage re-ranking for *passage pruning*. Through our experiments, MGFiD outperforms existing models on the Natural Questions (NQ) and TriviaQA (TQA) datasets, highlighting the benefits of its multi-granularity solution.",
}
| In Open-domain Question Answering (ODQA), it is essential to discern relevant contexts as evidence and avoid spurious ones among retrieved results. The model architecture that uses concatenated multiple contexts in the decoding phase, *i.e.*, Fusion-in-Decoder, demonstrates promising performance but generates incorrect outputs from seemingly plausible contexts. To address this problem, we propose the ***M**ulti-**G**ranularity guided **F**usion-**i**n-**D**ecoder (**MGFiD**)*, discerning evidence across multiple levels of granularity. Based on multi-task learning, MGFiD harmonizes passage re-ranking with sentence classification. It aggregates evident sentences into an *anchor vector* that instructs the decoder. Additionally, it improves decoding efficiency by reusing the results of passage re-ranking for *passage pruning*. Through our experiments, MGFiD outperforms existing models on the Natural Questions (NQ) and TriviaQA (TQA) datasets, highlighting the benefits of its multi-granularity solution. | [
"Choi, Eunseong",
"Lee, Hyeri",
"Lee, Jongwuk"
] | Multi-Granularity Guided Fusion-in-Decoder | findings-naacl.142 | Poster | 2404.02581 | [
"https://github.com/eunseongc/mgfid"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.143.bib | https://aclanthology.org/2024.findings-naacl.143/ | @inproceedings{zee-etal-2024-group,
title = "Group Fairness in Multilingual Speech Recognition Models",
author = "Zee, Anna and
Zee, Marc and
S{\o}gaard, Anders",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.143",
doi = "10.18653/v1/2024.findings-naacl.143",
pages = "2213--2226",
abstract = "We evaluate the performance disparity of the Whisper and MMS families of ASR models across the VoxPopuli and Common Voice multilingual datasets, with an eye toward intersectionality. Our two most important findings are that model size, surprisingly, correlates logarithmically with worst-case performance disparities, meaning that larger (and better) models are less fair. We also observe the importance of intersectionality. In particular, models often exhibit significant performance disparity across binary gender for adolescents.",
}
| We evaluate the performance disparity of the Whisper and MMS families of ASR models across the VoxPopuli and Common Voice multilingual datasets, with an eye toward intersectionality. Our two most important findings are that model size, surprisingly, correlates logarithmically with worst-case performance disparities, meaning that larger (and better) models are less fair. We also observe the importance of intersectionality. In particular, models often exhibit significant performance disparity across binary gender for adolescents. | [
"Zee, Anna",
"Zee, Marc",
"S{\\o}gaard, Anders"
] | Group Fairness in Multilingual Speech Recognition Models | findings-naacl.143 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.144.bib | https://aclanthology.org/2024.findings-naacl.144/ | @inproceedings{zhou-etal-2024-rethinking,
title = "Rethinking Machine Ethics {--} Can {LLM}s Perform Moral Reasoning through the Lens of Moral Theories?",
author = "Zhou, Jingyan and
Hu, Minda and
Li, Junan and
Zhang, Xiaoying and
Wu, Xixin and
King, Irwin and
Meng, Helen",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.144",
doi = "10.18653/v1/2024.findings-naacl.144",
pages = "2227--2242",
abstract = "Making moral judgments is an essential step toward developing ethical AI systems. Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality. These approaches have been criticized for potentially overgeneralizing a limited group of annotators{'} moral stances and lacking explainability. This work proposes a flexible top-down framework to steer (Large) Language Models to perform moral reasoning with well-established moral theories from interdisciplinary research. The theory-guided top-down framework can incorporate various moral theories. Our experiments demonstrate the effectiveness of the proposed framework on datasets derived from moral theories. Furthermore, we show the alignment between different moral theories and existing morality datasets. Our analysis exhibits the potential and flaws in existing resources (models and datasets) in developing explainable moral judgment-making systems.",
}
| Making moral judgments is an essential step toward developing ethical AI systems. Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality. These approaches have been criticized for potentially overgeneralizing a limited group of annotators{'} moral stances and lacking explainability. This work proposes a flexible top-down framework to steer (Large) Language Models to perform moral reasoning with well-established moral theories from interdisciplinary research. The theory-guided top-down framework can incorporate various moral theories. Our experiments demonstrate the effectiveness of the proposed framework on datasets derived from moral theories. Furthermore, we show the alignment between different moral theories and existing morality datasets. Our analysis exhibits the potential and flaws in existing resources (models and datasets) in developing explainable moral judgment-making systems. | [
"Zhou, Jingyan",
"Hu, Minda",
"Li, Junan",
"Zhang, Xiaoying",
"Wu, Xixin",
"King, Irwin",
"Meng, Helen"
] | Rethinking Machine Ethics – Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? | findings-naacl.144 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.145.bib | https://aclanthology.org/2024.findings-naacl.145/ | @inproceedings{wang-etal-2024-role,
title = "Role Prompting Guided Domain Adaptation with General Capability Preserve for Large Language Models",
author = "Wang, Rui and
Mi, Fei and
Chen, Yi and
Xue, Boyang and
Wang, Hongru and
Zhu, Qi and
Wong, Kam-Fai and
Xu, Ruifeng",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.145",
doi = "10.18653/v1/2024.findings-naacl.145",
pages = "2243--2255",
abstract = "The growing interest in Large Language Models (LLMs) for specialized applications has revealed a significant challenge: when tailored to specific domains, LLMs tend to experience catastrophic forgetting, compromising their general capabilities and leading to a suboptimal user experience. Additionally, crafting a versatile model for multiple domains simultaneously often results in a decline in overall performance due to confusion between domains. In response to these issues, we present the RolE Prompting Guided Multi-Domain Adaptation (REGA) strategy. This novel approach effectively manages multi-domain LLM adaptation through three key components: 1) Self-Distillation constructs and replays general-domain exemplars to alleviate catastrophic forgetting. 2) Role Prompting assigns a central prompt to the general domain and a unique role prompt to each specific domain to minimize inter-domain confusion during training. 3) Role Integration reuses and integrates a small portion of domain-specific data to the general-domain data, which are trained under the guidance of the central prompt. The central prompt is used for a streamlined inference process, removing the necessity to switch prompts for different domains.Empirical results demonstrate that REGA effectively alleviates catastrophic forgetting and inter-domain confusion. This leads to improved domain-specific performance compared to standard fine-tuned models, while still preserving robust general capabilities.",
}
| The growing interest in Large Language Models (LLMs) for specialized applications has revealed a significant challenge: when tailored to specific domains, LLMs tend to experience catastrophic forgetting, compromising their general capabilities and leading to a suboptimal user experience. Additionally, crafting a versatile model for multiple domains simultaneously often results in a decline in overall performance due to confusion between domains. In response to these issues, we present the RolE Prompting Guided Multi-Domain Adaptation (REGA) strategy. This novel approach effectively manages multi-domain LLM adaptation through three key components: 1) Self-Distillation constructs and replays general-domain exemplars to alleviate catastrophic forgetting. 2) Role Prompting assigns a central prompt to the general domain and a unique role prompt to each specific domain to minimize inter-domain confusion during training. 3) Role Integration reuses and integrates a small portion of domain-specific data into the general-domain data, which are trained under the guidance of the central prompt. The central prompt is used for a streamlined inference process, removing the necessity to switch prompts for different domains. Empirical results demonstrate that REGA effectively alleviates catastrophic forgetting and inter-domain confusion. This leads to improved domain-specific performance compared to standard fine-tuned models, while still preserving robust general capabilities. | [
"Wang, Rui",
"Mi, Fei",
"Chen, Yi",
"Xue, Boyang",
"Wang, Hongru",
"Zhu, Qi",
"Wong, Kam-Fai",
"Xu, Ruifeng"
] | Role Prompting Guided Domain Adaptation with General Capability Preserve for Large Language Models | findings-naacl.145 | Poster | 2403.02756 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.146.bib | https://aclanthology.org/2024.findings-naacl.146/ | @inproceedings{feger-dietze-2024-bertweets,
title = "{BERT}weet{'}s {TACO} Fiesta: Contrasting Flavors On The Path Of Inference And Information-Driven Argument Mining On {T}witter",
author = "Feger, Marc and
Dietze, Stefan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.146",
doi = "10.18653/v1/2024.findings-naacl.146",
pages = "2256--2266",
abstract = "Argument mining, dealing with the classification of text based on inference and information, denotes a challenging analytical task in the rich context of Twitter (now $\mathbb{X}$), a key platform for online discourse and exchange. Thereby, Twitter offers a diverse repository of short messages bearing on both of these elements. For text classification, transformer approaches, particularly BERT, offer state-of-the-art solutions. Our study delves into optimizing the embeddings of the understudied BERTweet transformer for argument mining on Twitter and broader generalization across topics.We explore the impact of pre-classification fine-tuning by aligning similar manifestations of inference and information while contrasting dissimilar instances. Using the TACO dataset, our approach augments tweets for optimizing BERTweet in a Siamese network, strongly improving classification and cross-topic generalization compared to standard methods.Overall, we contribute the transformer WRAPresentations and classifier WRAP, scoring 86.62{\%} F1 for inference detection, 86.30{\%} for information recognition, and 75.29{\%} across four combinations of these elements, to enhance inference and information-driven argument mining on Twitter.",
}
| Argument mining, dealing with the classification of text based on inference and information, denotes a challenging analytical task in the rich context of Twitter (now $\mathbb{X}$), a key platform for online discourse and exchange. Twitter thereby offers a diverse repository of short messages bearing on both of these elements. For text classification, transformer approaches, particularly BERT, offer state-of-the-art solutions. Our study delves into optimizing the embeddings of the understudied BERTweet transformer for argument mining on Twitter and broader generalization across topics. We explore the impact of pre-classification fine-tuning by aligning similar manifestations of inference and information while contrasting dissimilar instances. Using the TACO dataset, our approach augments tweets for optimizing BERTweet in a Siamese network, strongly improving classification and cross-topic generalization compared to standard methods. Overall, we contribute the transformer WRAPresentations and classifier WRAP, scoring 86.62% F1 for inference detection, 86.30% for information recognition, and 75.29% across four combinations of these elements, to enhance inference and information-driven argument mining on Twitter. | [
"Feger, Marc",
"Dietze, Stefan"
] | BERTweet's TACO Fiesta: Contrasting Flavors On The Path Of Inference And Information-Driven Argument Mining On Twitter | findings-naacl.146 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.147.bib | https://aclanthology.org/2024.findings-naacl.147/ | @inproceedings{guzman-etal-2024-testing,
title = "Testing the limits of logical reasoning in neural and hybrid models",
author = "Vargas Guzm{\'a}n, Manuel and
Szymanik, Jakub and
Malicki, Maciej",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.147",
doi = "10.18653/v1/2024.findings-naacl.147",
pages = "2267--2279",
abstract = "We study the ability of neural and hybrid models to generalize logical reasoning patterns. We created a series of tests for analyzing various aspects of generalization in the context of language and reasoning, focusing on compositionality and recursiveness. We used them to study the syllogistic logic in hybrid models, where the network assists in premise selection. We analyzed feed-forward, recurrent, convolutional, and transformer architectures. Our experiments demonstrate that even though the models can capture elementary aspects of the meaning of logical terms, they learn to generalize logical reasoning only to a limited degree.",
}
| We study the ability of neural and hybrid models to generalize logical reasoning patterns. We created a series of tests for analyzing various aspects of generalization in the context of language and reasoning, focusing on compositionality and recursiveness. We used them to study the syllogistic logic in hybrid models, where the network assists in premise selection. We analyzed feed-forward, recurrent, convolutional, and transformer architectures. Our experiments demonstrate that even though the models can capture elementary aspects of the meaning of logical terms, they learn to generalize logical reasoning only to a limited degree. | [
"Vargas Guzm{\\'a}n, Manuel",
"Szymanik, Jakub",
"Malicki, Maciej"
] | Testing the limits of logical reasoning in neural and hybrid models | findings-naacl.147 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.148.bib | https://aclanthology.org/2024.findings-naacl.148/ | @inproceedings{hada-etal-2024-metal,
title = "{METAL}: Towards Multilingual Meta-Evaluation",
author = "Hada, Rishav and
Gumma, Varun and
Ahmed, Mohamed and
Bali, Kalika and
Sitaram, Sunayana",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.148",
doi = "10.18653/v1/2024.findings-naacl.148",
pages = "2280--2298",
abstract = "With the rising human-like precision of Large Language Models (LLMs) in numerous tasks, their utilization in a variety of real-world applications is becoming more prevalent. Several studies have shown that LLMs excel on many standard NLP benchmarks. However, it is challenging to evaluate LLMs due to test dataset contamination and the limitations of traditional metrics. Since human evaluations are difficult to collect, there is a growing interest in the community to use LLMs themselves as reference-free evaluators for subjective metrics. However, past work has shown that LLM-based evaluators can exhibit bias and have poor alignment with human judgments. In this study, we propose a framework for an end-to-end assessment of LLMs as evaluators in multilingual scenarios. We create a carefully curated dataset, covering 10 languages containing native speaker judgments for the task of summarization. This dataset is created specifically to evaluate LLM-based evaluators, which we refer to as meta-evaluation (METAL). We compare the performance of LLM-based evaluators created using GPT-3.5-Turbo, GPT-4, and PaLM2. Our results indicate that LLM-based evaluators based on GPT-4 perform the best across languages, while GPT-3.5-Turbo performs poorly. Additionally, we perform an analysis of the reasoning provided by LLM-based evaluators and find that it often does not match the reasoning provided by human judges.",
}
| With the rising human-like precision of Large Language Models (LLMs) in numerous tasks, their utilization in a variety of real-world applications is becoming more prevalent. Several studies have shown that LLMs excel on many standard NLP benchmarks. However, it is challenging to evaluate LLMs due to test dataset contamination and the limitations of traditional metrics. Since human evaluations are difficult to collect, there is a growing interest in the community to use LLMs themselves as reference-free evaluators for subjective metrics. However, past work has shown that LLM-based evaluators can exhibit bias and have poor alignment with human judgments. In this study, we propose a framework for an end-to-end assessment of LLMs as evaluators in multilingual scenarios. We create a carefully curated dataset covering 10 languages, containing native speaker judgments for the task of summarization. This dataset is created specifically to evaluate LLM-based evaluators, which we refer to as meta-evaluation (METAL). We compare the performance of LLM-based evaluators created using GPT-3.5-Turbo, GPT-4, and PaLM2. Our results indicate that LLM-based evaluators based on GPT-4 perform the best across languages, while GPT-3.5-Turbo performs poorly. Additionally, we perform an analysis of the reasoning provided by LLM-based evaluators and find that it often does not match the reasoning provided by human judges. | [
"Hada, Rishav",
"Gumma, Varun",
"Ahmed, Mohamed",
"Bali, Kalika",
"Sitaram, Sunayana"
] | METAL: Towards Multilingual Meta-Evaluation | findings-naacl.148 | Poster | 2404.01667 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.149.bib | https://aclanthology.org/2024.findings-naacl.149/ | @inproceedings{zhong-etal-2024-agieval,
title = "{AGIE}val: A Human-Centric Benchmark for Evaluating Foundation Models",
author = "Zhong, Wanjun and
Cui, Ruixiang and
Guo, Yiduo and
Liang, Yaobo and
Lu, Shuai and
Wang, Yanlin and
Saied, Amin and
Chen, Weizhu and
Duan, Nan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.149",
doi = "10.18653/v1/2024.findings-naacl.149",
pages = "2299--2314",
abstract = "Assessing foundation models{'} abilities for human-level tasks is crucial for Artificial General Intelligence (AGI) development.Traditional benchmarks, which rely on artificial datasets, may not accurately represent these capabilities. In this paper, we introduce AGIEval, a novel bilingual benchmark designed to assess foundation models in the context of human-centric standardized exams, such as college entrance exams, law school admission tests, math competitions, and lawyer qualification tests. We evaluate several state-of-the-art foundation models on our benchmark. Impressively, we show that GPT-4 exceeds the average human performance in SAT, LSAT, and math contests, with 95{\%} accuracy on SAT Math and 92.5{\%} on the Chinese college entrance English exam. This demonstrates the exceptional performance of contemporary foundation models. In contrast, we also find that GPT-4 is less proficient in tasks requiring complex reasoning or specific domain knowledge. Our comprehensive analyses of model capabilities (understanding, knowledge, reasoning, and calculation) reveal their strengths and limitations, providing valuable insights into future directions for enhancing general capabilities. By concentrating on tasks pertinent to human cognition and decision-making, our benchmark delivers a meaningful and robust evaluation of foundation models{'} performance in real-world scenarios.",
}
| Assessing foundation models' abilities for human-level tasks is crucial for Artificial General Intelligence (AGI) development. Traditional benchmarks, which rely on artificial datasets, may not accurately represent these capabilities. In this paper, we introduce AGIEval, a novel bilingual benchmark designed to assess foundation models in the context of human-centric standardized exams, such as college entrance exams, law school admission tests, math competitions, and lawyer qualification tests. We evaluate several state-of-the-art foundation models on our benchmark. Impressively, we show that GPT-4 exceeds the average human performance in SAT, LSAT, and math contests, with 95% accuracy on SAT Math and 92.5% on the Chinese college entrance English exam. This demonstrates the exceptional performance of contemporary foundation models. In contrast, we also find that GPT-4 is less proficient in tasks requiring complex reasoning or specific domain knowledge. Our comprehensive analyses of model capabilities (understanding, knowledge, reasoning, and calculation) reveal their strengths and limitations, providing valuable insights into future directions for enhancing general capabilities. By concentrating on tasks pertinent to human cognition and decision-making, our benchmark delivers a meaningful and robust evaluation of foundation models' performance in real-world scenarios. | [
"Zhong, Wanjun",
"Cui, Ruixiang",
"Guo, Yiduo",
"Liang, Yaobo",
"Lu, Shuai",
"Wang, Yanlin",
"Saied, Amin",
"Chen, Weizhu",
"Duan, Nan"
] | AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models | findings-naacl.149 | Poster | 2304.06364 | [
"https://github.com/ruixiangcui/agieval"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.150.bib | https://aclanthology.org/2024.findings-naacl.150/ | @inproceedings{siledar-etal-2024-product,
title = "Product Description and {QA} Assisted Self-Supervised Opinion Summarization",
author = "Siledar, Tejpalsingh and
Rangaraju, Rupasai and
Muddu, Sankara and
Banerjee, Suman and
Patil, Amey and
Singh, Sudhanshu and
Chelliah, Muthusamy and
Garera, Nikesh and
Nath, Swaprava and
Bhattacharyya, Pushpak",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.150",
doi = "10.18653/v1/2024.findings-naacl.150",
pages = "2315--2332",
abstract = "In e-commerce, opinion summarization is the process of summarizing the consensus opinions found in product reviews. However, the potential of additional sources such as product description and question-answers (QA) has been considered less often. Moreover, the absence of any supervised training data makes this task challenging. To address this, we propose a novel synthetic dataset creation (SDC) strategy that leverages information from reviews as well as additional sources for selecting one of the reviews as a pseudo-summary to enable supervised training. Our Multi-Encoder Decoder framework for Opinion Summarization (MEDOS) employs a separate encoder for each source, enabling effective selection of information while generating the summary. For evaluation, due to the unavailability of test sets with additional sources, we extend the Amazon, Oposum+, and Flipkart test sets and leverage ChatGPT to annotate summaries. Experiments across nine test sets demonstrate that the combination of our SDC approach and MEDOS model achieves on average a 14.5{\%} improvement in ROUGE-1 F1 over the SOTA. Moreover, comparative analysis underlines the significance of incorporating additional sources for generating more informative summaries. Human evaluations further indicate that MEDOS scores relatively higher in coherence and fluency with 0.41 and 0.5 (â1 to 1) respectively, compared to existing models. To the best of our knowledge, we are the first to generate opinion summaries leveraging additional sources in a self-supervised setting.",
}
| In e-commerce, opinion summarization is the process of summarizing the consensus opinions found in product reviews. However, the potential of additional sources such as product descriptions and question-answers (QA) has been considered less often. Moreover, the absence of any supervised training data makes this task challenging. To address this, we propose a novel synthetic dataset creation (SDC) strategy that leverages information from reviews as well as additional sources for selecting one of the reviews as a pseudo-summary to enable supervised training. Our Multi-Encoder Decoder framework for Opinion Summarization (MEDOS) employs a separate encoder for each source, enabling effective selection of information while generating the summary. For evaluation, due to the unavailability of test sets with additional sources, we extend the Amazon, Oposum+, and Flipkart test sets and leverage ChatGPT to annotate summaries. Experiments across nine test sets demonstrate that the combination of our SDC approach and MEDOS model achieves on average a 14.5% improvement in ROUGE-1 F1 over the SOTA. Moreover, comparative analysis underlines the significance of incorporating additional sources for generating more informative summaries. Human evaluations further indicate that MEDOS scores relatively higher in coherence and fluency, with scores of 0.41 and 0.5 (on a -1 to 1 scale) respectively, compared to existing models. To the best of our knowledge, we are the first to generate opinion summaries leveraging additional sources in a self-supervised setting. | [
"Siledar, Tejpalsingh",
"Rangaraju, Rupasai",
"Muddu, Sankara",
"Banerjee, Suman",
"Patil, Amey",
"Singh, Sudhanshu",
"Chelliah, Muthusamy",
"Garera, Nikesh",
"Nath, Swaprava",
"Bhattacharyya, Pushpak"
] | Product Description and QA Assisted Self-Supervised Opinion Summarization | findings-naacl.150 | Poster | 2404.05243 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.151.bib | https://aclanthology.org/2024.findings-naacl.151/ | @inproceedings{qiao-etal-2024-comem,
title = "{COMEM}: In-Context Retrieval-Augmented Mass-Editing Memory in Large Language Models",
author = "Qiao, Shanbao and
Liu, Xuebing and
Na, Seung-Hoon",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.151",
doi = "10.18653/v1/2024.findings-naacl.151",
pages = "2333--2347",
abstract = "Noting that world knowledge continuously evolves over time, large language models (LLMs) need to be properly adjusted by performing the {``}knowledge editing{''}, which involves updating outdated information or correcting false information. To achieve reliable and {``}massive{''} editing capabilities in terms of $\textit{generalization}$ and $\textit{specificity}$, this paper proposes a unified knowledge editing method called in-$\textbf{CO}$ntext retrieval-augmented $\textbf{M}$ass-$\textbf{E}$diting $\textbf{M}$emory (COMEM), which combines two types of editing approaches: parameter updating and in-context knowledge editing (IKE). In particular, COMEM incorporates $\textit{retrieval-augmented IKE}$, a novel extension of IKE designed for massive editing tasks, based on an $\textit{updating}$-aware demonstration construction.Experimental results on the zsRE and CounterFact datasets demonstrate that COMEM outperforms all existing methods, achieving state-of-the-art performance. Our code is available at https://github.com/JoveReCode/COMEM.git.",
}
| Noting that world knowledge continuously evolves over time, large language models (LLMs) need to be properly adjusted by performing "knowledge editing", which involves updating outdated information or correcting false information. To achieve reliable and "massive" editing capabilities in terms of *generalization* and *specificity*, this paper proposes a unified knowledge editing method called in-**CO**ntext retrieval-augmented **M**ass-**E**diting **M**emory (COMEM), which combines two types of editing approaches: parameter updating and in-context knowledge editing (IKE). In particular, COMEM incorporates *retrieval-augmented IKE*, a novel extension of IKE designed for massive editing tasks, based on an *updating*-aware demonstration construction. Experimental results on the zsRE and CounterFact datasets demonstrate that COMEM outperforms all existing methods, achieving state-of-the-art performance. Our code is available at https://github.com/JoveReCode/COMEM.git. | [
"Qiao, Shanbao",
"Liu, Xuebing",
"Na, Seung-Hoon"
] | COMEM: In-Context Retrieval-Augmented Mass-Editing Memory in Large Language Models | findings-naacl.151 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.152.bib | https://aclanthology.org/2024.findings-naacl.152/ | @inproceedings{tanaka-etal-2024-content,
title = "Content-Specific Humorous Image Captioning Using Incongruity Resolution Chain-of-Thought",
author = "Tanaka, Kohtaro and
Uehara, Kohei and
Gu, Lin and
Mukuta, Yusuke and
Harada, Tatsuya",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.152",
doi = "10.18653/v1/2024.findings-naacl.152",
pages = "2348--2367",
abstract = "Although automated image captioning methods have benefited considerably from the development of large language models (LLMs), generating humorous captions is still a challenging task. Humorous captions generated by humans are unique to the image and reflect the content of the image. However, captions generated using previous captioning models tend to be generic. Therefore, we propose incongruity-resolution chain-of-thought (IRCoT) as a novel prompting framework that creates content-specific resolutions from fine details extracted from an image. Furthermore, we integrate logit bias and negative sampling to suppress the output of generic resolutions. The results of experiments with GPT4-V demonstrate that our proposed framework effectively generated humorous captions tailored to the content of specific input images.",
}
| Although automated image captioning methods have benefited considerably from the development of large language models (LLMs), generating humorous captions is still a challenging task. Humorous captions generated by humans are unique to the image and reflect the content of the image. However, captions generated using previous captioning models tend to be generic. Therefore, we propose incongruity-resolution chain-of-thought (IRCoT) as a novel prompting framework that creates content-specific resolutions from fine details extracted from an image. Furthermore, we integrate logit bias and negative sampling to suppress the output of generic resolutions. The results of experiments with GPT4-V demonstrate that our proposed framework effectively generated humorous captions tailored to the content of specific input images. | [
"Tanaka, Kohtaro",
"Uehara, Kohei",
"Gu, Lin",
"Mukuta, Yusuke",
"Harada, Tatsuya"
] | Content-Specific Humorous Image Captioning Using Incongruity Resolution Chain-of-Thought | findings-naacl.152 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.153.bib | https://aclanthology.org/2024.findings-naacl.153/ | @inproceedings{bassani-etal-2024-denoising,
title = "Denoising Attention for Query-aware User Modeling",
author = "Bassani, Elias and
Kasela, Pranav and
Pasi, Gabriella",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.153",
doi = "10.18653/v1/2024.findings-naacl.153",
pages = "2368--2380",
abstract = "Personalization of search results has gained increasing attention in the past few years, also thanks to the development of Neural Networks-based approaches for Information Retrieval. Recent works have proposed to build user models at query time by leveraging the Attention mechanism, which allows weighing the contribution of the user-related information w.r.t. the current query.This approach allows giving more importance to the user{'}s interests related to the current search performed by the user.In this paper, we discuss some shortcomings of the Attention mechanism when employed for personalization and introduce a novel Attention variant, the Denoising Attention, to solve them.Denoising Attention adopts a robust normalization scheme and introduces a filtering mechanism to better discern among the user-related data those helpful for personalization.Experimental evaluation shows improvements in MAP, MRR, and NDCG above 15{\%} w.r.t. other Attention variants at the state-of-the-art.",
}
| Personalization of search results has gained increasing attention in the past few years, also thanks to the development of Neural Network-based approaches for Information Retrieval. Recent works have proposed to build user models at query time by leveraging the Attention mechanism, which allows weighing the contribution of the user-related information w.r.t. the current query. This approach allows giving more importance to the user's interests related to the current search performed by the user. In this paper, we discuss some shortcomings of the Attention mechanism when employed for personalization and introduce a novel Attention variant, the Denoising Attention, to solve them. Denoising Attention adopts a robust normalization scheme and introduces a filtering mechanism to better discern, among the user-related data, those helpful for personalization. Experimental evaluation shows improvements in MAP, MRR, and NDCG above 15% w.r.t. other state-of-the-art Attention variants. | [
"Bassani, Elias",
"Kasela, Pranav",
"Pasi, Gabriella"
] | Denoising Attention for Query-aware User Modeling | findings-naacl.153 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.154.bib | https://aclanthology.org/2024.findings-naacl.154/ | @inproceedings{zhang-etal-2024-lightweight,
title = "A Lightweight Mixture-of-Experts Neural Machine Translation Model with Stage-wise Training Strategy",
author = "Zhang, Fan and
Tu, Mei and
Liu, Song and
Yan, Jinyao",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.154",
doi = "10.18653/v1/2024.findings-naacl.154",
pages = "2381--2392",
abstract = "Dealing with language heterogeneity has always been one of the challenges in neural machine translation (NMT).The idea of using mixture-of-experts (MoE) naturally excels in addressing this issue by employing different experts to take responsibility for different problems.However, the parameter-inefficiency problem in MoE results in less performance improvement when boosting the number of parameters.Moreover, most of the MoE models are suffering from the training instability problem.This paper proposes MoA (Mixture-of-Adapters), a lightweight MoE-based NMT model that is trained via an elaborately designed stage-wise training strategy.With the standard Transformer as the backbone model, we introduce lightweight adapters as experts for easy expansion.To improve the parameter efficiency, we explicitly model and distill the language heterogeneity into the gating network with clustering.After freezing the gating network, we adopt the Gumbel-Max sampling as the routing scheme when training experts to balance the knowledge of generalization and specialization while preventing expert over-fitting.Empirical results show that MoA achieves stable improvements in different translation tasks by introducing much fewer extra parameters compared to other MoE baselines.Additionally, the performance evaluations on a multi-domain translation task illustrate the effectiveness of our training strategy.",
}
| Dealing with language heterogeneity has always been one of the challenges in neural machine translation (NMT). The idea of using mixture-of-experts (MoE) naturally excels in addressing this issue by employing different experts to take responsibility for different problems. However, the parameter-inefficiency problem in MoE results in less performance improvement when boosting the number of parameters. Moreover, most MoE models suffer from training instability. This paper proposes MoA (Mixture-of-Adapters), a lightweight MoE-based NMT model that is trained via an elaborately designed stage-wise training strategy. With the standard Transformer as the backbone model, we introduce lightweight adapters as experts for easy expansion. To improve the parameter efficiency, we explicitly model and distill the language heterogeneity into the gating network with clustering. After freezing the gating network, we adopt Gumbel-Max sampling as the routing scheme when training experts to balance the knowledge of generalization and specialization while preventing expert over-fitting. Empirical results show that MoA achieves stable improvements on different translation tasks while introducing far fewer extra parameters than other MoE baselines. Additionally, the performance evaluations on a multi-domain translation task illustrate the effectiveness of our training strategy. | [
"Zhang, Fan",
"Tu, Mei",
"Liu, Song",
"Yan, Jinyao"
] | A Lightweight Mixture-of-Experts Neural Machine Translation Model with Stage-wise Training Strategy | findings-naacl.154 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.155.bib | https://aclanthology.org/2024.findings-naacl.155/ | @inproceedings{wiland-etal-2024-bear,
title = "{BEAR}: A Unified Framework for Evaluating Relational Knowledge in Causal and Masked Language Models",
author = "Wiland, Jacek and
Ploner, Max and
Akbik, Alan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.155",
doi = "10.18653/v1/2024.findings-naacl.155",
pages = "2393--2411",
abstract = "Knowledge probing assesses to which degree a language model (LM) has successfully learned relational knowledge during pre-training. Probing is an inexpensive way to compare LMs of different sizes and training configurations. However, previous approaches rely on the objective function used in pre-training LMs and are thus applicable only to masked or causal LMs. As a result, comparing different types of LMs becomes impossible. To address this, we propose an approach that uses an LM{'}s inherent ability to estimate the log-likelihood of any given textual statement. We carefully design an evaluation dataset of 7,731 instances (40,916 in a larger variant) from which we produce alternative statements for each relational fact, one of which is correct. We then evaluate whether an LM correctly assigns the highest log-likelihood to the correct statement. Our experimental evaluation of 22 common LMs shows that our proposed framework, BEAR, can effectively probe for knowledge across different LM types. We release the BEAR datasets and an open-source framework that implements the probing approach to the research community to facilitate the evaluation and development of LMs.",
}
| Knowledge probing assesses to which degree a language model (LM) has successfully learned relational knowledge during pre-training. Probing is an inexpensive way to compare LMs of different sizes and training configurations. However, previous approaches rely on the objective function used in pre-training LMs and are thus applicable only to masked or causal LMs. As a result, comparing different types of LMs becomes impossible. To address this, we propose an approach that uses an LM{'}s inherent ability to estimate the log-likelihood of any given textual statement. We carefully design an evaluation dataset of 7,731 instances (40,916 in a larger variant) from which we produce alternative statements for each relational fact, one of which is correct. We then evaluate whether an LM correctly assigns the highest log-likelihood to the correct statement. Our experimental evaluation of 22 common LMs shows that our proposed framework, BEAR, can effectively probe for knowledge across different LM types. We release the BEAR datasets and an open-source framework that implements the probing approach to the research community to facilitate the evaluation and development of LMs. | [
"Wil",
", Jacek",
"Ploner, Max",
"Akbik, Alan"
] | BEAR: A Unified Framework for Evaluating Relational Knowledge in Causal and Masked Language Models | findings-naacl.155 | Poster | 2404.04113 | [
"https://github.com/lm-pub-quiz/lm-pub-quiz"
] | https://huggingface.co/papers/2404.04113 | 0 | 3 | 1 | 3 | 1 | [] | [
"lm-pub-quiz/BEAR"
] | [] |
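BEAR's probing step ranks alternative statements of a relational fact by the LM's log-likelihood and checks whether the correct one scores highest. A minimal sketch of that scoring for a causal LM with Hugging Face transformers; the example statements are invented for illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM; BEAR itself also covers masked LMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def sequence_log_likelihood(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    # With labels == inputs the model returns mean token cross-entropy;
    # scale back up to a total log-likelihood so sequences compare fairly.
    loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

statements = [
    "The capital of France is Paris.",   # the correct alternative
    "The capital of France is Rome.",
    "The capital of France is Berlin.",
]
scores = [sequence_log_likelihood(s) for s in statements]
print(statements[max(range(len(scores)), key=scores.__getitem__)])
```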
https://aclanthology.org/2024.findings-naacl.156.bib | https://aclanthology.org/2024.findings-naacl.156/ | @inproceedings{hengst-etal-2024-conformal,
title = "Conformal Intent Classification and Clarification for Fast and Accurate Intent Recognition",
author = "Hengst, Floris and
Wolter, Ralf and
Altmeyer, Patrick and
Kaygan, Arda",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.156",
doi = "10.18653/v1/2024.findings-naacl.156",
pages = "2412--2432",
abstract = "We present Conformal Intent Classification and Clarification (CICC), a framework for fast and accurate intent classification for task-oriented dialogue systems. The framework turns heuristic uncertainty scores of any intent classifier into a clarification question that is guaranteed to contain the true intent at a pre-defined confidence level.By disambiguating between a small number of likely intents, the user query can be resolved quickly and accurately. Additionally, we propose to augment the framework for out-of-scope detection.In a comparative evaluation using seven intent recognition datasets we find that CICC generates small clarification questions and is capable of out-of-scope detection.CICC can help practitioners and researchers substantially in improving the user experience of dialogue agents with specific clarification questions.",
}
| We present Conformal Intent Classification and Clarification (CICC), a framework for fast and accurate intent classification for task-oriented dialogue systems. The framework turns heuristic uncertainty scores of any intent classifier into a clarification question that is guaranteed to contain the true intent at a pre-defined confidence level. By disambiguating between a small number of likely intents, the user query can be resolved quickly and accurately. Additionally, we propose to augment the framework for out-of-scope detection. In a comparative evaluation using seven intent recognition datasets, we find that CICC generates small clarification questions and is capable of out-of-scope detection. CICC can help practitioners and researchers substantially in improving the user experience of dialogue agents with specific clarification questions. | [
"Hengst, Floris",
"Wolter, Ralf",
"Altmeyer, Patrick",
"Kaygan, Arda"
] | Conformal Intent Classification and Clarification for Fast and Accurate Intent Recognition | findings-naacl.156 | Poster | 2403.18973 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
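CICC turns an intent classifier's heuristic scores into a prediction set that contains the true intent at a pre-defined confidence level, in the style of split conformal prediction. A minimal sketch under that reading; the nonconformity score and the clarification template are assumptions:

```python
import numpy as np

def calibrate_threshold(cal_probs: np.ndarray, cal_labels: np.ndarray, alpha: float = 0.1) -> float:
    """Split-conformal calibration with nonconformity 1 - p(true label)."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)   # finite-sample correction
    return float(np.quantile(scores, level, method="higher"))

def prediction_set(probs: np.ndarray, qhat: float) -> list[int]:
    """All intents whose nonconformity stays under the calibrated threshold."""
    return [i for i, p in enumerate(probs) if 1.0 - p <= qhat]

def respond(probs: np.ndarray, qhat: float, intent_names: list[str]) -> str:
    intents = prediction_set(probs, qhat)
    if len(intents) == 1:
        return f"Resolved intent: {intent_names[intents[0]]}"
    # Ambiguous: ask a clarification question over the small guaranteed set.
    options = ", ".join(intent_names[i] for i in intents)
    return f"Did you mean one of: {options}?"
```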
https://aclanthology.org/2024.findings-naacl.157.bib | https://aclanthology.org/2024.findings-naacl.157/ | @inproceedings{nyffenegger-etal-2024-anonymity,
title = "Anonymity at Risk? Assessing Re-Identification Capabilities of Large Language Models in Court Decisions",
author = {Nyffenegger, Alex and
St{\"u}rmer, Matthias and
Niklaus, Joel},
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.157",
doi = "10.18653/v1/2024.findings-naacl.157",
pages = "2433--2462",
abstract = "Anonymity in court rulings is a critical aspect of privacy protection in the European Union and Switzerland but with the advent of LLMs, concerns about large-scale re-identification of anonymized persons are growing. In accordance with the Federal Supreme Court of Switzerland (FSCS), we study re-identification risks using actual legal data. Following the initial experiment, we constructed an anonymized Wikipedia dataset as a more rigorous testing ground to further investigate the findings. In addition to the datasets, we also introduce new metrics to measure performance. We systematically analyze the factors that influence successful re-identifications, identifying model size, input length, and instruction tuning among the most critical determinants. Despite high re-identification rates on Wikipedia, even the best LLMs struggled with court decisions. We demonstrate that for now, the risk of re-identifications using LLMs is minimal in the vast majority of cases. We hope that our system can help enhance the confidence in the security of anonymized decisions, thus leading the courts to publish more decisions.",
}
| Anonymity in court rulings is a critical aspect of privacy protection in the European Union and Switzerland but with the advent of LLMs, concerns about large-scale re-identification of anonymized persons are growing. In accordance with the Federal Supreme Court of Switzerland (FSCS), we study re-identification risks using actual legal data. Following the initial experiment, we constructed an anonymized Wikipedia dataset as a more rigorous testing ground to further investigate the findings. In addition to the datasets, we also introduce new metrics to measure performance. We systematically analyze the factors that influence successful re-identifications, identifying model size, input length, and instruction tuning among the most critical determinants. Despite high re-identification rates on Wikipedia, even the best LLMs struggled with court decisions. We demonstrate that for now, the risk of re-identifications using LLMs is minimal in the vast majority of cases. We hope that our system can help enhance the confidence in the security of anonymized decisions, thus leading the courts to publish more decisions. | [
"Nyffenegger, Alex",
"St{\\\"u}rmer, Matthias",
"Niklaus, Joel"
] | Anonymity at Risk? Assessing Re-Identification Capabilities of Large Language Models in Court Decisions | findings-naacl.157 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.158.bib | https://aclanthology.org/2024.findings-naacl.158/ | @inproceedings{shin-etal-2024-x,
title = "{X}-{LL}a{VA}: Optimizing Bilingual Large Vision-Language Alignment",
author = "Shin, DongJae and
Lim, HyeonSeok and
Won, Inho and
Choi, ChangSu and
Kim, Minjun and
Song, SeungWoo and
Yoo, HanGyeol and
Kim, SangMin and
Lim, KyungTae",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.158",
doi = "10.18653/v1/2024.findings-naacl.158",
pages = "2463--2473",
abstract = "The impressive development of large language models (LLMs) is expanding into the realm of large multimodal models (LMMs), which incorporate multiple types of data beyond text. However, the nature of multimodal models leads to significant expenses in the creation of training data. Furthermore, constructing multilingual data for LMMs presents its own set of challenges due to language diversity and complexity. Therefore, in this study, we propose two cost-effective methods to solve this problem: (1) vocabulary expansion and pretraining of multilingual LLM for specific languages, and (2) automatic and elaborate construction of multimodal datasets using GPT4-V. Based on these methods, we constructed a 91K English-Korean-Chinese multilingual, multimodal training dataset. Additionally, we developed a bilingual multimodal model that exhibits excellent performance in both Korean and English, surpassing existing approaches.",
}
| The impressive development of large language models (LLMs) is expanding into the realm of large multimodal models (LMMs), which incorporate multiple types of data beyond text. However, the nature of multimodal models leads to significant expenses in the creation of training data. Furthermore, constructing multilingual data for LMMs presents its own set of challenges due to language diversity and complexity. Therefore, in this study, we propose two cost-effective methods to solve this problem: (1) vocabulary expansion and pretraining of multilingual LLM for specific languages, and (2) automatic and elaborate construction of multimodal datasets using GPT4-V. Based on these methods, we constructed a 91K English-Korean-Chinese multilingual, multimodal training dataset. Additionally, we developed a bilingual multimodal model that exhibits excellent performance in both Korean and English, surpassing existing approaches. | [
"Shin, DongJae",
"Lim, HyeonSeok",
"Won, Inho",
"Choi, ChangSu",
"Kim, Minjun",
"Song, SeungWoo",
"Yoo, HanGyeol",
"Kim, SangMin",
"Lim, KyungTae"
] | X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment | findings-naacl.158 | Poster | 2403.11399 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.159.bib | https://aclanthology.org/2024.findings-naacl.159/ | @inproceedings{hong-etal-2024-gullible,
title = "Why So Gullible? Enhancing the Robustness of Retrieval-Augmented Models against Counterfactual Noise",
author = "Hong, Giwon and
Kim, Jeonghwan and
Kang, Junmo and
Myaeng, Sung-Hyon and
Whang, Joyce",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.159",
doi = "10.18653/v1/2024.findings-naacl.159",
pages = "2474--2495",
abstract = "Most existing retrieval-augmented language models (LMs) assume a naive dichotomy within a retrieved document set: query-relevance and irrelevance. Our work investigates a more challenging scenario in which even the {``}relevant{''} documents may contain misleading or incorrect information, causing conflict among the retrieved documents and thereby negatively influencing model decisions as noise. We observe that existing LMs are highly brittle to the presence of conflicting information in both the fine-tuning and in-context few-shot learning scenarios. We propose approaches for handling knowledge conflicts among retrieved documents by explicitly fine-tuning a discriminator or prompting GPT-3.5 to elicit its discriminative capability. Our empirical results on open-domain QA show that these approaches significantly enhance model robustness. We also provide our findings on incorporating the fine-tuned discriminator{'}s decision into the in-context learning process, proposing a way to exploit the benefits of two disparate learning schemes. Alongside our findings, we provide MacNoise, a machine-generated, conflict-induced dataset to further encourage research in this direction.",
}
| Most existing retrieval-augmented language models (LMs) assume a naive dichotomy within a retrieved document set: query-relevance and irrelevance. Our work investigates a more challenging scenario in which even the {``}relevant{''} documents may contain misleading or incorrect information, causing conflict among the retrieved documents and thereby negatively influencing model decisions as noise. We observe that existing LMs are highly brittle to the presence of conflicting information in both the fine-tuning and in-context few-shot learning scenarios. We propose approaches for handling knowledge conflicts among retrieved documents by explicitly fine-tuning a discriminator or prompting GPT-3.5 to elicit its discriminative capability. Our empirical results on open-domain QA show that these approaches significantly enhance model robustness. We also provide our findings on incorporating the fine-tuned discriminator{'}s decision into the in-context learning process, proposing a way to exploit the benefits of two disparate learning schemes. Alongside our findings, we provide MacNoise, a machine-generated, conflict-induced dataset to further encourage research in this direction. | [
"Hong, Giwon",
"Kim, Jeonghwan",
"Kang, Junmo",
"Myaeng, Sung-Hyon",
"Whang, Joyce"
] | Why So Gullible? Enhancing the Robustness of Retrieval-Augmented Models against Counterfactual Noise | findings-naacl.159 | Poster | 2305.01579 | [
"https://github.com/wjdghks950/discern-and-answer"
] | https://huggingface.co/papers/2305.01579 | 2 | 2 | 0 | 5 | 1 | [] | [] | [] |
https://aclanthology.org/2024.findings-naacl.160.bib | https://aclanthology.org/2024.findings-naacl.160/ | @inproceedings{chetia-phukan-etal-2024-heterogeneity,
title = "Heterogeneity over Homogeneity: Investigating Multilingual Speech Pre-Trained Models for Detecting Audio Deepfake",
author = "Chetia Phukan, Orchid and
Kashyap, Gautam and
Buduru, Arun Balaji and
Sharma, Rajesh",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.160",
doi = "10.18653/v1/2024.findings-naacl.160",
pages = "2496--2506",
abstract = "In this work, we investigate multilingual speech Pre-Trained models (PTMs) for Audio deepfake detection (ADD). We hypothesize thatmultilingual PTMs trained on large-scale diverse multilingual data gain knowledge about diverse pitches, accents, and tones, during theirpre-training phase and making them more robust to variations. As a result, they will be more effective for detecting audio deepfakes. To validate our hypothesis, we extract representations from state-of-the-art (SOTA) PTMs including monolingual, multilingual as well as PTMs trained for speaker and emotion recognition, and evaluated them on ASVSpoof 2019 (ASV), In-the-Wild (ITW), and DECRO benchmark databases. We show that representations from multilingual PTMs, with simple downstream networks, attain the best performance for ADD compared to other PTM representations, which validates our hypothesis. We also explore the possibility of fusion of selected PTM representations for further improvements in ADD, and we propose a framework, MiO (Merge into One) for this purpose. With MiO, we achieve SOTA performance on ASV and ITW and comparable performance on DECRO with current SOTA works.",
}
| In this work, we investigate multilingual speech Pre-Trained models (PTMs) for Audio deepfake detection (ADD). We hypothesize that multilingual PTMs trained on large-scale diverse multilingual data gain knowledge about diverse pitches, accents, and tones during their pre-training phase, making them more robust to variations. As a result, they will be more effective for detecting audio deepfakes. To validate our hypothesis, we extract representations from state-of-the-art (SOTA) PTMs, including monolingual and multilingual PTMs as well as PTMs trained for speaker and emotion recognition, and evaluate them on ASVSpoof 2019 (ASV), In-the-Wild (ITW), and DECRO benchmark databases. We show that representations from multilingual PTMs, with simple downstream networks, attain the best performance for ADD compared to other PTM representations, which validates our hypothesis. We also explore the possibility of fusing selected PTM representations for further improvements in ADD, and we propose a framework, MiO (Merge into One), for this purpose. With MiO, we achieve SOTA performance on ASV and ITW and comparable performance on DECRO with current SOTA works. | [
"Chetia Phukan, Orchid",
"Kashyap, Gautam",
"Buduru, Arun Balaji",
"Sharma, Rajesh"
] | Heterogeneity over Homogeneity: Investigating Multilingual Speech Pre-Trained Models for Detecting Audio Deepfake | findings-naacl.160 | Poster | 2404.00809 | [
"https://github.com/orchidchetiaphukan/multilingualptm_add_naacl24"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.161.bib | https://aclanthology.org/2024.findings-naacl.161/ | @inproceedings{yang-etal-2024-identifying,
title = "Identifying Self-Disclosures of Use, Misuse and Addiction in Community-based Social Media Posts",
author = "Yang, Chenghao and
Chakrabarty, Tuhin and
Hochstatter, Karli and
Slavin, Melissa and
El-Bassel, Nabila and
Muresan, Smaranda",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.161",
doi = "10.18653/v1/2024.findings-naacl.161",
pages = "2507--2521",
abstract = "In the last decade, the United States has lost more than 500,000 people from an overdose involving prescription and illicit opioids making it a national public health emergency (USDHHS, 2017). Medical practitioners require robust and timely tools that can effectively identify at-risk patients. Community-based social media platforms such as Reddit allow self-disclosure for users to discuss otherwise sensitive drug-related behaviors. We present a moderate size corpus of 2500 opioid-related posts from various subreddits labeled with six different phases of opioid use: Medical Use, Misuse, Addiction, Recovery, Relapse, Not Using. For every post, we annotate span-level extractive explanations and crucially study their role both in annotation quality and model development. We evaluate several state-of-the-art models in a supervised, few-shot, or zero-shot setting. Experimental results and error analysis show that identifying the phases of opioid use disorder is highly contextual and challenging. However, we find that using explanations during modeling leads to a significant boost in classification accuracy demonstrating their beneficial role in a high-stakes domain such as studying the opioid use disorder continuum.",
}
| In the last decade, the United States has lost more than 500,000 people from an overdose involving prescription and illicit opioids, making it a national public health emergency (USDHHS, 2017). Medical practitioners require robust and timely tools that can effectively identify at-risk patients. Community-based social media platforms such as Reddit allow self-disclosure for users to discuss otherwise sensitive drug-related behaviors. We present a moderate-sized corpus of 2500 opioid-related posts from various subreddits labeled with six different phases of opioid use: Medical Use, Misuse, Addiction, Recovery, Relapse, Not Using. For every post, we annotate span-level extractive explanations and crucially study their role both in annotation quality and model development. We evaluate several state-of-the-art models in a supervised, few-shot, or zero-shot setting. Experimental results and error analysis show that identifying the phases of opioid use disorder is highly contextual and challenging. However, we find that using explanations during modeling leads to a significant boost in classification accuracy, demonstrating their beneficial role in a high-stakes domain such as studying the opioid use disorder continuum. | [
"Yang, Chenghao",
"Chakrabarty, Tuhin",
"Hochstatter, Karli",
"Slavin, Melissa",
"El-Bassel, Nabila",
"Muresan, Smar",
"a"
] | Identifying Self-Disclosures of Use, Misuse and Addiction in Community-based Social Media Posts | findings-naacl.161 | Poster | 2311.09066 | [
"https://github.com/yangalan123/opioidid"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.162.bib | https://aclanthology.org/2024.findings-naacl.162/ | @inproceedings{han-etal-2024-self,
title = "Self-Adaptive Sampling for Accurate Video Question Answering on Image Text Models",
author = "Han, Wei and
Chen, Hui and
Kan, Min-Yen and
Poria, Soujanya",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.162",
doi = "10.18653/v1/2024.findings-naacl.162",
pages = "2522--2534",
abstract = "Image{--}text models (ITMs) is the prevalent architecture to solve video question{--}answering tasks, which requires only a few input frames to save huge computational cost compared to video{--}language models.However, we find existent ITM video question{--}answering solutions either 1) adopt simplistic and unintentional sampling strategies, which may miss key frames to offer the answer clues; or 2) sample a large number of frames into divided groups, which the computational sources can not accommodate. In this work, we aim at an efficient sampling method towards the few-frame situations.We first summarize a family of prior sampling methods based on question{--}frame correlation into a unified one, dubbed *Most Implied Frames* (MIF). Through some primary results and analysis, Through analysis, we form a hypothesis that question-aware sampling is not necessary, from which we further propose the other method *Most Dominant Frames* (MDF).Experimental results on four public datasets and three advanced ITMs demonstrate that our proposed strategies can boost the performance for image{--}text pretrained models, and have a wide application scenario in terms of model architectures and dataset types. Our code is available at https://github.com/declare-lab/Sealing\url{https://github.com/declare-lab/Sealing}.",
}
| Image{--}text models (ITMs) are the prevalent architecture for solving video question{--}answering tasks, as they require only a few input frames, saving substantial computational cost compared to video{--}language models. However, we find that existing ITM video question{--}answering solutions either 1) adopt simplistic and unintentional sampling strategies, which may miss key frames that offer answer clues; or 2) sample a large number of frames into divided groups, which available computational resources cannot accommodate. In this work, we aim at an efficient sampling method for the few-frame setting. We first summarize a family of prior sampling methods based on question{--}frame correlation into a unified one, dubbed *Most Implied Frames* (MIF). Through preliminary results and analysis, we form a hypothesis that question-aware sampling is not necessary, from which we further propose another method, *Most Dominant Frames* (MDF). Experimental results on four public datasets and three advanced ITMs demonstrate that our proposed strategies can boost the performance of image{--}text pretrained models and have wide application scenarios in terms of model architectures and dataset types. Our code is available at https://github.com/declare-lab/Sealing. | [
"Han, Wei",
"Chen, Hui",
"Kan, Min-Yen",
"Poria, Soujanya"
] | Self-Adaptive Sampling for Accurate Video Question Answering on Image Text Models | findings-naacl.162 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
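The abstract names Most Dominant Frames (MDF) without defining it; one question-agnostic reading, offered strictly as an assumption rather than the paper's rule, selects the frames most representative of the clip as a whole:

```python
import torch
import torch.nn.functional as F

def most_dominant_frames(frame_feats: torch.Tensor, k: int) -> torch.Tensor:
    """Question-agnostic frame selection sketch (an assumption, not the paper's exact rule).

    frame_feats: [num_frames, dim] features from a frozen image encoder.
    Scores each frame by its average cosine similarity to every other frame,
    then keeps the k highest-scoring frames in temporal order.
    """
    feats = F.normalize(frame_feats, dim=-1)
    dominance = (feats @ feats.T).mean(dim=-1)       # [num_frames]
    return torch.topk(dominance, k=k).indices.sort().values
```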
https://aclanthology.org/2024.findings-naacl.163.bib | https://aclanthology.org/2024.findings-naacl.163/ | @inproceedings{zhu-etal-2024-towards,
title = "Towards an On-device Agent for Text Rewriting",
author = "Zhu, Yun and
Liu, Yinxiao and
Stahlberg, Felix and
Kumar, Shankar and
Chen, Yu-Hui and
Luo, Liangchen and
Shu, Lei and
Liu, Renjie and
Chen, Jindong and
Meng, Lei",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.163",
doi = "10.18653/v1/2024.findings-naacl.163",
pages = "2535--2552",
abstract = "Large Language Models (LLMs) have demonstrated impressive capabilities for text rewriting. However creating a smaller yet potent language model for text rewriting presents two formidable challenges: costly data collection and absence of emergent capabilities.In this paper we present solutions to address the above challenges.We propose an new instruction tuning method to develop a mo-bile text rewriting model that leverages LLM-generated data and heuristic reinforcement learning, eliminating the need for human data collection. Moreover, to bridge the performance gap from the constraint size, we pro-pose a cascading approach based on the confidence levels which are distilled from the large server model{'}s critiques. To evaluate the text rewriting tasks for mobile scenarios, we introduce MessageRewriteEval, a human-labeled benchmark that focuses on text rewriting of messages through natural language instructions. Through empirical experiments, we demonstrate that our on-device model surpasses the current state-of-the-art LLMs in text rewriting while maintaining a significantly reduced model size using public benchmark EditEval and our new benchmark. We also demonstrate that our proposed cascading approach improves model performance further.",
}
| Large Language Models (LLMs) have demonstrated impressive capabilities for text rewriting. However, creating a smaller yet potent language model for text rewriting presents two formidable challenges: costly data collection and the absence of emergent capabilities. In this paper, we present solutions to address these challenges. We propose a new instruction tuning method to develop a mobile text rewriting model that leverages LLM-generated data and heuristic reinforcement learning, eliminating the need for human data collection. Moreover, to bridge the performance gap caused by the constrained model size, we propose a cascading approach based on confidence levels distilled from the large server model{'}s critiques. To evaluate text rewriting for mobile scenarios, we introduce MessageRewriteEval, a human-labeled benchmark that focuses on text rewriting of messages through natural language instructions. Through empirical experiments on the public benchmark EditEval and our new benchmark, we demonstrate that our on-device model surpasses current state-of-the-art LLMs in text rewriting while maintaining a significantly reduced model size. We also demonstrate that our proposed cascading approach further improves model performance. | [
"Zhu, Yun",
"Liu, Yinxiao",
"Stahlberg, Felix",
"Kumar, Shankar",
"Chen, Yu-Hui",
"Luo, Liangchen",
"Shu, Lei",
"Liu, Renjie",
"Chen, Jindong",
"Meng, Lei"
] | Towards an On-device Agent for Text Rewriting | findings-naacl.163 | Poster | 2308.11807 | [
""
] | https://huggingface.co/papers/2308.11807 | 0 | 0 | 0 | 10 | 1 | [] | [] | [] |
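The on-device rewriting row describes cascading to a large server model when the small model's confidence is low. A minimal sketch of such a router; the threshold and the two model interfaces are assumptions:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CascadeRewriter:
    """Serve from the on-device model unless its confidence falls below a cutoff."""
    on_device: Callable[[str], tuple[str, float]]  # returns (rewrite, confidence in [0, 1])
    server: Callable[[str], str]                   # large server-side LLM fallback
    threshold: float = 0.8                         # assumed cutoff, tuned on a dev set

    def rewrite(self, request: str) -> str:
        draft, confidence = self.on_device(request)
        if confidence >= self.threshold:
            return draft                           # fast path, no network round-trip
        return self.server(request)                # escalate hard cases
```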
https://aclanthology.org/2024.findings-naacl.164.bib | https://aclanthology.org/2024.findings-naacl.164/ | @inproceedings{stureborg-etal-2024-tailoring,
title = "Tailoring Vaccine Messaging with Common-Ground Opinions",
author = "Stureborg, Rickard and
Chen, Sanxing and
Xie, Roy and
Patel, Aayushi and
Li, Christopher and
Zhu, Chloe and
Hu, Tingnan and
Yang, Jun and
Dhingra, Bhuwan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.164",
doi = "10.18653/v1/2024.findings-naacl.164",
pages = "2553--2575",
abstract = "One way to personalize chatbot interactions is by establishing common ground with the intended reader. A domain where establishing mutual understanding could be particularly impactful is vaccine concerns and misinformation. Vaccine interventions are forms of messaging which aim to answer concerns expressed about vaccination. Tailoring responses in this domain is difficult, since opinions often have seemingly little ideological overlap. We define the task of tailoring vaccine interventions to a Common-Ground Opinion (CGO). Tailoring responses to a CGO involves meaningfully improving the answer by relating it to an opinion or belief the reader holds. In this paper we introduce Tailor-CGO, a dataset for evaluating how well responses are tailored to provided CGOs. We benchmark several major LLMs on this task; finding GPT-4-Turbo performs significantly better than others. We also build automatic evaluation metrics, including an efficient and accurate BERT model that outperforms finetuned LLMs, investigate how to successfully tailor vaccine messaging to CGOs, and provide actionable recommendations from this investigation.Tailor-CGO dataset and code available at: https://github.com/rickardstureborg/tailor-cgo",
}
| One way to personalize chatbot interactions is by establishing common ground with the intended reader. A domain where establishing mutual understanding could be particularly impactful is vaccine concerns and misinformation. Vaccine interventions are forms of messaging which aim to answer concerns expressed about vaccination. Tailoring responses in this domain is difficult, since opinions often have seemingly little ideological overlap. We define the task of tailoring vaccine interventions to a Common-Ground Opinion (CGO). Tailoring responses to a CGO involves meaningfully improving the answer by relating it to an opinion or belief the reader holds. In this paper we introduce Tailor-CGO, a dataset for evaluating how well responses are tailored to provided CGOs. We benchmark several major LLMs on this task, finding that GPT-4-Turbo performs significantly better than others. We also build automatic evaluation metrics, including an efficient and accurate BERT model that outperforms finetuned LLMs, investigate how to successfully tailor vaccine messaging to CGOs, and provide actionable recommendations from this investigation. The Tailor-CGO dataset and code are available at: https://github.com/rickardstureborg/tailor-cgo | [
"Stureborg, Rickard",
"Chen, Sanxing",
"Xie, Roy",
"Patel, Aayushi",
"Li, Christopher",
"Zhu, Chloe",
"Hu, Tingnan",
"Yang, Jun",
"Dhingra, Bhuwan"
] | Tailoring Vaccine Messaging with Common-Ground Opinions | findings-naacl.164 | Poster | 2405.10861 | [
"https://github.com/rickardstureborg/tailor-cgo"
] | https://huggingface.co/papers/2405.10861 | 0 | 0 | 0 | 9 | 1 | [] | [
"DukeNLP/tailor-cgo"
] | [] |
https://aclanthology.org/2024.findings-naacl.165.bib | https://aclanthology.org/2024.findings-naacl.165/ | @inproceedings{vacareanu-etal-2024-best,
title = "Best of Both Worlds: A Pliable and Generalizable Neuro-Symbolic Approach for Relation Classification",
author = "Vacareanu, Robert and
Alam, Fahmida and
Islam, Md Asiful and
Riaz, Haris and
Surdeanu, Mihai",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.165",
doi = "10.18653/v1/2024.findings-naacl.165",
pages = "2576--2594",
abstract = "This paper introduces a novel neuro-symbolic architecture for relation classification (RC) that combines rule-based methods with contemporary deep learning techniques. This approach capitalizes on the strengths of both paradigms: the adaptability of rule-based systems and the generalization power of neural networks. Our architecture consists of two components: a declarative rule-based model for transparent classification and a neural component to enhance rule generalizability through semantic text matching.Notably, our semantic matcher is trained in an unsupervised domain-agnostic way, solely with synthetic data.Further, these components are loosely coupled, allowing for rule modifications without retraining the semantic matcher.In our evaluation, we focused on two few-shot relation classification datasets: Few-Shot TACRED and a Few-Shot version of NYT29. We show that our proposed method outperforms previous state-of-the-art models in three out of four settings, despite not seeing any human-annotated training data.Further, we show that our approach remains modular and pliable, i.e., the corresponding rules can be locally modified to improve the overall model. Human interventions to the rules for the TACRED relation org:parents boost the performance on that relation by as much as 26{\%} relative improvement, without negatively impacting the other relations, and without retraining the semantic matching component.",
}
| This paper introduces a novel neuro-symbolic architecture for relation classification (RC) that combines rule-based methods with contemporary deep learning techniques. This approach capitalizes on the strengths of both paradigms: the adaptability of rule-based systems and the generalization power of neural networks. Our architecture consists of two components: a declarative rule-based model for transparent classification and a neural component to enhance rule generalizability through semantic text matching. Notably, our semantic matcher is trained in an unsupervised domain-agnostic way, solely with synthetic data. Further, these components are loosely coupled, allowing for rule modifications without retraining the semantic matcher. In our evaluation, we focused on two few-shot relation classification datasets: Few-Shot TACRED and a Few-Shot version of NYT29. We show that our proposed method outperforms previous state-of-the-art models in three out of four settings, despite not seeing any human-annotated training data. Further, we show that our approach remains modular and pliable, i.e., the corresponding rules can be locally modified to improve the overall model. Human interventions to the rules for the TACRED relation org:parents boost the performance on that relation by as much as 26{\%} relative improvement, without negatively impacting the other relations, and without retraining the semantic matching component. | [
"Vacareanu, Robert",
"Alam, Fahmida",
"Islam, Md Asiful",
"Riaz, Haris",
"Surdeanu, Mihai"
] | Best of Both Worlds: A Pliable and Generalizable Neuro-Symbolic Approach for Relation Classification | findings-naacl.165 | Poster | 2403.03305 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.166.bib | https://aclanthology.org/2024.findings-naacl.166/ | @inproceedings{guo-etal-2024-q,
title = "{Q}-Tuning: Queue-based Prompt Tuning for Lifelong Few-shot Language Learning",
author = "Guo, Yanhui and
Xu, Shaoyuan and
Fu, Jinmiao and
Liu, Jia and
Dong, Chaosheng and
Wang, Bryan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.166",
doi = "10.18653/v1/2024.findings-naacl.166",
pages = "2595--2622",
abstract = "This paper introduces Q-tuning, a novel approach for continual prompt tuning that enables the lifelong learning of a pre-trained language model. When learning a new task, Q-tuning trains a task-specific prompt by adding it to a prompt queue consisting of the prompts from older tasks. To better transfer the knowledge of old tasks, we design an adaptive knowledge aggregation technique that reweighs previous prompts in the queue with a learnable low-rank matrix. Once the prompt queue reaches its maximum capacity, we leverage a PCA-based eviction rule to reduce the queue{'}s size, allowing the newly trained prompt to be added while preserving the primary knowledge of old tasks. In order to mitigate the accumulation of information loss caused by the eviction, we additionally propose a globally shared prefix prompt and a memory retention regularization based on information theory. Extensive experiments demonstrate that our approach outperforms the state-of-the-art methods substantially on continual prompt tuning benchmarks. Moreover, our approach enables lifelong learning on linearly growing task sequences while requiring constant complexity for training and inference.",
}
| This paper introduces Q-tuning, a novel approach for continual prompt tuning that enables the lifelong learning of a pre-trained language model. When learning a new task, Q-tuning trains a task-specific prompt by adding it to a prompt queue consisting of the prompts from older tasks. To better transfer the knowledge of old tasks, we design an adaptive knowledge aggregation technique that reweighs previous prompts in the queue with a learnable low-rank matrix. Once the prompt queue reaches its maximum capacity, we leverage a PCA-based eviction rule to reduce the queue{'}s size, allowing the newly trained prompt to be added while preserving the primary knowledge of old tasks. In order to mitigate the accumulation of information loss caused by the eviction, we additionally propose a globally shared prefix prompt and a memory retention regularization based on information theory. Extensive experiments demonstrate that our approach outperforms the state-of-the-art methods substantially on continual prompt tuning benchmarks. Moreover, our approach enables lifelong learning on linearly growing task sequences while requiring constant complexity for training and inference. | [
"Guo, Yanhui",
"Xu, Shaoyuan",
"Fu, Jinmiao",
"Liu, Jia",
"Dong, Chaosheng",
"Wang, Bryan"
] | Q-Tuning: Queue-based Prompt Tuning for Lifelong Few-shot Language Learning | findings-naacl.166 | Poster | 2404.14607 | [
""
] | https://huggingface.co/papers/2404.14607 | 0 | 0 | 0 | 6 | 1 | [] | [] | [] |
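Q-tuning keeps a queue of task prompts and applies a PCA-based eviction rule when the queue is full. The sketch below is one plausible shape for such a queue; the capacity, the number of retained components, and the eviction details are assumptions, not the paper's exact recipe:

```python
import torch

class PromptQueue:
    """Capacity-bounded prompt queue with a PCA-style eviction sketch.

    Each prompt is a [prompt_len, dim] tensor. On overflow, the stacked old
    prompts are replaced by their top principal directions, keeping a short
    summary prompt that preserves the primary knowledge of old tasks.
    """
    def __init__(self, capacity: int, keep_components: int):
        self.capacity = capacity
        self.keep = keep_components
        self.prompts: list[torch.Tensor] = []

    def add(self, prompt: torch.Tensor) -> None:
        if len(self.prompts) >= self.capacity:
            stacked = torch.cat(self.prompts, dim=0)                 # [total_len, dim]
            centered = stacked - stacked.mean(dim=0, keepdim=True)
            _, S, Vh = torch.linalg.svd(centered, full_matrices=False)
            summary = torch.diag(S[: self.keep]) @ Vh[: self.keep]   # [keep, dim]
            self.prompts = [summary]
        self.prompts.append(prompt)

    def as_prefix(self) -> torch.Tensor:
        """Concatenated prompts, prepended to the frozen LM's input embeddings."""
        return torch.cat(self.prompts, dim=0)
```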
https://aclanthology.org/2024.findings-naacl.167.bib | https://aclanthology.org/2024.findings-naacl.167/ | @inproceedings{xu-etal-2024-context,
title = "In-Context Example Ordering Guided by Label Distributions",
author = "Xu, Zhichao and
Cohen, Daniel and
Wang, Bei and
Srikumar, Vivek",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.167",
doi = "10.18653/v1/2024.findings-naacl.167",
pages = "2623--2640",
abstract = "By allowing models to predict without task-specific training, in-context learning (ICL) with pretrained LLMs has enormous potential in NLP. However, a number of problems persist in ICL. In particular, its performance is sensitive to the choice and order of in-context examples. Given the same set of in-context examples with different orderings, model performance may vary from near random to near state-of-the-art. In this work, we formulate in-context example ordering as an optimization problem. We examine three problem settings that differ in the assumptions they make about what is known about the task. Inspired by the idea of learning from label proportions, we propose two principles for in-context example ordering guided by model{'}s probability predictions. We apply our proposed principles to thirteen text classification datasets and nine different autoregressive LLMs with 700M to 13B parameters. We demonstrate that our approach outperforms the baselines by improving the classification accuracy, reducing model miscalibration, and also by selecting better in-context examples.",
}
| By allowing models to predict without task-specific training, in-context learning (ICL) with pretrained LLMs has enormous potential in NLP. However, a number of problems persist in ICL. In particular, its performance is sensitive to the choice and order of in-context examples. Given the same set of in-context examples with different orderings, model performance may vary from near random to near state-of-the-art. In this work, we formulate in-context example ordering as an optimization problem. We examine three problem settings that differ in the assumptions they make about what is known about the task. Inspired by the idea of learning from label proportions, we propose two principles for in-context example ordering guided by model{'}s probability predictions. We apply our proposed principles to thirteen text classification datasets and nine different autoregressive LLMs with 700M to 13B parameters. We demonstrate that our approach outperforms the baselines by improving the classification accuracy, reducing model miscalibration, and also by selecting better in-context examples. | [
"Xu, Zhichao",
"Cohen, Daniel",
"Wang, Bei",
"Srikumar, Vivek"
] | In-Context Example Ordering Guided by Label Distributions | findings-naacl.167 | Poster | 2402.11447 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
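The ordering paper proposes principles guided by the model's probability predictions; as one hedged reading only, the sketch below prefers the ordering whose predicted label distribution on a content-free probe is least skewed. The probe string and scoring rule are assumptions, not the paper's exact principles:

```python
import itertools
import math
from typing import Callable

def entropy(probs: dict[str, float]) -> float:
    return -sum(p * math.log(p) for p in probs.values() if p > 0)

def best_ordering(examples: list[tuple[str, str]],
                  label_probs_fn: Callable[[str], dict[str, float]],
                  max_orderings: int = 24) -> list[tuple[str, str]]:
    """Pick the in-context ordering with the least skewed label distribution.

    label_probs_fn(prompt) is assumed to return {label: probability} from the
    LM's next-token distribution; wiring that up is model-specific.
    """
    best, best_h = None, -1.0
    for perm in itertools.islice(itertools.permutations(examples), max_orderings):
        prompt = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in perm)
        prompt += "\nInput: N/A\nLabel:"           # content-free probe input
        h = entropy(label_probs_fn(prompt))
        if h > best_h:                             # higher entropy = less label bias
            best, best_h = perm, h
    return list(best)
```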
https://aclanthology.org/2024.findings-naacl.168.bib | https://aclanthology.org/2024.findings-naacl.168/ | @inproceedings{liu-etal-2024-beyond,
title = "Beyond Surface Similarity: Detecting Subtle Semantic Shifts in Financial Narratives",
author = "Liu, Jiaxin and
Yang, Yi and
Tam, Kar Yan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.168",
doi = "10.18653/v1/2024.findings-naacl.168",
pages = "2641--2652",
abstract = "In this paper, we introduce the Financial-STS task, a financial domain-specific NLP task designed to measure the nuanced semantic similarity between pairs of financial narratives. These narratives originate from the financial statements of the same company but correspond to different periods, such as year-over-year comparisons. Measuring the subtle semantic differences between these paired narratives enables market stakeholders to gauge changes over time in the company{'}s financial and operational situations, which is critical for financial decision-making. We find that existing pretrained embedding models and LLM embeddings fall short in discerning these subtle financial narrative shifts. To address this gap, we propose an LLM-augmented pipeline specifically designed for the Financial-STS task. Evaluation on a human-annotated dataset demonstrates that our proposed method outperforms existing methods trained on classic STS tasks and generic LLM embeddings.",
}
| In this paper, we introduce the Financial-STS task, a financial domain-specific NLP task designed to measure the nuanced semantic similarity between pairs of financial narratives. These narratives originate from the financial statements of the same company but correspond to different periods, such as year-over-year comparisons. Measuring the subtle semantic differences between these paired narratives enables market stakeholders to gauge changes over time in the company{'}s financial and operational situations, which is critical for financial decision-making. We find that existing pretrained embedding models and LLM embeddings fall short in discerning these subtle financial narrative shifts. To address this gap, we propose an LLM-augmented pipeline specifically designed for the Financial-STS task. Evaluation on a human-annotated dataset demonstrates that our proposed method outperforms existing methods trained on classic STS tasks and generic LLM embeddings. | [
"Liu, Jiaxin",
"Yang, Yi",
"Tam, Kar Yan"
] | Beyond Surface Similarity: Detecting Subtle Semantic Shifts in Financial Narratives | findings-naacl.168 | Poster | 2403.14341 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.169.bib | https://aclanthology.org/2024.findings-naacl.169/ | @inproceedings{sharma-etal-2024-laying,
title = "Laying Anchors: Semantically Priming Numerals in Language Modeling",
author = "Sharma, Mandar and
Taware, Rutuja and
Koirala, Pravesh and
Muralidhar, Nikhil and
Ramakrishnan, Naren",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.169",
doi = "10.18653/v1/2024.findings-naacl.169",
pages = "2653--2660",
abstract = "Off-the-shelf pre-trained language models have become the de facto standard in NLP pipelines for a multitude of downstream tasks. However, the inability of these models to properly encode numerals limits their performance on tasks requiring numeric comprehension. We introduce strategies to semantically prime numerals in any corpus by generating anchors governed by the distribution of numerals in said corpus, thereby enabling mathematically grounded representations of these numeral tokens. We establish the superiority of our proposed techniques through evaluation on a range of numeracy tasks for both in-domain (seen) and out-domain (unseen) numerals. Further, we expand our empirical evaluations to numerals ranging from 1 to 10 billion, a significantly broader range compared to previous studies of the same nature, and we demonstrate significant improvements in the mathematical grounding of our learned embeddings.",
}
| Off-the-shelf pre-trained language models have become the de facto standard in NLP pipelines for a multitude of downstream tasks. However, the inability of these models to properly encode numerals limits their performance on tasks requiring numeric comprehension. We introduce strategies to semantically prime numerals in any corpus by generating anchors governed by the distribution of numerals in said corpus, thereby enabling mathematically grounded representations of these numeral tokens. We establish the superiority of our proposed techniques through evaluation on a range of numeracy tasks for both in-domain (seen) and out-domain (unseen) numerals. Further, we expand our empirical evaluations to numerals ranging from 1 to 10 billion, a significantly broader range compared to previous studies of the same nature, and we demonstrate significant improvements in the mathematical grounding of our learned embeddings. | [
"Sharma, M",
"ar",
"Taware, Rutuja",
"Koirala, Pravesh",
"Muralidhar, Nikhil",
"Ramakrishnan, Naren"
] | Laying Anchors: Semantically Priming Numerals in Language Modeling | findings-naacl.169 | Poster | 2404.01536 | [
"https://github.com/mandar-sharma/laying-anchors"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
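The anchors paper generates anchors governed by the corpus's numeral distribution; purely as an assumed illustration, the sketch below takes quantiles of log-magnitudes as anchors and primes each numeral with its nearest anchor. The `[ANCHOR ...]` priming format is invented here, not the paper's:

```python
import numpy as np

def build_anchors(corpus_numerals: list[float], num_anchors: int = 10) -> np.ndarray:
    """Anchors as quantiles of log10 magnitudes, an assumed stand-in for the
    paper's distribution-governed anchor generation."""
    logs = np.log10(np.abs(np.asarray(corpus_numerals)) + 1.0)
    qs = np.linspace(0.0, 1.0, num_anchors)
    return 10.0 ** np.quantile(logs, qs) - 1.0

def prime_numeral(value: float, anchors: np.ndarray) -> str:
    """Prefix a numeral with its nearest anchor as a grounded scale cue."""
    nearest = anchors[np.argmin(np.abs(anchors - value))]
    return f"[ANCHOR {nearest:.0f}] {value}"
```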
https://aclanthology.org/2024.findings-naacl.170.bib | https://aclanthology.org/2024.findings-naacl.170/ | @inproceedings{mou-etal-2024-uegp,
title = "{UEGP}: Unified Expert-Guided Pre-training for Knowledge Rekindle",
author = "Mou, Yutao and
Wang, Kexiang and
Lin, Jianhe and
Ma, Dehong and
Fan, Jun and
Shi, Daiting and
Cheng, Zhicong and
Simiu, Gu and
Yin, Dawei and
Xu, Weiran",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.170",
doi = "10.18653/v1/2024.findings-naacl.170",
pages = "2661--2673",
abstract = "Pre-training and fine-tuning framework has become the standard training paradigm for NLP tasks and is also widely used in industrial-level applications. However, there are still a limitation with this paradigm: simply fine-tuning with task-specific objectives tends to converge to local minima, resulting in a sub-optimal performance. In this paper, we first propose a new paradigm: knowledge rekindle, which aims to re-incorporate the fine-tuned expert model into the training cycle and break through the performance upper bounds of experts without introducing additional annotated data. Then we further propose a unified expert-guided pre-training (UEGP) framework for knowledge rekindle. Specifically, we reuse fine-tuned expert models for various downstream tasks as knowledge sources and inject task-specific prior knowledge to pre-trained language models (PLMs) by means of knowledge distillation. In this process, we perform multi-task learning with knowledge distillation and masked language modeling (MLM) objectives. We also further explored whether mixture-of-expert guided pre-training (MoEGP) can further enhance the effect of knowledge rekindle. Experiments and analysis on eight datasets in GLUE benchmark and a industrial-level search re-ranking dataset show the effectiveness of our method.",
}
| The pre-training and fine-tuning framework has become the standard training paradigm for NLP tasks and is also widely used in industrial-level applications. However, there is still a limitation with this paradigm: simply fine-tuning with task-specific objectives tends to converge to local minima, resulting in sub-optimal performance. In this paper, we first propose a new paradigm, knowledge rekindle, which aims to re-incorporate the fine-tuned expert model into the training cycle and break through the performance upper bounds of experts without introducing additional annotated data. We then propose a unified expert-guided pre-training (UEGP) framework for knowledge rekindle. Specifically, we reuse fine-tuned expert models for various downstream tasks as knowledge sources and inject task-specific prior knowledge into pre-trained language models (PLMs) by means of knowledge distillation. In this process, we perform multi-task learning with knowledge distillation and masked language modeling (MLM) objectives. We also explore whether mixture-of-expert guided pre-training (MoEGP) can further enhance the effect of knowledge rekindle. Experiments and analysis on eight datasets in the GLUE benchmark and an industrial-level search re-ranking dataset show the effectiveness of our method. | [
"Mou, Yutao",
"Wang, Kexiang",
"Lin, Jianhe",
"Ma, Dehong",
"Fan, Jun",
"Shi, Daiting",
"Cheng, Zhicong",
"Simiu, Gu",
"Yin, Dawei",
"Xu, Weiran"
] | UEGP: Unified Expert-Guided Pre-training for Knowledge Rekindle | findings-naacl.170 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
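UEGP performs multi-task learning with knowledge distillation and MLM objectives. A minimal sketch of that joint loss; the temperature and weighting are assumed hyperparameters:

```python
import torch
import torch.nn.functional as F

def uegp_loss(student_task_logits: torch.Tensor,
              expert_task_logits: torch.Tensor,
              mlm_logits: torch.Tensor,
              mlm_labels: torch.Tensor,
              temperature: float = 2.0,
              kd_weight: float = 0.5) -> torch.Tensor:
    """Distill the frozen expert's task predictions while keeping the MLM objective.

    mlm_labels uses -100 at unmasked positions, the usual masked-LM convention.
    """
    kd = F.kl_div(
        F.log_softmax(student_task_logits / temperature, dim=-1),
        F.softmax(expert_task_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2                      # standard KD temperature scaling
    mlm = F.cross_entropy(
        mlm_logits.view(-1, mlm_logits.size(-1)),
        mlm_labels.view(-1),
        ignore_index=-100,
    )
    return kd_weight * kd + (1.0 - kd_weight) * mlm
```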
https://aclanthology.org/2024.findings-naacl.171.bib | https://aclanthology.org/2024.findings-naacl.171/ | @inproceedings{zhang-etal-2024-latticegen,
title = "{L}attice{G}en: Hiding Generated Text in a Lattice for Privacy-Aware Large Language Model Generation on Cloud",
author = "Zhang, Mengke and
He, Tianxing and
Wang, Tianle and
Mi, Lu and
Mireshghallah, Niloofar and
Chen, Binyi and
Wang, Hao and
Tsvetkov, Yulia",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.171",
doi = "10.18653/v1/2024.findings-naacl.171",
pages = "2674--2690",
abstract = "In the current user-server interaction paradigm of prompted generation with large language models (LLMs) on cloud, the server fully controls the generation process, which leaves zero options for users who want to keep the generated text private to themselves. For privacy-aware text generation on cloud, we propose LatticeGen, a cooperative protocol in which the server still handles most of the computation while the client controls the sampling operation. The key idea is that the true generated sequence is mixed with noise tokens by the client and hidden in a noised lattice. Only the client knows which tokens are the true ones. Considering potential attacks from a hypothetically malicious server and how the client can defend against it, we propose the repeated beam-search attack and the mixing noise scheme. In our experiments we apply LatticeGen to protect both prompt and generation. It is shown that while the noised lattice degrades generation quality, LatticeGen successfully protects the true generation to a remarkable degree under strong attacks (more than 50{\%} of the semantic remains hidden as measured by BERTScore).",
}
| In the current user-server interaction paradigm of prompted generation with large language models (LLMs) on cloud, the server fully controls the generation process, which leaves zero options for users who want to keep the generated text private to themselves. For privacy-aware text generation on cloud, we propose LatticeGen, a cooperative protocol in which the server still handles most of the computation while the client controls the sampling operation. The key idea is that the true generated sequence is mixed with noise tokens by the client and hidden in a noised lattice. Only the client knows which tokens are the true ones. Considering potential attacks from a hypothetically malicious server and how the client can defend against them, we propose the repeated beam-search attack and the mixing noise scheme. In our experiments we apply LatticeGen to protect both prompt and generation. It is shown that while the noised lattice degrades generation quality, LatticeGen successfully protects the true generation to a remarkable degree under strong attacks (more than 50{\%} of the semantics remains hidden as measured by BERTScore). | [
"Zhang, Mengke",
"He, Tianxing",
"Wang, Tianle",
"Mi, Lu",
"Mireshghallah, Niloofar",
"Chen, Binyi",
"Wang, Hao",
"Tsvetkov, Yulia"
] | LatticeGen: Hiding Generated Text in a Lattice for Privacy-Aware Large Language Model Generation on Cloud | findings-naacl.171 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
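LatticeGen's client hides the true token among noise tokens at every generation step. A minimal sketch of one client-side lattice step; uniform noise is a deliberate simplification of the paper's stronger mixing-noise scheme:

```python
import random

def client_step(true_token: int, vocab_size: int, width: int = 3,
                rng=random) -> tuple[list[int], int]:
    """Build one lattice column with the true token hidden among noise tokens.

    Returns the shuffled column sent to the server and the position of the
    true token, which only the client records (the server never learns it).
    """
    column = [true_token]
    while len(column) < width:
        noise = rng.randrange(vocab_size)
        if noise not in column:               # keep tokens within a column distinct
            column.append(noise)
    rng.shuffle(column)
    return column, column.index(true_token)
```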
https://aclanthology.org/2024.findings-naacl.172.bib | https://aclanthology.org/2024.findings-naacl.172/ | @inproceedings{zheng-etal-2024-hatemoderate,
title = "{H}ate{M}oderate: Testing Hate Speech Detectors against Content Moderation Policies",
author = "Zheng, Jiangrui and
Liu, Xueqing and
Haque, Mirazul and
Qian, Xing and
Yang, Guanqun and
Yang, Wei",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.172",
doi = "10.18653/v1/2024.findings-naacl.172",
pages = "2691--2710",
abstract = "To protect users from massive hateful content, existing works studied automated hate speech detection. Despite the existing efforts, one question remains: Do automated hate speech detectors conform to social media content policies? A platform{'}s content policies are a checklist of content moderated by the social media platform. Because content moderation rules are often uniquely defined, existing hate speech datasets cannot directly answer this question. This work seeks to answer this question by creating HateModerate, a dataset for testing the behaviors of automated content moderators against content policies. First, we engage 28 annotators and GPT in a six-step annotation process, resulting in a list of hateful and non-hateful test suites matching each of Facebook{'}s 41 hate speech policies. Second, we test the performance of state-of-the-art hate speech detectors against HateModerate, revealing substantial failures these models have in their conformity to the policies. Third, using HateModerate, we augment the training data of a top-downloaded hate detector on HuggingFace. We observe significant improvement in the models{'} conformity to content policies while having comparable scores on the original test data. Our dataset and code can be found on https://github.com/stevens-textmining/HateModerate.",
}
| To protect users from massive hateful content, existing works studied automated hate speech detection. Despite the existing efforts, one question remains: Do automated hate speech detectors conform to social media content policies? A platform{'}s content policies are a checklist of content moderated by the social media platform. Because content moderation rules are often uniquely defined, existing hate speech datasets cannot directly answer this question. This work seeks to answer this question by creating HateModerate, a dataset for testing the behaviors of automated content moderators against content policies. First, we engage 28 annotators and GPT in a six-step annotation process, resulting in a list of hateful and non-hateful test suites matching each of Facebook{'}s 41 hate speech policies. Second, we test the performance of state-of-the-art hate speech detectors against HateModerate, revealing substantial failures these models have in their conformity to the policies. Third, using HateModerate, we augment the training data of a top-downloaded hate detector on HuggingFace. We observe significant improvement in the models{'} conformity to content policies while having comparable scores on the original test data. Our dataset and code can be found on https://github.com/stevens-textmining/HateModerate. | [
"Zheng, Jiangrui",
"Liu, Xueqing",
"Haque, Mirazul",
"Qian, Xing",
"Yang, Guanqun",
"Yang, Wei"
] | HateModerate: Testing Hate Speech Detectors against Content Moderation Policies | findings-naacl.172 | Poster | 2307.12418 | [
"https://github.com/stevens-textmining/hatemoderate"
] | https://huggingface.co/papers/2307.12418 | 0 | 0 | 0 | 5 | 1 | [] | [] | [] |
https://aclanthology.org/2024.findings-naacl.173.bib | https://aclanthology.org/2024.findings-naacl.173/ | @inproceedings{gao-etal-2024-compensate,
title = "Compensate Quantization Errors: Make Weights Hierarchical to Compensate Each Other",
author = "Gao, Yifei and
Ou, Jie and
Wang, Lei and
Xiao, Yuting and
Xiangzhiyuan, Xiangzhiyuan and
Dai, Ruiting and
Cheng, Jun",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.173",
doi = "10.18653/v1/2024.findings-naacl.173",
pages = "2711--2722",
abstract = "Emergent Large Language Models (LLMs) use their extraordinary performance and powerful deduction capacity to discern from traditional language models. However, the expenses of computational resources and storage for these LLMs are stunning, quantization then arises as a trending conversation. To address accuracy decay caused by quantization, two streams of works in post-training quantization methods stand out. One uses other weights to compensate existing quantization error, while the other transfers the quantization difficulty to other parts in the model. Combining both merits, we introduce Learnable Singular value Increment (LSI) as an advanced solution. LSI uses Singular Value Decomposition to extract singular values of the weights and make them learnable to help weights compensate each other conditioned on activation. Incorporating LSI with existing techniques, we achieve state-of-the-art performance in diverse quantization settings, no matter in weight-only, weight-activation or extremely low bit scenarios. By unleashing the potential of LSI, efficient finetuning on quantized model is no longer a prohibitive problem.",
}
| Emergent Large Language Models (LLMs) distinguish themselves from traditional language models through their extraordinary performance and powerful deduction capacity. However, the computational and storage costs of these LLMs are stunning, so quantization has become a trending topic. To address the accuracy decay caused by quantization, two streams of work on post-training quantization stand out. One uses other weights to compensate for the existing quantization error, while the other transfers the quantization difficulty to other parts of the model. Combining both merits, we introduce Learnable Singular value Increment (LSI) as an advanced solution. LSI uses Singular Value Decomposition to extract the singular values of the weights and makes them learnable, helping the weights compensate for each other conditioned on activations. Incorporating LSI with existing techniques, we achieve state-of-the-art performance in diverse quantization settings, whether in weight-only, weight-activation, or extremely low-bit scenarios. By unleashing the potential of LSI, efficient finetuning of quantized models is no longer a prohibitive problem. | [
"Gao, Yifei",
"Ou, Jie",
"Wang, Lei",
"Xiao, Yuting",
"Xiangzhiyuan, Xiangzhiyuan",
"Dai, Ruiting",
"Cheng, Jun"
] | Compensate Quantization Errors: Make Weights Hierarchical to Compensate Each Other | findings-naacl.173 | Poster | 2406.16299 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
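The LSI abstract above stays at the level of words; a minimal sketch of its core mechanism — an SVD of a (quantized) weight with a learnable, zero-initialized increment on the singular values — might look as follows. The class name, the plain linear forward, and the omission of activation conditioning are all assumptions for illustration, not the authors' released implementation.

```python
import torch

class LSILinear(torch.nn.Module):
    """Sketch of a Learnable Singular value Increment on a quantized weight."""

    def __init__(self, weight: torch.Tensor):
        super().__init__()
        # Decompose the dequantized weight once: W = U @ diag(S) @ Vh.
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("U", U)
        self.register_buffer("S", S)
        self.register_buffer("Vh", Vh)
        # Learnable increment, zero-initialized so training starts exactly
        # from the quantized weight and only adjusts the singular spectrum.
        self.delta = torch.nn.Parameter(torch.zeros_like(S))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        W = self.U @ torch.diag(self.S + self.delta) @ self.Vh
        return x @ W.T

layer = LSILinear(torch.randn(64, 128))   # stands in for a quantized weight
print(layer(torch.randn(4, 128)).shape)   # torch.Size([4, 64])
```

Because every weight entry mixes all singular components, nudging a few singular values lets many weights compensate for each other at once, which matches the compensation intuition in the abstract.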
https://aclanthology.org/2024.findings-naacl.174.bib | https://aclanthology.org/2024.findings-naacl.174/ | @inproceedings{he-etal-2024-contrastive,
title = "Contrastive Preference Learning for Neural Machine Translation",
author = "He, Jianfei and
Sun, Shichao and
Peng, Sen and
Xu, Jie and
Jia, Xiaohua and
Li, Wenjie",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.174",
doi = "10.18653/v1/2024.findings-naacl.174",
pages = "2723--2735",
abstract = "There exists a discrepancy between the token-level objective during training and the overall sequence-level quality that is expected from the model. This discrepancy leads to issues like exposure bias.To align the model with human expectations, sequence-level objectives are often used to fine-tune pre-trained models.In this paper, we introduce a contrastive preference model that enhances the traditional Plackett-Luce model by incorporating an indicator function. Building upon this novel preference model, we propose Contrastive Preference Learning (CPL), which uses offline samples with list-wise preferences to fine-tune a pre-trained model in Neural Machine Translation. Our experiments, conducted on three language pairs, demonstrate that CPL outperforms not only the vanilla Transformer model but also other token-level and sequence-level baselines. Furthermore, the ablation study highlights the essential role of the proposed indicator function in achieving this improvement.",
}
| There exists a discrepancy between the token-level objective during training and the overall sequence-level quality that is expected from the model. This discrepancy leads to issues like exposure bias. To align the model with human expectations, sequence-level objectives are often used to fine-tune pre-trained models. In this paper, we introduce a contrastive preference model that enhances the traditional Plackett-Luce model by incorporating an indicator function. Building upon this novel preference model, we propose Contrastive Preference Learning (CPL), which uses offline samples with list-wise preferences to fine-tune a pre-trained model in Neural Machine Translation. Our experiments, conducted on three language pairs, demonstrate that CPL outperforms not only the vanilla Transformer model but also other token-level and sequence-level baselines. Furthermore, the ablation study highlights the essential role of the proposed indicator function in achieving this improvement. | [
"He, Jianfei",
"Sun, Shichao",
"Peng, Sen",
"Xu, Jie",
"Jia, Xiaohua",
"Li, Wenjie"
] | Contrastive Preference Learning for Neural Machine Translation | findings-naacl.174 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
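The CPL abstract references the Plackett-Luce model without giving its form; the indicator-function extension is the paper's contribution and is not reproduced here. As a baseline sketch, the standard Plackett-Luce negative log-likelihood over a list of candidate translations (best first) that CPL builds on can be written as:

```python
import torch

def plackett_luce_nll(scores: torch.Tensor) -> torch.Tensor:
    """Plackett-Luce NLL for one ranked list of candidate scores.

    `scores[i]` is the model score of the i-th best candidate, so
    P(ranking) = prod_i exp(s_i) / sum_{j >= i} exp(s_j).
    """
    # logcumsumexp over the reversed list yields log sum_{j >= i} exp(s_j).
    log_denoms = torch.logcumsumexp(scores.flip(0), dim=0).flip(0)
    return -(scores - log_denoms).sum()

scores = torch.tensor([2.0, 0.5, -1.0], requires_grad=True)  # best first
loss = plackett_luce_nll(scores)
loss.backward()  # gradients push higher-ranked candidates toward higher scores
print(round(loss.item(), 3))
```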
https://aclanthology.org/2024.findings-naacl.175.bib | https://aclanthology.org/2024.findings-naacl.175/ | @inproceedings{he-etal-2024-socreval,
title = "{S}oc{RE}val: Large Language Models with the Socratic Method for Reference-free Reasoning Evaluation",
author = "He, Hangfeng and
Zhang, Hongming and
Roth, Dan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.175",
doi = "10.18653/v1/2024.findings-naacl.175",
pages = "2736--2764",
abstract = "To comprehensively gauge the capacity of current models for complex reasoning, it is crucial to assess their step-by-step reasoning in a scalable manner. Established reference-based evaluation metrics rely on human-annotated reasoning chains as references to assess the model-derived chains. However, such {``}gold-standard{''} human-written reasoning chains may not be unique and their acquisition is often labor-intensive. Existing reference-free reasoning evaluation metrics, while eliminating the need for human-crafted reasoning chains as references, often require fine-tuning with human-derived chains before evaluation, complicating the process and questioning their adaptability to other datasets. To address these challenges, we harness GPT-4 to automatically evaluate reasoning chain quality, thereby removing the dependency on human-written reasoning chains for both model fine-tuning and evaluative purposes. Leveraging the Socratic method, we develop SocREval (**Soc**ratic Method-Inspired **R**easoning **Eval**uation), a novel approach for prompt design in reference-free reasoning evaluation. Empirical results from four human annotated datasets reveal that SocREval significantly improves GPT-4{'}s performance, surpassing existing reference-free and reference-based reasoning evaluation metrics. Beyond its demonstrated efficacy, SocREval, proves to be both cost-efficient and robust to prompt writing and example selection, as substantiated by our in-depth analysis.",
}
| To comprehensively gauge the capacity of current models for complex reasoning, it is crucial to assess their step-by-step reasoning in a scalable manner. Established reference-based evaluation metrics rely on human-annotated reasoning chains as references to assess the model-derived chains. However, such {``}gold-standard{''} human-written reasoning chains may not be unique and their acquisition is often labor-intensive. Existing reference-free reasoning evaluation metrics, while eliminating the need for human-crafted reasoning chains as references, often require fine-tuning with human-derived chains before evaluation, complicating the process and questioning their adaptability to other datasets. To address these challenges, we harness GPT-4 to automatically evaluate reasoning chain quality, thereby removing the dependency on human-written reasoning chains for both model fine-tuning and evaluative purposes. Leveraging the Socratic method, we develop SocREval (**Soc**ratic Method-Inspired **R**easoning **Eval**uation), a novel approach for prompt design in reference-free reasoning evaluation. Empirical results from four human-annotated datasets reveal that SocREval significantly improves GPT-4{'}s performance, surpassing existing reference-free and reference-based reasoning evaluation metrics. Beyond its demonstrated efficacy, SocREval proves to be both cost-efficient and robust to prompt writing and example selection, as substantiated by our in-depth analysis. | [
"He, Hangfeng",
"Zhang, Hongming",
"Roth, Dan"
] | SocREval: Large Language Models with the Socratic Method for Reference-free Reasoning Evaluation | findings-naacl.175 | Poster | 2310.00074 | [
"https://github.com/hornhehhf/socreval"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.176.bib | https://aclanthology.org/2024.findings-naacl.176/ | @inproceedings{zhu-etal-2024-multilingual,
title = "Multilingual Machine Translation with Large Language Models: Empirical Results and Analysis",
author = "Zhu, Wenhao and
Liu, Hongyi and
Dong, Qingxiu and
Xu, Jingjing and
Huang, Shujian and
Kong, Lingpeng and
Chen, Jiajun and
Li, Lei",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.176",
doi = "10.18653/v1/2024.findings-naacl.176",
pages = "2765--2781",
abstract = "Large language models (LLMs) have demonstrated remarkable potential in handling multilingual machine translation (MMT). In this paper, we systematically investigate the advantages and challenges of LLMs for MMT by answering two questions: 1) How well do LLMs perform in translating massive languages? 2) Which factors affect LLMs{'} performance in translation? We thoroughly evaluate eight popular LLMs, including ChatGPT and GPT-4. Our empirical results show that translation capabilities of LLMs are continually involving. GPT-4 has beat the strong supervised baseline NLLB in 40.91{\%} of translation directions but still faces a large gap towards the commercial translation system like Google Translate, especially on low-resource languages. Through further analysis, we discover that LLMs exhibit new working patterns when used for MMT. First, LLM can acquire translation ability in a resource-efficient way and generate moderate translation even on zero-resource languages. Second, instruction semantics can surprisingly be ignored when given in-context exemplars. Third, cross-lingual exemplars can provide better task guidance for low-resource translation than exemplars in the same language pairs. Code will be released at: https://github.com/NJUNLP/MMT-LLM.",
}
| Large language models (LLMs) have demonstrated remarkable potential in handling multilingual machine translation (MMT). In this paper, we systematically investigate the advantages and challenges of LLMs for MMT by answering two questions: 1) How well do LLMs perform in translating massive languages? 2) Which factors affect LLMs{'} performance in translation? We thoroughly evaluate eight popular LLMs, including ChatGPT and GPT-4. Our empirical results show that the translation capabilities of LLMs are continually evolving. GPT-4 has beaten the strong supervised baseline NLLB in 40.91{\%} of translation directions but still faces a large gap with commercial translation systems like Google Translate, especially on low-resource languages. Through further analysis, we discover that LLMs exhibit new working patterns when used for MMT. First, LLMs can acquire translation ability in a resource-efficient way and generate moderate translations even for zero-resource languages. Second, instruction semantics can surprisingly be ignored when given in-context exemplars. Third, cross-lingual exemplars can provide better task guidance for low-resource translation than exemplars in the same language pairs. Code will be released at: https://github.com/NJUNLP/MMT-LLM. | [
"Zhu, Wenhao",
"Liu, Hongyi",
"Dong, Qingxiu",
"Xu, Jingjing",
"Huang, Shujian",
"Kong, Lingpeng",
"Chen, Jiajun",
"Li, Lei"
] | Multilingual Machine Translation with Large Language Models: Empirical Results and Analysis | findings-naacl.176 | Poster | 2304.04675 | [
"https://github.com/owennju/mmt-llm"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.177.bib | https://aclanthology.org/2024.findings-naacl.177/ | @inproceedings{liu-etal-2024-unleashing,
title = "Unleashing the Power of {LLM}s in Court View Generation by Stimulating Internal Knowledge and Incorporating External Knowledge",
author = "Liu, Yifei and
Wu, Yiquan and
Li, Ang and
Zhang, Yating and
Sun, Changlong and
Lu, Weiming and
Wu, Fei and
Kuang, Kun",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.177",
doi = "10.18653/v1/2024.findings-naacl.177",
pages = "2782--2792",
abstract = "Court View Generation (CVG) plays a vital role in the realm of legal artificial intelligence, which aims to support judges in crafting legal judgment documents. The court view consists of three essential judgment parts: the charge-related, law article-related, and prison term-related parts, each requiring specialized legal knowledge, rendering CVG a challenging task.Although Large Language Models (LLMs) have made remarkable strides in language generation, they encounter difficulties in the knowledge-intensive legal domain.Actually, there can be two types of knowledge: internal knowledge stored within LLMs{'} parameters and external knowledge sourced from legal documents outside the models.In this paper, we decompose court views into different parts, stimulate internal knowledge, and incorporate external information to unleash the power of LLMs in the CVG task.To validate our method, we conduct a series of experiment results on two real-world datasets LAIC2021 and CJO2022. The experiments demonstrate that our method is capable of generating more accurate and reliable court views.",
}
| Court View Generation (CVG) plays a vital role in the realm of legal artificial intelligence, which aims to support judges in crafting legal judgment documents. The court view consists of three essential judgment parts: the charge-related, law article-related, and prison term-related parts, each requiring specialized legal knowledge, rendering CVG a challenging task. Although Large Language Models (LLMs) have made remarkable strides in language generation, they encounter difficulties in the knowledge-intensive legal domain. In fact, there can be two types of knowledge: internal knowledge stored within LLMs{'} parameters and external knowledge sourced from legal documents outside the models. In this paper, we decompose court views into different parts, stimulate internal knowledge, and incorporate external information to unleash the power of LLMs in the CVG task. To validate our method, we conduct a series of experiments on two real-world datasets, LAIC2021 and CJO2022. The experiments demonstrate that our method is capable of generating more accurate and reliable court views. | [
"Liu, Yifei",
"Wu, Yiquan",
"Li, Ang",
"Zhang, Yating",
"Sun, Changlong",
"Lu, Weiming",
"Wu, Fei",
"Kuang, Kun"
] | Unleashing the Power of LLMs in Court View Generation by Stimulating Internal Knowledge and Incorporating External Knowledge | findings-naacl.177 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.178.bib | https://aclanthology.org/2024.findings-naacl.178/ | @inproceedings{guo-etal-2024-prompting,
title = "Prompting Vision-Language Models For Aspect-Controlled Generation of Referring Expressions",
author = "Guo, Danfeng and
Agarwal, Sanchit and
Gupta, Arpit and
Kao, Jiun-Yu and
Barut, Emre and
Chung, Tagyoung and
Huang, Jing and
Bansal, Mohit",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.178",
doi = "10.18653/v1/2024.findings-naacl.178",
pages = "2793--2807",
abstract = "Referring Expression Generation (REG) is the task of generating a description that unambiguously identifies a given target in the scene. Different from Image Captioning (IC), REG requires learning fine-grained characteristics of not only the scene objects but also their surrounding context. Referring expressions are usually not singular; an object can often be uniquely referenced in numerous ways, for instance, by color, by location, or by relationship with other objects. Most prior works, however, have not explored this {`}aspect-based multiplicity{'} of referring expressions. Hence, in this work, we focus on the Aspect-Controlled REG task, which requires generating a referring expression conditioned on the input aspect(s), where an aspect captures a style of reference. By changing the input aspect such as color, location, action etc., one can generate multiple distinct expressions per target region. To solve this new task, we first modify BLIP for aligning image-regions and text-expressions. We achieve this through a novel approach for feeding the input by drawing a bounding box around the target image-region and prompting the model to generate the referring expression. Our base REG model already beats all prior works in CIDEr score. To tackle Aspect-Controlled REG, we append {`}aspect tokens{'} to the prompt and show that distinct expressions can be generated by just changing the prompt. Finally, to prove the high-quality and diversity of the data generated by our proposed aspect-controlled REG model, we also perform data-augmentation-based evaluation on the downstream Referring Expression Comprehension (REC) task. With just half of the real data augmented with the generated synthetic data, we achieve performance comparable to training with 100{\%} of real data, using a SOTA REC model.",
}
| Referring Expression Generation (REG) is the task of generating a description that unambiguously identifies a given target in the scene. Different from Image Captioning (IC), REG requires learning fine-grained characteristics of not only the scene objects but also their surrounding context. Referring expressions are usually not singular; an object can often be uniquely referenced in numerous ways, for instance, by color, by location, or by relationship with other objects. Most prior works, however, have not explored this {`}aspect-based multiplicity{'} of referring expressions. Hence, in this work, we focus on the Aspect-Controlled REG task, which requires generating a referring expression conditioned on the input aspect(s), where an aspect captures a style of reference. By changing the input aspect such as color, location, action etc., one can generate multiple distinct expressions per target region. To solve this new task, we first modify BLIP for aligning image-regions and text-expressions. We achieve this through a novel approach for feeding the input by drawing a bounding box around the target image-region and prompting the model to generate the referring expression. Our base REG model already beats all prior works in CIDEr score. To tackle Aspect-Controlled REG, we append {`}aspect tokens{'} to the prompt and show that distinct expressions can be generated by just changing the prompt. Finally, to prove the high-quality and diversity of the data generated by our proposed aspect-controlled REG model, we also perform data-augmentation-based evaluation on the downstream Referring Expression Comprehension (REC) task. With just half of the real data augmented with the generated synthetic data, we achieve performance comparable to training with 100{\%} of real data, using a SOTA REC model. | [
"Guo, Danfeng",
"Agarwal, Sanchit",
"Gupta, Arpit",
"Kao, Jiun-Yu",
"Barut, Emre",
"Chung, Tagyoung",
"Huang, Jing",
"Bansal, Mohit"
] | Prompting Vision-Language Models For Aspect-Controlled Generation of Referring Expressions | findings-naacl.178 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.179.bib | https://aclanthology.org/2024.findings-naacl.179/ | @inproceedings{lyu-etal-2024-task,
title = "Task-Agnostic Detector for Insertion-Based Backdoor Attacks",
author = "Lyu, Weimin and
Lin, Xiao and
Zheng, Songzhu and
Pang, Lu and
Ling, Haibin and
Jha, Susmit and
Chen, Chao",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.179",
doi = "10.18653/v1/2024.findings-naacl.179",
pages = "2808--2822",
abstract = "Textual backdoor attacks pose significant security threats. Current detection approaches, typically relying on intermediate feature representation or reconstructing potential triggers, are task-specific and less effective beyond sentence classification, struggling with tasks like question answering and named entity recognition. We introduce TABDet (Task-Agnostic Backdoor Detector), a pioneering task-agnostic method for backdoor detection. TABDet leverages final layer logits combined with an efficient pooling technique, enabling unified logit representation across three prominent NLP tasks. TABDet can jointly learn from diverse task-specific models, demonstrating superior detection efficacy over traditional task-specific methods.",
}
| Textual backdoor attacks pose significant security threats. Current detection approaches, typically relying on intermediate feature representation or reconstructing potential triggers, are task-specific and less effective beyond sentence classification, struggling with tasks like question answering and named entity recognition. We introduce TABDet (Task-Agnostic Backdoor Detector), a pioneering task-agnostic method for backdoor detection. TABDet leverages final layer logits combined with an efficient pooling technique, enabling unified logit representation across three prominent NLP tasks. TABDet can jointly learn from diverse task-specific models, demonstrating superior detection efficacy over traditional task-specific methods. | [
"Lyu, Weimin",
"Lin, Xiao",
"Zheng, Songzhu",
"Pang, Lu",
"Ling, Haibin",
"Jha, Susmit",
"Chen, Chao"
] | Task-Agnostic Detector for Insertion-Based Backdoor Attacks | findings-naacl.179 | Poster | 2403.17155 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
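TABDet's "unified logit representation" is only named in the abstract above; one plausible pooling that maps logit matrices of any task shape to a fixed-size vector is to flatten, sort, and keep the top-k values with zero padding. This is an assumption for illustration, not the published pooling.

```python
import torch
import torch.nn.functional as F

def unified_logit_feature(logits: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Pool a (tokens, classes) logit matrix into a fixed k-dim vector.

    Sequence labeling, QA, and classification produce logit matrices of
    different shapes; top-k pooling of the flattened logits gives one
    shape-agnostic feature a shared backdoor detector can consume.
    """
    flat = logits.flatten()
    vals = torch.topk(flat, min(k, flat.numel())).values
    return F.pad(vals, (0, k - vals.numel()))  # zero-pad short task outputs

print(unified_logit_feature(torch.randn(30, 9)).shape)  # NER-style logits
print(unified_logit_feature(torch.randn(1, 2)).shape)   # classification logits
```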
https://aclanthology.org/2024.findings-naacl.180.bib | https://aclanthology.org/2024.findings-naacl.180/ | @inproceedings{he-etal-2024-uncertainty,
title = "Uncertainty Estimation on Sequential Labeling via Uncertainty Transmission",
author = "He, Jianfeng and
Yu, Linlin and
Lei, Shuo and
Lu, Chang-Tien and
Chen, Feng",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.180",
doi = "10.18653/v1/2024.findings-naacl.180",
pages = "2823--2835",
abstract = "Sequential labeling is a task predicting labels for each token in a sequence, such as Named Entity Recognition (NER). NER tasks aim to extract entities and predict their labels given a text, which is important in information extraction. Although previous works have shown great progress in improving NER performance, uncertainty estimation on NER (UE-NER) is still underexplored but essential. This work focuses on UE-NER, which aims to estimate uncertainty scores for the NER predictions. Previous uncertainty estimation models often overlook two unique characteristics of NER: the connection between entities (i.e., one entity embedding is learned based on the other ones) and wrong span cases in the entity extraction subtask. Therefore, we propose a Sequential Labeling Posterior Network (SLPN) to estimate uncertainty scores for the extracted entities, considering uncertainty transmitted from other tokens. Moreover, we have defined an evaluation strategy to address the specificity of wrong-span cases. Our SLPN has achieved significant improvements on three datasets, such as a 5.54-point improvement in AUPR on the MIT-Restaurant dataset. Our code is available at \url{https://github.com/he159ok/UncSeqLabeling_SLPN}.",
}
| Sequential labeling is a task predicting labels for each token in a sequence, such as Named Entity Recognition (NER). NER tasks aim to extract entities and predict their labels given a text, which is important in information extraction. Although previous works have shown great progress in improving NER performance, uncertainty estimation on NER (UE-NER) is still underexplored but essential. This work focuses on UE-NER, which aims to estimate uncertainty scores for the NER predictions. Previous uncertainty estimation models often overlook two unique characteristics of NER: the connection between entities (i.e., one entity embedding is learned based on the other ones) and wrong span cases in the entity extraction subtask. Therefore, we propose a Sequential Labeling Posterior Network (SLPN) to estimate uncertainty scores for the extracted entities, considering uncertainty transmitted from other tokens. Moreover, we have defined an evaluation strategy to address the specificity of wrong-span cases. Our SLPN has achieved significant improvements on three datasets, such as a 5.54-point improvement in AUPR on the MIT-Restaurant dataset. Our code is available at \url{https://github.com/he159ok/UncSeqLabeling_SLPN}. | [
"He, Jianfeng",
"Yu, Linlin",
"Lei, Shuo",
"Lu, Chang-Tien",
"Chen, Feng"
] | Uncertainty Estimation on Sequential Labeling via Uncertainty Transmission | findings-naacl.180 | Poster | 2311.08726 | [
"https://github.com/he159ok/uncseqlabeling_slpn"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.181.bib | https://aclanthology.org/2024.findings-naacl.181/ | @inproceedings{lee-etal-2024-exploring,
title = "Exploring Language Model{'}s Code Generation Ability with Auxiliary Functions",
author = "Lee, Seonghyeon and
Jang, Sanghwan and
Jang, Seongbo and
Lee, Dongha and
Yu, Hwanjo",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.181",
doi = "10.18653/v1/2024.findings-naacl.181",
pages = "2836--2848",
abstract = "Auxiliary function is a helpful component to improve language model{'}s code generation ability. However, a systematic exploration of how they affect has yet to be done. In this work, we comprehensively evaluate the ability to utilize auxiliary functions encoded in recent code-pretrained language models. First, we construct a human-crafted evaluation set, called HumanExtension, which contains examples of two functions where one function assists the other.With HumanExtension, we design several experiments to examine their ability in a multifaceted way. Our evaluation processes enable a comprehensive understanding of including auxiliary functions in the prompt in terms of effectiveness and robustness. An additional implementation style analysis captures the models{'} various implementation patterns when they access the auxiliary function. Through this analysis, we discover the models{'} promising ability to utilize auxiliary functions including their self-improving behavior by implementing the two functions step-by-step. However, our analysis also reveals the model{'}s underutilized behavior to call the auxiliary function, suggesting the future direction to enhance their implementation by eliciting the auxiliary function call ability encoded in the models. We release our code and dataset to facilitate this research direction.",
}
| Auxiliary functions are helpful components for improving a language model{'}s code generation ability. However, a systematic exploration of how they affect performance has yet to be done. In this work, we comprehensively evaluate the ability to utilize auxiliary functions encoded in recent code-pretrained language models. First, we construct a human-crafted evaluation set, called HumanExtension, which contains examples of two functions where one function assists the other. With HumanExtension, we design several experiments to examine this ability in a multifaceted way. Our evaluation process enables a comprehensive understanding of the effectiveness and robustness of including auxiliary functions in the prompt. An additional implementation-style analysis captures the models{'} various implementation patterns when they access the auxiliary function. Through this analysis, we discover the models{'} promising ability to utilize auxiliary functions, including their self-improving behavior of implementing the two functions step by step. However, our analysis also reveals the models{'} underutilization of auxiliary function calls, suggesting a future direction: enhancing their implementations by eliciting the auxiliary-function-call ability encoded in the models. We release our code and dataset to facilitate this research direction. | [
"Lee, Seonghyeon",
"Jang, Sanghwan",
"Jang, Seongbo",
"Lee, Dongha",
"Yu, Hwanjo"
] | Exploring Language Model's Code Generation Ability with Auxiliary Functions | findings-naacl.181 | Poster | 2403.10575 | [
"https://github.com/sh0416/humanextension"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
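To make the HumanExtension setup concrete, here is a toy instance of the "one function assists the other" pattern the abstract describes (the function pair is invented for illustration; real benchmark items differ). The evaluation asks whether a model implementing the target function actually calls the provided auxiliary one:

```python
# Auxiliary function: given to the model in the prompt.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# Target function: the model should implement it *by calling* is_prime,
# rather than re-deriving primality from scratch.
def count_primes(nums: list[int]) -> int:
    return sum(is_prime(n) for n in nums)

assert count_primes([2, 3, 4, 5, 9]) == 3
```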
https://aclanthology.org/2024.findings-naacl.182.bib | https://aclanthology.org/2024.findings-naacl.182/ | @inproceedings{truong-etal-2024-crossing,
title = "Crossing Linguistic Horizons: Finetuning and Comprehensive Evaluation of {V}ietnamese Large Language Models",
author = "Truong, Sang and
Nguyen, Duc and
Nguyen, Toan and
Le, Dong and
Truong, Nhi and
Quan, Tho and
Koyejo, Sanmi",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.182",
doi = "10.18653/v1/2024.findings-naacl.182",
pages = "2849--2900",
abstract = "Recent advancements in large language models (LLMs) have underscored their importance in the evolution of artificial intelligence. However, despite extensive pretraining on multilingual datasets, available open-sourced LLMs exhibit limited effectiveness in processing Vietnamese. The challenge is exacerbated by the absence of systematic benchmark datasets and metrics tailored for Vietnamese LLM evaluation. To mitigate these issues, we have finetuned LLMs specifically for Vietnamese and developed a comprehensive evaluation framework encompassing 10 tasks and 31 metrics. We observe that finetuning can help LLMs transfer knowledge across languages, serving as an efficient way to bolster their capabilities in non-English languages. Moreover, our analysis indicates that larger models can introduce more biases and uncalibrated outputs and the key factor influencing LLM performance is the quality of the training or finetuning datasets. These insights underscore the significance of meticulous finetuning with high-quality datasets in enhancing LLM performance.",
}
| Recent advancements in large language models (LLMs) have underscored their importance in the evolution of artificial intelligence. However, despite extensive pretraining on multilingual datasets, available open-sourced LLMs exhibit limited effectiveness in processing Vietnamese. The challenge is exacerbated by the absence of systematic benchmark datasets and metrics tailored for Vietnamese LLM evaluation. To mitigate these issues, we have finetuned LLMs specifically for Vietnamese and developed a comprehensive evaluation framework encompassing 10 tasks and 31 metrics. We observe that finetuning can help LLMs transfer knowledge across languages, serving as an efficient way to bolster their capabilities in non-English languages. Moreover, our analysis indicates that larger models can introduce more biases and uncalibrated outputs and the key factor influencing LLM performance is the quality of the training or finetuning datasets. These insights underscore the significance of meticulous finetuning with high-quality datasets in enhancing LLM performance. | [
"Truong, Sang",
"Nguyen, Duc",
"Nguyen, Toan",
"Le, Dong",
"Truong, Nhi",
"Quan, Tho",
"Koyejo, Sanmi"
] | Crossing Linguistic Horizons: Finetuning and Comprehensive Evaluation of Vietnamese Large Language Models | findings-naacl.182 | Poster | 2403.02715 | [
""
] | https://huggingface.co/papers/2403.02715 | 4 | 3 | 0 | 7 | 1 | [
"ura-hcmut/MixSUra",
"ura-hcmut/ura-llama-13b",
"ura-hcmut/GemSUra-2B",
"ura-hcmut/GemSUra-7B",
"ura-hcmut/ura-llama-70b",
"ura-hcmut/MixSUra-SFT",
"ura-hcmut/ura-llama-7b",
"ura-hcmut/MixSUra-AWQ",
"ura-hcmut/MixSUra-SFT-AWQ",
"ura-hcmut/ura-llama-2.1-8b",
"ura-hcmut/ura-llama-2-8b"
] | [] | [
"Omnibus/InferenceClient_Chatbots",
"K00B404/Teachershub",
"Nymbo/LangHub"
] |
https://aclanthology.org/2024.findings-naacl.183.bib | https://aclanthology.org/2024.findings-naacl.183/ | @inproceedings{yao-etal-2024-got,
title = "{G}o{T}: Effective Graph-of-Thought Reasoning in Language Models",
author = "Yao, Yao and
Li, Zuchao and
Zhao, Hai",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.183",
doi = "10.18653/v1/2024.findings-naacl.183",
pages = "2901--2921",
abstract = "With the widespread use of language models (LMs) in NLP tasks, researchers have discovered the potential of Chain-of-thought (CoT) to assist LMs in accomplishing complex reasoning tasks by generating intermediate steps. However, human thought processes are often non-linear, rather than simply sequential chains of thoughts. Therefore, we propose Graph-of-Thought (GoT) reasoning, which models human thought processes not only as a chain but also as a graph. By representing thought units as nodes and connections between them as edges, our approach captures the non-sequential nature of human thinking and allows for a more realistic modeling of thought processes. GoT adopts a two-stage framework with an additional GoT encoder for thought graph representation and fuses the graph representation with the original input representation through a gated fusion mechanism. We evaluate GoT{'}s performance on a text-only reasoning task (AQUA-RAT) and a multimodal reasoning task (ScienceQA). Our model achieves significant improvement over the strong CoT baseline on the AQUA-RAT test set and boosts accuracy from 85.19{\%} to 87.59{\%} using the T5-base model over the state-of-the-art Multimodal-CoT on the ScienceQA test set. Our code is publicly available at https://github.com/Zoeyyao27/Graph-of-Thought",
}
| With the widespread use of language models (LMs) in NLP tasks, researchers have discovered the potential of Chain-of-thought (CoT) to assist LMs in accomplishing complex reasoning tasks by generating intermediate steps. However, human thought processes are often non-linear, rather than simply sequential chains of thoughts. Therefore, we propose Graph-of-Thought (GoT) reasoning, which models human thought processes not only as a chain but also as a graph. By representing thought units as nodes and connections between them as edges, our approach captures the non-sequential nature of human thinking and allows for a more realistic modeling of thought processes. GoT adopts a two-stage framework with an additional GoT encoder for thought graph representation and fuses the graph representation with the original input representation through a gated fusion mechanism. We evaluate GoT{'}s performance on a text-only reasoning task (AQUA-RAT) and a multimodal reasoning task (ScienceQA). Our model achieves significant improvement over the strong CoT baseline on the AQUA-RAT test set and boosts accuracy from 85.19{\%} to 87.59{\%} using the T5-base model over the state-of-the-art Multimodal-CoT on the ScienceQA test set. Our code is publicly available at https://github.com/Zoeyyao27/Graph-of-Thought | [
"Yao, Yao",
"Li, Zuchao",
"Zhao, Hai"
] | GoT: Effective Graph-of-Thought Reasoning in Language Models | findings-naacl.183 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
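The gated fusion mechanism in the GoT abstract follows a standard recipe; the sketch below shows one common form (a sigmoid gate over the concatenated graph and text features, used to interpolate them) and should be read as an assumption about the general pattern, not the paper's exact layer.

```python
import torch

class GatedFusion(torch.nn.Module):
    """Interpolate a text representation and a thought-graph representation."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = torch.nn.Linear(2 * dim, dim)

    def forward(self, h_text: torch.Tensor, h_graph: torch.Tensor) -> torch.Tensor:
        # g in (0, 1) decides, per feature, how much of each source to keep.
        g = torch.sigmoid(self.gate(torch.cat([h_text, h_graph], dim=-1)))
        return g * h_text + (1.0 - g) * h_graph

fusion = GatedFusion(dim=16)
print(fusion(torch.randn(2, 10, 16), torch.randn(2, 10, 16)).shape)
```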
https://aclanthology.org/2024.findings-naacl.184.bib | https://aclanthology.org/2024.findings-naacl.184/ | @inproceedings{zhou-etal-2024-enhancing,
title = "Enhancing the General Agent Capabilities of Low-Paramter {LLM}s through Tuning and Multi-Branch Reasoning",
author = "Zhou, Qinhao and
Zhang, Zihan and
Xiang, Xiang and
Wang, Ke and
Wu, Yuchuan and
Li, Yongbin",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.184",
doi = "10.18653/v1/2024.findings-naacl.184",
pages = "2922--2931",
abstract = "Open-source pre-trained Large Language Models (LLMs) exhibit strong language understanding and generation capabilities, making them highly successful in a variety of tasks. However, when used as agents for dealing with complex problems in the real world, their performance is far inferior to large commercial models such as ChatGPT and GPT-4. As intelligent agents, LLMs need to have the capabilities of task planning, long-term memory, and the ability to leverage external tools to achieve satisfactory performance. Various methods have been proposed to enhance the agent capabilities of LLMs. On the one hand, methods involve constructing agent-specific data and fine-tuning the models. On the other hand, some methods focus on designing prompts that effectively activate the reasoning abilities of the LLMs. We explore both strategies on the 7B and 13B models. We propose a comprehensive method for constructing agent-specific data using GPT-4. Through supervised fine-tuning with constructed data, we find that for these models with a relatively small number of parameters, supervised fine-tuning can significantly reduce hallucination outputs and formatting errors in agent tasks. Furthermore, techniques such as multi-path reasoning and task decomposition can effectively decrease problem complexity and enhance the performance of LLMs as agents. We evaluate our method on five agent tasks of AgentBench and achieve satisfactory results.",
}
| Open-source pre-trained Large Language Models (LLMs) exhibit strong language understanding and generation capabilities, making them highly successful in a variety of tasks. However, when used as agents for dealing with complex problems in the real world, their performance is far inferior to large commercial models such as ChatGPT and GPT-4. As intelligent agents, LLMs need to have the capabilities of task planning, long-term memory, and the ability to leverage external tools to achieve satisfactory performance. Various methods have been proposed to enhance the agent capabilities of LLMs. On the one hand, methods involve constructing agent-specific data and fine-tuning the models. On the other hand, some methods focus on designing prompts that effectively activate the reasoning abilities of the LLMs. We explore both strategies on the 7B and 13B models. We propose a comprehensive method for constructing agent-specific data using GPT-4. Through supervised fine-tuning with constructed data, we find that for these models with a relatively small number of parameters, supervised fine-tuning can significantly reduce hallucination outputs and formatting errors in agent tasks. Furthermore, techniques such as multi-path reasoning and task decomposition can effectively decrease problem complexity and enhance the performance of LLMs as agents. We evaluate our method on five agent tasks of AgentBench and achieve satisfactory results. | [
"Zhou, Qinhao",
"Zhang, Zihan",
"Xiang, Xiang",
"Wang, Ke",
"Wu, Yuchuan",
"Li, Yongbin"
] | Enhancing the General Agent Capabilities of Low-Parameter LLMs through Tuning and Multi-Branch Reasoning | findings-naacl.184 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.185.bib | https://aclanthology.org/2024.findings-naacl.185/ | @inproceedings{you-etal-2024-mumath,
title = "{M}u{M}ath: Multi-perspective Data Augmentation for Mathematical Reasoning in Large Language Models",
author = "You, Weihao and
Yin, Shuo and
Zhao, Xudong and
Ji, Zhilong and
Zhong, Guoqiang and
Bai, Jinfeng",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.185",
doi = "10.18653/v1/2024.findings-naacl.185",
pages = "2932--2958",
abstract = "Recently, the tool-use Large Language Models (LLMs) that integrate with external Python interpreters have significantly enhanced mathematical reasoning capabilities for open-source LLMs. However, these models fall short in demonstrating the calculation process, which compromises user-friendliness and understanding of problem-solving steps. Conversely, while tool-free methods offer a clear display of the problem-solving process, their accuracy leaves room for improvement.These tool-free methods typically employ a somewhat narrow range of augmentation techniques such as rephrasing and difficulty enhancement to boost performance. In response to this issue, we have amalgamated and further refined these strengths while broadening the scope of augmentation methods to construct a **mu**lti-perspective augmentation dataset for **math**ematics{---}termed **MuMath** ($\mu$-Math) Dataset.Subsequently, we finetune LLaMA-2 on the MuMath dataset to derive the MuMath model. Our experiments indicate that our MuMath-70B model achieves new state-of-the-art performance among tool-free methods{---}achieving 88.3{\%} on GSM8K and 34.5{\%} on MATH .We release the MuMath dataset along with its corresponding models and code for public use.",
}
| Recently, the tool-use Large Language Models (LLMs) that integrate with external Python interpreters have significantly enhanced mathematical reasoning capabilities for open-source LLMs. However, these models fall short in demonstrating the calculation process, which compromises user-friendliness and understanding of problem-solving steps. Conversely, while tool-free methods offer a clear display of the problem-solving process, their accuracy leaves room for improvement. These tool-free methods typically employ a somewhat narrow range of augmentation techniques such as rephrasing and difficulty enhancement to boost performance. In response to this issue, we have amalgamated and further refined these strengths while broadening the scope of augmentation methods to construct a **mu**lti-perspective augmentation dataset for **math**ematics{---}termed **MuMath** ($\mu$-Math) Dataset. Subsequently, we finetune LLaMA-2 on the MuMath dataset to derive the MuMath model. Our experiments indicate that our MuMath-70B model achieves new state-of-the-art performance among tool-free methods{---}achieving 88.3{\%} on GSM8K and 34.5{\%} on MATH. We release the MuMath dataset along with its corresponding models and code for public use. | [
"You, Weihao",
"Yin, Shuo",
"Zhao, Xudong",
"Ji, Zhilong",
"Zhong, Guoqiang",
"Bai, Jinfeng"
] | MuMath: Multi-perspective Data Augmentation for Mathematical Reasoning in Large Language Models | findings-naacl.185 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.186.bib | https://aclanthology.org/2024.findings-naacl.186/ | @inproceedings{ye-etal-2024-tram,
title = "Tram: A Token-level Retrieval-augmented Mechanism for Source Code Summarization",
author = "Ye, Tong and
Wu, Lingfei and
Ma, Tengfei and
Zhang, Xuhong and
Du, Yangkai and
Liu, Peiyu and
Ji, Shouling and
Wang, Wenhai",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.186",
doi = "10.18653/v1/2024.findings-naacl.186",
pages = "2959--2971",
abstract = "Automatically generating human-readable text describing the functionality of a program is the intent of source code summarization. Although neural language models achieve significant performance in this field, they are limited by their inability to access external knowledge. To address this limitation, an emerging trend is combining neural models with external knowledge through retrieval methods. Previous methods have relied on the sentence-level retrieval paradigm on the encoder side. However, this paradigm is coarse-grained, noise-filled and cannot directly take advantage of the high-quality retrieved summary tokens on the decoder side. In this paper, we propose a fine-grained Token-level retrieval-augmented mechanism (Tram) on the decoder side rather than the encoder side to enhance the performance of neural models and produce more low-frequency tokens in generating summaries. Furthermore, to overcome the challenge of token-level retrieval in capturing contextual code semantics, we also propose integrating code semantics into individual summary tokens. The results of extensive experiments and human evaluation show that our token-level retrieval-augmented approach significantly improves performance and is more interpretable.",
}
| Automatically generating human-readable text describing the functionality of a program is the intent of source code summarization. Although neural language models achieve significant performance in this field, they are limited by their inability to access external knowledge. To address this limitation, an emerging trend is combining neural models with external knowledge through retrieval methods. Previous methods have relied on the sentence-level retrieval paradigm on the encoder side. However, this paradigm is coarse-grained, noise-filled and cannot directly take advantage of the high-quality retrieved summary tokens on the decoder side. In this paper, we propose a fine-grained Token-level retrieval-augmented mechanism (Tram) on the decoder side rather than the encoder side to enhance the performance of neural models and produce more low-frequency tokens in generating summaries. Furthermore, to overcome the challenge of token-level retrieval in capturing contextual code semantics, we also propose integrating code semantics into individual summary tokens. The results of extensive experiments and human evaluation show that our token-level retrieval-augmented approach significantly improves performance and is more interpretable. | [
"Ye, Tong",
"Wu, Lingfei",
"Ma, Tengfei",
"Zhang, Xuhong",
"Du, Yangkai",
"Liu, Peiyu",
"Ji, Shouling",
"Wang, Wenhai"
] | Tram: A Token-level Retrieval-augmented Mechanism for Source Code Summarization | findings-naacl.186 | Poster | 2305.11074 | [
"https://github.com/tongye98/sourcecodesummary"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
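Tram's decoder-side, token-level retrieval is described above without formulas. The sketch below shows the generic pattern such mechanisms share — retrieve the nearest stored decoder states, turn their distances into a distribution over the tokens they emitted, and interpolate with the model's distribution — in the spirit of kNN-augmented decoding; the datastore layout and fixed interpolation weight are assumptions, not Tram's exact design.

```python
import torch

def retrieval_augmented_probs(
    decoder_state: torch.Tensor,   # (d,) current decoder hidden state
    keys: torch.Tensor,            # (n, d) datastore of past decoder states
    values: torch.Tensor,          # (n,) token ids those states generated
    model_probs: torch.Tensor,     # (vocab,) model's next-token distribution
    k: int = 4,
    lam: float = 0.3,
) -> torch.Tensor:
    # Retrieve the k nearest stored states and turn their distances
    # into a distribution over the summary tokens they emitted.
    dists = torch.cdist(decoder_state[None], keys)[0]      # (n,)
    knn = torch.topk(-dists, k)                            # nearest = largest -dist
    weights = torch.softmax(-dists[knn.indices], dim=0)
    retrieved = torch.zeros_like(model_probs)
    retrieved.scatter_add_(0, values[knn.indices], weights)
    # Interpolate the retrieval distribution with the model distribution.
    return lam * retrieved + (1.0 - lam) * model_probs

vocab, d, n = 100, 8, 50
probs = retrieval_augmented_probs(
    torch.randn(d), torch.randn(n, d), torch.randint(0, vocab, (n,)),
    torch.softmax(torch.randn(vocab), dim=0),
)
print(probs.sum())  # ~1.0, still a valid distribution
```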
https://aclanthology.org/2024.findings-naacl.187.bib | https://aclanthology.org/2024.findings-naacl.187/ | @inproceedings{li-etal-2024-uno,
title = "{UNO}-{DST}: Leveraging Unlabelled Data in Zero-Shot Dialogue State Tracking",
author = "Li, Chuang and
Zhang, Yan and
Kan, Min-Yen and
Li, Haizhou",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.187",
doi = "10.18653/v1/2024.findings-naacl.187",
pages = "2972--2983",
abstract = "Previous zero-shot dialogue state tracking (DST) methods only apply transfer learning, but ignore unlabelled data in the target domain.We transform zero-shot DST into few-shot DST by utilising such unlabelled data via joint and self-training methods. Our method incorporates auxiliary tasks that generate slot types as inverse prompts for main tasks, creating slot values during joint training. Cycle consistency between these two tasks enables the generation and selection of quality samples in unknown target domains for subsequent fine-tuning. This approach also facilitates automatic label creation, thereby optimizing the training and fine-tuning of DST models. We demonstrate this method{'}s effectiveness on general language models in zero-shot scenarios, improving average joint goal accuracy by 8{\%} across all domains in MultiWOZ.",
}
| Previous zero-shot dialogue state tracking (DST) methods only apply transfer learning, but ignore unlabelled data in the target domain. We transform zero-shot DST into few-shot DST by utilising such unlabelled data via joint and self-training methods. Our method incorporates auxiliary tasks that generate slot types as inverse prompts for main tasks, creating slot values during joint training. Cycle consistency between these two tasks enables the generation and selection of quality samples in unknown target domains for subsequent fine-tuning. This approach also facilitates automatic label creation, thereby optimizing the training and fine-tuning of DST models. We demonstrate this method{'}s effectiveness on general language models in zero-shot scenarios, improving average joint goal accuracy by 8{\%} across all domains in MultiWOZ. | [
"Li, Chuang",
"Zhang, Yan",
"Kan, Min-Yen",
"Li, Haizhou"
] | UNO-DST: Leveraging Unlabelled Data in Zero-Shot Dialogue State Tracking | findings-naacl.187 | Poster | 2310.10492 | [
"https://github.com/lichuangnus/uno-dst"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.188.bib | https://aclanthology.org/2024.findings-naacl.188/ | @inproceedings{zhang-etal-2024-evaluating,
title = "Evaluating Step-by-Step Reasoning through Symbolic Verification",
author = "Zhang, YiFan and
Zhang, Hanlin and
Li, Li and
Xing, Eric",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.188",
doi = "10.18653/v1/2024.findings-naacl.188",
pages = "2984--3002",
abstract = "Pre-trained language models (LMs) have shown remarkable reasoning performance using explanations or chain-of-thoughts (CoT)) for in-context learning. On the other hand, these reasoning tasks are usually presumed to be more approachable for symbolic programming. To understand the mechanism of reasoning of LMs, we curate synthetic datasets containing equivalent (natural, symbolic) data pairs, where symbolic examples contain first-order logic rules and predicates from non-parametric knowledge bases (KBs), supporting automated verification of intermediate reasoning results. Then we revisit neuro-symbolic approaches and propose to learn from demonstrations containing logic rules and corresponding examples to iteratively reason over KBs, recovering Prolog{'}s backward chaining algorithm and supporting automated verification of LMs{'} outputs. Comprehensive experiments are included to systematically compare LMLP with CoT in deductive reasoning settings, showing that LMLP enjoys more than 25{\%} higher accuracy than CoT on length generalization benchmarks even with smaller model sizes.",
}
| Pre-trained language models (LMs) have shown remarkable reasoning performance using explanations or chains of thought (CoT) for in-context learning. On the other hand, these reasoning tasks are usually presumed to be more approachable for symbolic programming. To understand the mechanism of reasoning of LMs, we curate synthetic datasets containing equivalent (natural, symbolic) data pairs, where symbolic examples contain first-order logic rules and predicates from non-parametric knowledge bases (KBs), supporting automated verification of intermediate reasoning results. Then we revisit neuro-symbolic approaches and propose to learn from demonstrations containing logic rules and corresponding examples to iteratively reason over KBs, recovering Prolog{'}s backward chaining algorithm and supporting automated verification of LMs{'} outputs. Comprehensive experiments are included to systematically compare LMLP with CoT in deductive reasoning settings, showing that LMLP enjoys more than 25{\%} higher accuracy than CoT on length generalization benchmarks even with smaller model sizes. | [
"Zhang, YiFan",
"Zhang, Hanlin",
"Li, Li",
"Xing, Eric"
] | Evaluating Step-by-Step Reasoning through Symbolic Verification | findings-naacl.188 | Poster | 2212.08686 | [
"https://github.com/hlzhang109/lmlp"
] | https://huggingface.co/papers/2212.08686 | 1 | 0 | 0 | 4 | 1 | [] | [] | [] |
https://aclanthology.org/2024.findings-naacl.189.bib | https://aclanthology.org/2024.findings-naacl.189/ | @inproceedings{slobodkin-etal-2024-multi,
title = "Multi-Review Fusion-in-Context",
author = "Slobodkin, Aviv and
Shapira, Ori and
Levy, Ran and
Dagan, Ido",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.189",
doi = "10.18653/v1/2024.findings-naacl.189",
pages = "3003--3021",
abstract = "Grounded text generation, encompassing tasks such as long-form question-answering and summarization, necessitates both content selection and content consolidation. Current end-to-end methods are difficult to control and interpret due to their opaqueness.Accordingly, recent works have proposed a modular approach, with separate components for each step. Specifically, we focus on the second subtask, of generating coherent text given pre-selected content in a multi-document setting. Concretely, we formalize Fusion-in-Context (FiC) as a standalone task, whose input consists of source texts with highlighted spans of targeted content. A model then needs to generate a coherent passage that includes all and only the target information.Our work includes the development of a curated dataset of 1000 instances in the reviews domain, alongside a novel evaluation framework for assessing the faithfulness and coverage of highlights, which strongly correlate to human judgment. Several baseline models exhibit promising outcomes and provide insightful analyses.This study lays the groundwork for further exploration of modular text generation in the multi-document setting, offering potential improvements in the quality and reliability of generated content. Our benchmark, FuseReviews, including the dataset, evaluation framework, and designated leaderboard, can be found at https://fusereviews.github.io/.",
}
| Grounded text generation, encompassing tasks such as long-form question-answering and summarization, necessitates both content selection and content consolidation. Current end-to-end methods are difficult to control and interpret due to their opaqueness. Accordingly, recent works have proposed a modular approach, with separate components for each step. Specifically, we focus on the second subtask, of generating coherent text given pre-selected content in a multi-document setting. Concretely, we formalize Fusion-in-Context (FiC) as a standalone task, whose input consists of source texts with highlighted spans of targeted content. A model then needs to generate a coherent passage that includes all and only the target information. Our work includes the development of a curated dataset of 1000 instances in the reviews domain, alongside a novel evaluation framework for assessing the faithfulness and coverage of highlights, which strongly correlate to human judgment. Several baseline models exhibit promising outcomes and provide insightful analyses. This study lays the groundwork for further exploration of modular text generation in the multi-document setting, offering potential improvements in the quality and reliability of generated content. Our benchmark, FuseReviews, including the dataset, evaluation framework, and designated leaderboard, can be found at https://fusereviews.github.io/. | [
"Slobodkin, Aviv",
"Shapira, Ori",
"Levy, Ran",
"Dagan, Ido"
] | Multi-Review Fusion-in-Context | findings-naacl.189 | Poster | 2403.15351 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.190.bib | https://aclanthology.org/2024.findings-naacl.190/ | @inproceedings{bouthors-etal-2024-retrieving,
title = "Retrieving Examples from Memory for Retrieval Augmented Neural Machine Translation: A Systematic Comparison",
author = "Bouthors, Maxime and
Crego, Josep and
Yvon, Fran{\c{c}}ois",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.190",
doi = "10.18653/v1/2024.findings-naacl.190",
pages = "3022--3039",
abstract = "Retrieval-Augmented Neural Machine Translation (RAMT) architectures retrieve examples from memory to guide the generation process. While most works in this trend explore new ways to exploit the retrieved examples, the upstream retrieval step is mostly unexplored. In this paper, we study the effect of varying retrieval methods for several translation architectures to better understand the interplay between these two processes.We conduct experiments in two language pairs in a multi-domain setting and consider several downstream architectures based on a standard autoregressive model, an edit-based model, and a large language model with in-context learning. Our experiments show that the choice of the retrieval technique impacts the translation scores, with variance across architectures. We also discuss the effects of increasing the number and diversity of examples, which are mostly positive across the board.",
}
| Retrieval-Augmented Neural Machine Translation (RAMT) architectures retrieve examples from memory to guide the generation process. While most works in this trend explore new ways to exploit the retrieved examples, the upstream retrieval step is mostly unexplored. In this paper, we study the effect of varying retrieval methods for several translation architectures to better understand the interplay between these two processes. We conduct experiments in two language pairs in a multi-domain setting and consider several downstream architectures based on a standard autoregressive model, an edit-based model, and a large language model with in-context learning. Our experiments show that the choice of the retrieval technique impacts the translation scores, with variance across architectures. We also discuss the effects of increasing the number and diversity of examples, which are mostly positive across the board. | [
"Bouthors, Maxime",
"Crego, Josep",
"Yvon, Fran{\\c{c}}ois"
] | Retrieving Examples from Memory for Retrieval Augmented Neural Machine Translation: A Systematic Comparison | findings-naacl.190 | Poster | 2404.02835 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.191.bib | https://aclanthology.org/2024.findings-naacl.191/ | @inproceedings{karypis-etal-2024-extending,
title = "Extending Input Contexts of Language Models through Training on Segmented Sequences",
author = "Karypis, Petros and
McAuley, Julian and
Karypis, George",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.191",
doi = "10.18653/v1/2024.findings-naacl.191",
pages = "3040--3052",
abstract = "Effectively training language models on longinputs poses many technical challenges. As acost consideration, languages models are pre-trained on a fixed sequence length before beingadapted to longer sequences. We explore var-ious methods for adapting models to longerinputs by training on segmented sequences andan interpolation-based method for extendingabsolute positional embeddings. We developa training procedure to extend the input con-text size of pretrained models with no architec-tural changes and no additional memory coststhan training on the original input lengths. Bysub-sampling segments from long inputs whilemaintaining their original position the model isable to learn new positional interactions. Ourmethod benefits both models trained with abso-lute positional embeddings, by extending theirinput contexts, as well as popular relative posi-tional embedding methods showing a reducedperplexity on sequences longer than they weretrained on. We demonstrate our method canextend input contexts by a factor of 4{\mbox{$\times$}} whileimproving perplexity.",
}
| Effectively training language models on long inputs poses many technical challenges. As a cost consideration, language models are pre-trained on a fixed sequence length before being adapted to longer sequences. We explore various methods for adapting models to longer inputs by training on segmented sequences and an interpolation-based method for extending absolute positional embeddings. We develop a training procedure to extend the input context size of pretrained models with no architectural changes and no additional memory costs than training on the original input lengths. By sub-sampling segments from long inputs while maintaining their original position the model is able to learn new positional interactions. Our method benefits both models trained with absolute positional embeddings, by extending their input contexts, as well as popular relative positional embedding methods showing a reduced perplexity on sequences longer than they were trained on. We demonstrate our method can extend input contexts by a factor of 4{\mbox{$\times$}} while improving perplexity. | [
"Karypis, Petros",
"McAuley, Julian",
"Karypis, George"
] | Extending Input Contexts of Language Models through Training on Segmented Sequences | findings-naacl.191 | Poster | 2310.14633 | [
""
] | https://huggingface.co/papers/2310.14633 | 1 | 0 | 0 | 3 | 1 | [] | [] | [] |
https://aclanthology.org/2024.findings-naacl.192.bib | https://aclanthology.org/2024.findings-naacl.192/ | @inproceedings{li-etal-2024-reason,
title = "Reason from Fallacy: Enhancing Large Language Models{'} Logical Reasoning through Logical Fallacy Understanding",
author = "Li, Yanda and
Wang, Dixuan and
Liang, Jiaqing and
Jiang, Guochao and
He, Qianyu and
Xiao, Yanghua and
Yang, Deqing",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.192",
doi = "10.18653/v1/2024.findings-naacl.192",
pages = "3053--3066",
abstract = "Large Language Models (LLMs) have demonstrated good performance in many reasoning tasks, but they still struggle with some complicated reasoning tasks including logical reasoning. One non-negligible reason for LLMs{'} suboptimal performance on logical reasoning is their overlooking of understanding logical fallacies correctly. To evaluate LLMs{'} capability of logical fallacy understanding (LFU), we propose five concrete tasks from three cognitive dimensions of WHAT, WHY, and HOW in this paper. Towards these LFU tasks, we have successfully constructed a new dataset LFUD based on GPT-4 accompanied by a little human effort. Our extensive experiments justify that our LFUD can be used not only to evaluate LLMs{'} LFU capability, but also to fine-tune LLMs to obtain significantly enhanced performance on logical reasoning.",
}
| Large Language Models (LLMs) have demonstrated good performance in many reasoning tasks, but they still struggle with some complicated reasoning tasks including logical reasoning. One non-negligible reason for LLMs{'} suboptimal performance on logical reasoning is their failure to understand logical fallacies correctly. To evaluate LLMs{'} capability of logical fallacy understanding (LFU), we propose five concrete tasks from three cognitive dimensions of WHAT, WHY, and HOW in this paper. Towards these LFU tasks, we have successfully constructed a new dataset LFUD based on GPT-4 accompanied by a little human effort. Our extensive experiments justify that our LFUD can be used not only to evaluate LLMs{'} LFU capability, but also to fine-tune LLMs to obtain significantly enhanced performance on logical reasoning. | [
"Li, Y",
"a",
"Wang, Dixuan",
"Liang, Jiaqing",
"Jiang, Guochao",
"He, Qianyu",
"Xiao, Yanghua",
"Yang, Deqing"
] | Reason from Fallacy: Enhancing Large Language Models' Logical Reasoning through Logical Fallacy Understanding | findings-naacl.192 | Poster | 2404.04293 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.193.bib | https://aclanthology.org/2024.findings-naacl.193/ | @inproceedings{feng-etal-2024-exploring,
title = "Exploring Automated Distractor Generation for Math Multiple-choice Questions via Large Language Models",
author = "Feng, Wanyong and
Lee, Jaewook and
McNichols, Hunter and
Scarlatos, Alexander and
Smith, Digory and
Woodhead, Simon and
Ornelas, Nancy and
Lan, Andrew",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.193",
doi = "10.18653/v1/2024.findings-naacl.193",
pages = "3067--3082",
abstract = "Multiple-choice questions (MCQs) are ubiquitous in almost all levels of education since they are easy to administer, grade, and are a reliable format in assessments and practices. One of the most important aspects of MCQs is the distractors, i.e., incorrect options that are designed to target common errors or misconceptions among real students. To date, the task of crafting high-quality distractors largely remains a labor and time-intensive process for teachers and learning content designers, which has limited scalability. In this work, we study the task of automated distractor generation in the domain of math MCQs and explore a wide variety of large language model (LLM)-based approaches, from in-context learning to fine-tuning. We conduct extensive experiments using a real-world math MCQ dataset and find that although LLMs can generate some mathematically valid distractors, they are less adept at anticipating common errors or misconceptions among real students.",
}
| Multiple-choice questions (MCQs) are ubiquitous in almost all levels of education since they are easy to administer, grade, and are a reliable format in assessments and practices. One of the most important aspects of MCQs is the distractors, i.e., incorrect options that are designed to target common errors or misconceptions among real students. To date, the task of crafting high-quality distractors largely remains a labor and time-intensive process for teachers and learning content designers, which has limited scalability. In this work, we study the task of automated distractor generation in the domain of math MCQs and explore a wide variety of large language model (LLM)-based approaches, from in-context learning to fine-tuning. We conduct extensive experiments using a real-world math MCQ dataset and find that although LLMs can generate some mathematically valid distractors, they are less adept at anticipating common errors or misconceptions among real students. | [
"Feng, Wanyong",
"Lee, Jaewook",
"McNichols, Hunter",
"Scarlatos, Alex",
"er",
"Smith, Digory",
"Woodhead, Simon",
"Ornelas, Nancy",
"Lan, Andrew"
] | Exploring Automated Distractor Generation for Math Multiple-choice Questions via Large Language Models | findings-naacl.193 | Poster | 2404.02124 | [
"https://github.com/umass-ml4ed/prompt_distractor_generation_naacl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.194.bib | https://aclanthology.org/2024.findings-naacl.194/ | @inproceedings{tian-etal-2024-aspect,
title = "Aspect-based Sentiment Analysis with Context Denoising",
author = "Tian, Yuanhe and
Liu, Chang and
Song, Yan and
Xia, Fei and
Zhang, Yongdong",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.194",
doi = "10.18653/v1/2024.findings-naacl.194",
pages = "3083--3095",
abstract = "Given a sentence and a particular aspect term, aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity towards this aspect term, which provides fine-grained analysis on sentiment understanding and it has attracted much attention in recent years. In order to achieve a good performance on ABSA, it is important for a model to appropriately encode contextual information, especially identifying salient features and eliminating noise in the context. To make incorrect predictions, most existing approaches employ powerful text encoders to locate important context features, as well as noises that mislead ABSA models. These approaches determine the noise in the text for ABSA by assigning low weights to context features or directly removing them from model input, which runs the risk of computing wrong weights or eliminating important context information. In this paper, we propose to improve ABSA with context denoising, where three types of word-level information are regarded as noise, namely, lexicographic noise, bag-of-words noise, and syntax noise. We utilize diffusion networks to perform the denoising process to gradually eliminate them so as to better predict sentiment polarities for given aspect terms. Our approach uses task-specific noise rather than the standard stochastic Gaussian noise in the diffusion networks. The experimental results on five widely used ABSA datasets demonstrate the validity and effectiveness of our approach.",
}
| Given a sentence and a particular aspect term, aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity towards this aspect term, which provides fine-grained analysis on sentiment understanding and has attracted much attention in recent years. In order to achieve a good performance on ABSA, it is important for a model to appropriately encode contextual information, especially identifying salient features and eliminating noise in the context. To avoid incorrect predictions, most existing approaches employ powerful text encoders to locate important context features, as well as the noise that misleads ABSA models. These approaches determine the noise in the text for ABSA by assigning low weights to context features or directly removing them from model input, which runs the risk of computing wrong weights or eliminating important context information. In this paper, we propose to improve ABSA with context denoising, where three types of word-level information are regarded as noise, namely, lexicographic noise, bag-of-words noise, and syntax noise. We utilize diffusion networks to perform the denoising process to gradually eliminate them so as to better predict sentiment polarities for given aspect terms. Our approach uses task-specific noise rather than the standard stochastic Gaussian noise in the diffusion networks. The experimental results on five widely used ABSA datasets demonstrate the validity and effectiveness of our approach. | [
"Tian, Yuanhe",
"Liu, Chang",
"Song, Yan",
"Xia, Fei",
"Zhang, Yongdong"
] | Aspect-based Sentiment Analysis with Context Denoising | findings-naacl.194 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.195.bib | https://aclanthology.org/2024.findings-naacl.195/ | @inproceedings{prasanna-arora-2024-irumozhi,
title = "{I}ru{M}ozhi: Automatically classifying diglossia in {T}amil",
author = "Prasanna, Kabilan and
Arora, Aryaman",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.195",
doi = "10.18653/v1/2024.findings-naacl.195",
pages = "3096--3103",
abstract = "Tamil, a Dravidian language of South Asia, is a highly diglossic language with two very different registers in everyday use: Literary Tamil (preferred in writing and formal communication) and Spoken Tamil (confined to speech and informal media). Spoken Tamil is under-studied in modern NLP systems compared to Literary Tamil written in the Tamil script, as evidenced by a lack of datasets explicitly targetting the Spoken variety. In this paper, we release IruMozhi, a human-translated dataset of parallel text in Literary and Spoken Tamil. Using IruMozhi, we train classifiers on the task of identifying which Tamil variety a text belongs to. We use these models to gauge the availability of pretraining data in Spoken Tamil, to audit the composition of existing labelled datasets for Tamil, and to encourage future work on the variety.",
}
| Tamil, a Dravidian language of South Asia, is a highly diglossic language with two very different registers in everyday use: Literary Tamil (preferred in writing and formal communication) and Spoken Tamil (confined to speech and informal media). Spoken Tamil is under-studied in modern NLP systems compared to Literary Tamil written in the Tamil script, as evidenced by a lack of datasets explicitly targeting the Spoken variety. In this paper, we release IruMozhi, a human-translated dataset of parallel text in Literary and Spoken Tamil. Using IruMozhi, we train classifiers on the task of identifying which Tamil variety a text belongs to. We use these models to gauge the availability of pretraining data in Spoken Tamil, to audit the composition of existing labelled datasets for Tamil, and to encourage future work on the variety. | [
"Prasanna, Kabilan",
"Arora, Aryaman"
] | IruMozhi: Automatically classifying diglossia in Tamil | findings-naacl.195 | Poster | 2311.07804 | [
""
] | https://huggingface.co/papers/2311.07804 | 2 | 0 | 0 | 2 | 1 | [] | [
"aryaman/irumozhi"
] | [] |
https://aclanthology.org/2024.findings-naacl.196.bib | https://aclanthology.org/2024.findings-naacl.196/ | @inproceedings{zhan-etal-2024-renovi,
title = "{RENOVI}: A Benchmark Towards Remediating Norm Violations in Socio-Cultural Conversations",
author = "Zhan, Haolan and
Li, Zhuang and
Kang, Xiaoxi and
Feng, Tao and
Hua, Yuncheng and
Qu, Lizhen and
Ying, Yi and
Chandra, Mei Rianto and
Rosalin, Kelly and
Jureynolds, Jureynolds and
Sharma, Suraj and
Qu, Shilin and
Luo, Linhao and
Zukerman, Ingrid and
Soon, Lay-Ki and
Semnani Azad, Zhaleh and
Haf, Reza",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.196",
doi = "10.18653/v1/2024.findings-naacl.196",
pages = "3104--3117",
abstract = "Norm violations occur when individuals fail to conform to culturally accepted behaviors, which may lead to potential conflicts. Remediating norm violations requires social awareness and cultural sensitivity of the nuances at play. To equip interactive AI systems with a remediation ability, we offer ReNoVi {---} a large-scale corpus of 9,258 multi-turn dialogues annotated with social norms, as well as define a sequence of tasks to help understand and remediate norm violations step by step. ReNoVi consists of two parts: 512 human-authored dialogues (real data), and 8,746 synthetic conversations generated by ChatGPT through prompt learning. While collecting sufficient human-authored data is costly, synthetic conversations provide suitable amounts of data to help mitigate the scarcity of training data, as well as the chance to assess the alignment between LLMs and humans in the awareness of social norms. We thus harness the power of ChatGPT to generate synthetic training data for our task. To ensure the quality of both human-authored and synthetic data, we follow a quality control protocol during data collection. Our experimental results demonstrate the importance of remediating norm violations in socio-cultural conversations, as well as the improvement in performance obtained from synthetic data.",
}
| Norm violations occur when individuals fail to conform to culturally accepted behaviors, which may lead to potential conflicts. Remediating norm violations requires social awareness and cultural sensitivity of the nuances at play. To equip interactive AI systems with a remediation ability, we offer ReNoVi {---} a large-scale corpus of 9,258 multi-turn dialogues annotated with social norms, as well as define a sequence of tasks to help understand and remediate norm violations step by step. ReNoVi consists of two parts: 512 human-authored dialogues (real data), and 8,746 synthetic conversations generated by ChatGPT through prompt learning. While collecting sufficient human-authored data is costly, synthetic conversations provide suitable amounts of data to help mitigate the scarcity of training data, as well as the chance to assess the alignment between LLMs and humans in the awareness of social norms. We thus harness the power of ChatGPT to generate synthetic training data for our task. To ensure the quality of both human-authored and synthetic data, we follow a quality control protocol during data collection. Our experimental results demonstrate the importance of remediating norm violations in socio-cultural conversations, as well as the improvement in performance obtained from synthetic data. | [
"Zhan, Haolan",
"Li, Zhuang",
"Kang, Xiaoxi",
"Feng, Tao",
"Hua, Yuncheng",
"Qu, Lizhen",
"Ying, Yi",
"Ch",
"ra, Mei Rianto",
"Rosalin, Kelly",
"Jureynolds, Jureynolds",
"Sharma, Suraj",
"Qu, Shilin",
"Luo, Linhao",
"Zukerman, Ingrid",
"Soon, Lay-Ki",
"Semnani Azad, Zhaleh",
"Haf, Reza"
] | RENOVI: A Benchmark Towards Remediating Norm Violations in Socio-Cultural Conversations | findings-naacl.196 | Poster | 2402.11178 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.197.bib | https://aclanthology.org/2024.findings-naacl.197/ | @inproceedings{kang-etal-2024-human,
title = "Human-in-the-Loop Synthetic Text Data Inspection with Provenance Tracking",
author = "Kang, Hong Jin and
Harel-Canada, Fabrice and
Gulzar, Muhammad Ali and
Peng, Nanyun and
Kim, Miryung",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.197",
doi = "10.18653/v1/2024.findings-naacl.197",
pages = "3118--3129",
abstract = "Data augmentation techniques apply transformations to existing texts to generate additional data. The transformations may produce low-quality texts, where the meaning of the text is changed and the text may even be mangled beyond human comprehension. Analyzing the synthetically generated texts and their corresponding labels is slow and demanding. To winnow out texts with incorrect labels, we develop INSPECTOR, a human-in-the-loop data inspection technique. INSPECTOR combines the strengths of provenance tracking techniques with assistive labeling. INSPECTOR allows users to group related texts by their $\textit{transformation provenance}$, i.e., the transformations applied to the original text, or $\textit{feature provenance}$, the linguistic features of the original text. For assistive labeling, INSPECTOR computes metrics that approximate data quality, and allows users to compare the corresponding label of each text against the predictions of a large language model. In a user study, INSPECTOR increases the number of texts with correct labels identified by $3\times$ on a sentiment analysis task and by $4\times$ on a hate speech detection task. The participants found grouping the synthetically generated texts by their common transformation to be the most useful technique. Surprisingly, grouping texts by common linguistic features was perceived to be unhelpful. Contrary to prior work, our study finds that no single technique obviates the need for human inspection effort. This validates the design of INSPECTOR which combines both analysis of data provenance and assistive labeling to reduce human inspection effort.",
}
| Data augmentation techniques apply transformations to existing texts to generate additional data. The transformations may produce low-quality texts, where the meaning of the text is changed and the text may even be mangled beyond human comprehension. Analyzing the synthetically generated texts and their corresponding labels is slow and demanding. To winnow out texts with incorrect labels, we develop INSPECTOR, a human-in-the-loop data inspection technique. INSPECTOR combines the strengths of provenance tracking techniques with assistive labeling. INSPECTOR allows users to group related texts by their $\textit{transformation provenance}$, i.e., the transformations applied to the original text, or $\textit{feature provenance}$, the linguistic features of the original text. For assistive labeling, INSPECTOR computes metrics that approximate data quality, and allows users to compare the corresponding label of each text against the predictions of a large language model. In a user study, INSPECTOR increases the number of texts with correct labels identified by $3\times$ on a sentiment analysis task and by $4\times$ on a hate speech detection task. The participants found grouping the synthetically generated texts by their common transformation to be the most useful technique. Surprisingly, grouping texts by common linguistic features was perceived to be unhelpful. Contrary to prior work, our study finds that no single technique obviates the need for human inspection effort. This validates the design of INSPECTOR which combines both analysis of data provenance and assistive labeling to reduce human inspection effort. | [
"Kang, Hong Jin",
"Harel-Canada, Fabrice",
"Gulzar, Muhammad Ali",
"Peng, Nanyun",
"Kim, Miryung"
] | Human-in-the-Loop Synthetic Text Data Inspection with Provenance Tracking | findings-naacl.197 | Poster | 2404.18881 | [
"https://github.com/ucla-seal/provenanceinspector"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.198.bib | https://aclanthology.org/2024.findings-naacl.198/ | @inproceedings{lee-etal-2024-commit,
title = "{COMMIT}: Code-Mixing {E}nglish-Centric Large Language Model for Multilingual Instruction Tuning",
author = "Lee, Jaeseong and
Jung, YeonJoon and
Hwang, Seung-won",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.198",
doi = "10.18653/v1/2024.findings-naacl.198",
pages = "3130--3137",
abstract = "Recently, instruction-tuned large language models (LLMs) are showing prominent performance on various tasks, such as question answering. However, the majority of instruction-tuned LLMs are English-centric, which hinders their application to low-resource language QA. In this paper, we propose COde-Mixed Multilingual Instruction Tuning (COMMIT) to adapt English-centric LLM to low-resource language QA. We point out two main causes of English-centricness: imbalance of unlabeled data, and English-centric instruction tuning datasets. To deviate from English-centric instruction tuning, we propose to specialize code-mixing for instruction tuning, which blocks code-mixing in English templates, to leverage the potential of its superiority. To overcome data imbalance, we perform cross-lingual alignment. The majority of cross-lingual alignment works focused on making representations similar, which is not desirable to decoder-based LLMs, such as LLaMA. Therefore, we propose code-mixed continual causal language modeling to align the decoder. COMMIT improves the exact match score of low-resourced language QA by up to 32x. Code is publicly available.",
}
| Recently, instruction-tuned large language models (LLMs) are showing prominent performance on various tasks, such as question answering. However, the majority of instruction-tuned LLMs are English-centric, which hinders their application to low-resource language QA. In this paper, we propose COde-Mixed Multilingual Instruction Tuning (COMMIT) to adapt English-centric LLM to low-resource language QA. We point out two main causes of English-centricness: imbalance of unlabeled data, and English-centric instruction tuning datasets. To deviate from English-centric instruction tuning, we propose to specialize code-mixing for instruction tuning, which blocks code-mixing in English templates, to leverage the potential of its superiority. To overcome data imbalance, we perform cross-lingual alignment. The majority of cross-lingual alignment works focused on making representations similar, which is not desirable to decoder-based LLMs, such as LLaMA. Therefore, we propose code-mixed continual causal language modeling to align the decoder. COMMIT improves the exact match score of low-resourced language QA by up to 32x. Code is publicly available. | [
"Lee, Jaeseong",
"Jung, YeonJoon",
"Hwang, Seung-won"
] | COMMIT: Code-Mixing English-Centric Large Language Model for Multilingual Instruction Tuning | findings-naacl.198 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.199.bib | https://aclanthology.org/2024.findings-naacl.199/ | @inproceedings{maekawa-etal-2024-dilm,
title = "{D}i{LM}: Distilling Dataset into Language Model for Text-level Dataset Distillation",
author = "Maekawa, Aru and
Kosugi, Satoshi and
Funakoshi, Kotaro and
Okumura, Manabu",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.199",
doi = "10.18653/v1/2024.findings-naacl.199",
pages = "3138--3153",
abstract = "Dataset distillation aims to compress a training dataset by creating a small number of informative synthetic samples such that neural networks trained on them perform as well as those trained on the original training dataset. Current text dataset distillation methods create each synthetic sample as a sequence of word embeddings instead of a text to apply gradient-based optimization; however, such embedding-level distilled datasets cannot be used for training other models whose word embedding weights are different from the model used for distillation. To address this issue, we propose a novel text dataset distillation approach, called Distilling dataset into Language Model (DiLM), which trains a language model to generate informative synthetic training samples as text data, instead of directly optimizing synthetic samples. We evaluated DiLM on various text classification datasets and showed that distilled synthetic datasets from DiLM outperform those from current coreset selection methods. DiLM achieved remarkable generalization performance in training different types of models and in-context learning of large language models. Our code will be available at https://github.com/arumaekawa/DiLM.",
}
| Dataset distillation aims to compress a training dataset by creating a small number of informative synthetic samples such that neural networks trained on them perform as well as those trained on the original training dataset. Current text dataset distillation methods create each synthetic sample as a sequence of word embeddings instead of a text to apply gradient-based optimization; however, such embedding-level distilled datasets cannot be used for training other models whose word embedding weights are different from the model used for distillation. To address this issue, we propose a novel text dataset distillation approach, called Distilling dataset into Language Model (DiLM), which trains a language model to generate informative synthetic training samples as text data, instead of directly optimizing synthetic samples. We evaluated DiLM on various text classification datasets and showed that distilled synthetic datasets from DiLM outperform those from current coreset selection methods. DiLM achieved remarkable generalization performance in training different types of models and in-context learning of large language models. Our code will be available at https://github.com/arumaekawa/DiLM. | [
"Maekawa, Aru",
"Kosugi, Satoshi",
"Funakoshi, Kotaro",
"Okumura, Manabu"
] | DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation | findings-naacl.199 | Poster | 2404.00264 | [
"https://github.com/arumaekawa/dilm"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.200.bib | https://aclanthology.org/2024.findings-naacl.200/ | @inproceedings{gong-etal-2024-mindagent,
title = "{M}ind{A}gent: Emergent Gaming Interaction",
author = "Gong, Ran and
Huang, Qiuyuan and
Ma, Xiaojian and
Noda, Yusuke and
Durante, Zane and
Zheng, Zilong and
Terzopoulos, Demetri and
Fei-Fei, Li and
Gao, Jianfeng and
Vo, Hoi",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.200",
doi = "10.18653/v1/2024.findings-naacl.200",
pages = "3154--3183",
abstract = "Large Foundation Models (LFMs) can perform complex scheduling in a multi-agent system and can coordinate agents to complete sophisticated tasks that require extensive collaboration.However, despite the introduction of numerous gaming frameworks, the community lacks adequate benchmarks that support the implementation of a general multi-agent infrastructure encompassing collaboration between LFMs and human-NPCs. We propose a novel infrastructure{---}Mindagent{---}for evaluating planning and coordination capabilities in the context of gaming interaction. In particular, our infrastructure leverages an existing gaming framework to (i) act as the coordinator for a multi-agent system, (ii) collaborate with human players via instructions, and (iii) enable in-context learning based on few-shot prompting with feedback.Furthermore, we introduce {``}Cuisineworld{''}, a new gaming scenario and its related benchmark that supervises multiple agents playing the game simultaneously and measures multi-agent collaboration efficiency. We have conducted comprehensive evaluations with a new auto-metric Collaboration Score: CoS for assessing the collaboration efficiency. Finally, Mindagent can be deployed in real-world gaming scenarios in a customized VR version of Cuisineworld and adapted in the {``}Minecraft{''} domain. Our work involving LFMs within our new infrastructure for general-purpose scheduling and coordination can elucidate how such skills may be obtained by learning from large language corpora.",
}
| Large Foundation Models (LFMs) can perform complex scheduling in a multi-agent system and can coordinate agents to complete sophisticated tasks that require extensive collaboration. However, despite the introduction of numerous gaming frameworks, the community lacks adequate benchmarks that support the implementation of a general multi-agent infrastructure encompassing collaboration between LFMs and human-NPCs. We propose a novel infrastructure{---}Mindagent{---}for evaluating planning and coordination capabilities in the context of gaming interaction. In particular, our infrastructure leverages an existing gaming framework to (i) act as the coordinator for a multi-agent system, (ii) collaborate with human players via instructions, and (iii) enable in-context learning based on few-shot prompting with feedback. Furthermore, we introduce {``}Cuisineworld{''}, a new gaming scenario and its related benchmark that supervises multiple agents playing the game simultaneously and measures multi-agent collaboration efficiency. We have conducted comprehensive evaluations with a new auto-metric Collaboration Score: CoS for assessing the collaboration efficiency. Finally, Mindagent can be deployed in real-world gaming scenarios in a customized VR version of Cuisineworld and adapted in the {``}Minecraft{''} domain. Our work involving LFMs within our new infrastructure for general-purpose scheduling and coordination can elucidate how such skills may be obtained by learning from large language corpora. | [
"Gong, Ran",
"Huang, Qiuyuan",
"Ma, Xiaojian",
"Noda, Yusuke",
"Durante, Zane",
"Zheng, Zilong",
"Terzopoulos, Demetri",
"Fei-Fei, Li",
"Gao, Jianfeng",
"Vo, Hoi"
] | MindAgent: Emergent Gaming Interaction | findings-naacl.200 | Poster | 2309.09971 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.201.bib | https://aclanthology.org/2024.findings-naacl.201/ | @inproceedings{duan-etal-2024-botchat,
title = "{B}ot{C}hat: Evaluating {LLM}s{'} Capabilities of Having Multi-Turn Dialogues",
author = "Duan, Haodong and
Wei, Jueqi and
Wang, Chonghua and
Liu, Hongwei and
Fang, Yixiao and
Zhang, Songyang and
Lin, Dahua and
Chen, Kai",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.201",
doi = "10.18653/v1/2024.findings-naacl.201",
pages = "3184--3200",
abstract = "In the realm of modern Large Language Models (LLMs), facilitating high-quality, multi-turn dialogues with humans represents a cornerstone feature. However, human-based evaluation of such a capability involves substantial manual effort. This study offers a formative assessment of current LLMs{'} proficiency in emulating human-like, multi-turn conversations using an LLM-centric approach. The evaluation encompasses three key elements in the evaluation pipeline: utterance generation, evaluation protocol, and judgement, and we delve deeply into each aspect. GPT-4, both as an utterance generator and as a judge, exhibits exceptional performance. As a generator, GPT-4 crafts dialogues indistinguishable from human interactions in terms of style and flow. When judging, it shows a heightened alignment with human evaluative standards and consistency. Conversely, other LLMs face challenges in producing quality multi-turn dialogues, hindered by inadequate instruction-following abilities, a propensity for prolix utterances, and overall limited capabilities. Notably, generating extensive dialogues (e.g., spanning tens of turns) remains a formidable task for most LLMs, particularly in Chinese contexts. We hope that our work can serve as a valuable resource for evaluating the multi-turn chatting capabilities of LLMs. Related resources are available at https://github.com/open-compass/BotChat.",
}
| In the realm of modern Large Language Models (LLMs), facilitating high-quality, multi-turn dialogues with humans represents a cornerstone feature. However, human-based evaluation of such a capability involves substantial manual effort. This study offers a formative assessment of current LLMs{'} proficiency in emulating human-like, multi-turn conversations using an LLM-centric approach. The evaluation encompasses three key elements in the evaluation pipeline: utterance generation, evaluation protocol, and judgement, and we delve deeply into each aspect. GPT-4, both as an utterance generator and as a judge, exhibits exceptional performance. As a generator, GPT-4 crafts dialogues indistinguishable from human interactions in terms of style and flow. When judging, it shows a heightened alignment with human evaluative standards and consistency. Conversely, other LLMs face challenges in producing quality multi-turn dialogues, hindered by inadequate instruction-following abilities, a propensity for prolix utterances, and overall limited capabilities. Notably, generating extensive dialogues (e.g., spanning tens of turns) remains a formidable task for most LLMs, particularly in Chinese contexts. We hope that our work can serve as a valuable resource for evaluating the multi-turn chatting capabilities of LLMs. Related resources are available at https://github.com/open-compass/BotChat. | [
"Duan, Haodong",
"Wei, Jueqi",
"Wang, Chonghua",
"Liu, Hongwei",
"Fang, Yixiao",
"Zhang, Songyang",
"Lin, Dahua",
"Chen, Kai"
] | BotChat: Evaluating LLMs' Capabilities of Having Multi-Turn Dialogues | findings-naacl.201 | Poster | 2310.13650 | [
"https://github.com/open-compass/botchat"
] | https://huggingface.co/papers/2310.13650 | 2 | 0 | 0 | 8 | 1 | [] | [] | [] |
https://aclanthology.org/2024.findings-naacl.202.bib | https://aclanthology.org/2024.findings-naacl.202/ | @inproceedings{wang-etal-2024-learning-mutually,
title = "Learning Mutually Informed Representations for Characters and Subwords",
author = "Wang, Yilin and
Hu, Xinyi and
Gormley, Matthew",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.202",
doi = "10.18653/v1/2024.findings-naacl.202",
pages = "3201--3213",
abstract = "Most pretrained language models rely on subword tokenization, which processes text as a sequence of subword tokens. However, different granularities of text, such as characters, subwords, and words, can contain different kinds of information. Previous studies have shown that incorporating multiple input granularities improves model generalization, yet very few of them outputs useful representations for each granularity. In this paper, we introduce the entanglement model, aiming to combine character and subword language models. Inspired by vision-language models, our model treats characters and subwords as separate modalities, and it generates mutually informed representations for both granularities as output. We evaluate our model on text classification, named entity recognition, POS-tagging, and character-level sequence labeling (intraword code-switching). Notably, the entanglement model outperforms its backbone language models, particularly in the presence of noisy texts and low-resource languages. Furthermore, the entanglement model even outperforms larger pre-trained models on all English sequence labeling tasks and classification tasks. We make our code publically available.",
}
| Most pretrained language models rely on subword tokenization, which processes text as a sequence of subword tokens. However, different granularities of text, such as characters, subwords, and words, can contain different kinds of information. Previous studies have shown that incorporating multiple input granularities improves model generalization, yet very few of them output useful representations for each granularity. In this paper, we introduce the entanglement model, aiming to combine character and subword language models. Inspired by vision-language models, our model treats characters and subwords as separate modalities, and it generates mutually informed representations for both granularities as output. We evaluate our model on text classification, named entity recognition, POS-tagging, and character-level sequence labeling (intraword code-switching). Notably, the entanglement model outperforms its backbone language models, particularly in the presence of noisy texts and low-resource languages. Furthermore, the entanglement model even outperforms larger pre-trained models on all English sequence labeling tasks and classification tasks. We make our code publicly available. | [
"Wang, Yilin",
"Hu, Xinyi",
"Gormley, Matthew"
] | Learning Mutually Informed Representations for Characters and Subwords | findings-naacl.202 | Poster | 2311.07853 | [
"https://github.com/tonyw42/noisy-ie"
] | https://huggingface.co/papers/2311.07853 | 0 | 1 | 0 | 3 | 1 | [] | [] | [] |
https://aclanthology.org/2024.findings-naacl.203.bib | https://aclanthology.org/2024.findings-naacl.203/ | @inproceedings{gao-etal-2024-novel,
title = "A Novel Two-step Fine-tuning Framework for Transfer Learning in Low-Resource Neural Machine Translation",
author = "Gao, Yuan and
Hou, Feng and
Wang, Ruili",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.203",
doi = "10.18653/v1/2024.findings-naacl.203",
pages = "3214--3224",
abstract = "Existing transfer learning methods for neural machine translation typically use a well-trained translation model (i.e., a parent model) of a high-resource language pair to directly initialize a translation model (i.e., a child model) of a low-resource language pair, and the child model is then fine-tuned with corresponding datasets. In this paper, we propose a novel two-step fine-tuning (TSFT) framework for transfer learning in low-resource neural machine translation. In the first step, we adjust the parameters of the parent model to fit the child language by using the child source data. In the second step, we transfer the adjusted parameters to the child model and fine-tune it with a proposed distillation loss for efficient optimization. Our experimental results on five low-resource translations demonstrate that our framework yields significant improvements over various strong transfer learning baselines. Further analysis demonstrated the effectiveness of different components in our framework.",
}
| Existing transfer learning methods for neural machine translation typically use a well-trained translation model (i.e., a parent model) of a high-resource language pair to directly initialize a translation model (i.e., a child model) of a low-resource language pair, and the child model is then fine-tuned with corresponding datasets. In this paper, we propose a novel two-step fine-tuning (TSFT) framework for transfer learning in low-resource neural machine translation. In the first step, we adjust the parameters of the parent model to fit the child language by using the child source data. In the second step, we transfer the adjusted parameters to the child model and fine-tune it with a proposed distillation loss for efficient optimization. Our experimental results on five low-resource translations demonstrate that our framework yields significant improvements over various strong transfer learning baselines. Further analysis demonstrated the effectiveness of different components in our framework. | [
"Gao, Yuan",
"Hou, Feng",
"Wang, Ruili"
] | A Novel Two-step Fine-tuning Framework for Transfer Learning in Low-Resource Neural Machine Translation | findings-naacl.203 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.204.bib | https://aclanthology.org/2024.findings-naacl.204/ | @inproceedings{miao-etal-2024-enhancing,
title = "Enhancing Cross-lingual Sentence Embedding for Low-resource Languages with Word Alignment",
author = "Miao, Zhongtao and
Wu, Qiyu and
Zhao, Kaiyan and
Wu, Zilong and
Tsuruoka, Yoshimasa",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.204",
doi = "10.18653/v1/2024.findings-naacl.204",
pages = "3225--3236",
abstract = "The field of cross-lingual sentence embeddings has recently experienced significant advancements, but research concerning low-resource languages has lagged due to the scarcity of parallel corpora. This paper shows that cross-lingual word representation in low-resource languages is notably under-aligned with that in high-resource languages in current models. To address this, we introduce a novel framework that explicitly aligns words between English and eight low-resource languages, utilizing off-the-shelf word alignment models. This framework incorporates three primary training objectives: aligned word prediction and word translation ranking, along with the widely used translation ranking. We evaluate our approach through experiments on the bitext retrieval task, which demonstrate substantial improvements on sentence embeddings in low-resource languages. In addition, the competitive performance of the proposed model across a broader range of tasks in high-resource languages underscores its practicality.",
}
| The field of cross-lingual sentence embeddings has recently experienced significant advancements, but research concerning low-resource languages has lagged due to the scarcity of parallel corpora. This paper shows that cross-lingual word representation in low-resource languages is notably under-aligned with that in high-resource languages in current models. To address this, we introduce a novel framework that explicitly aligns words between English and eight low-resource languages, utilizing off-the-shelf word alignment models. This framework incorporates three primary training objectives: aligned word prediction and word translation ranking, along with the widely used translation ranking. We evaluate our approach through experiments on the bitext retrieval task, which demonstrate substantial improvements on sentence embeddings in low-resource languages. In addition, the competitive performance of the proposed model across a broader range of tasks in high-resource languages underscores its practicality. | [
"Miao, Zhongtao",
"Wu, Qiyu",
"Zhao, Kaiyan",
"Wu, Zilong",
"Tsuruoka, Yoshimasa"
] | Enhancing Cross-lingual Sentence Embedding for Low-resource Languages with Word Alignment | findings-naacl.204 | Poster | 2404.02490 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.205.bib | https://aclanthology.org/2024.findings-naacl.205/ | @inproceedings{he-etal-2024-c3lpgcn,
title = "C$^{3}${LPGCN}:Integrating Contrastive Learning and Cooperative Learning with Prompt into Graph Convolutional Network for Aspect-based Sentiment Analysis",
author = "He, Ye and
Zou, Shihao and
      Chen, Yuzhe and
Huang, Xianying",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.205",
doi = "10.18653/v1/2024.findings-naacl.205",
pages = "3237--3247",
abstract = "Aspect-based Sentiment Analysis (ABSA) is a fine-grained task. Recently, using graph convolutional networks (GCNs) to model syntactic information has become a popular topic. In addition, a growing consensus exists to enhance sentence representation using contrastive learning. However, when modeling syntactic information, incorrect syntactic structure may introduce additional noise. Meanwhile, we believe that contrastive learning implicitly introduce label information as priori. Therefore, we propose C$^{3}$LPGCN, which integrates Contrastive Learning and Cooperative Learning with Prompt into GCN. Specifically, to alleviate the noise when modeling syntactic information, we propose mask-aware aspect information filter, which combines prompt information of template with aspect information to filter the syntactic information. Besides, we propose prompt-based contrastive learning and cooperative learning to utilise the label information further. On the one hand, we construct prompts containing labels for contrastive learning, by which the model can focus more on task-relevant features. On the other hand, cooperative learning further extracts label information by aligning input samples{'} representation and output distribution with label samples. Extensive experiments on three datasets demonstrate that our method significantly improves the model{'}s performance compared to traditional contrastive learning methods. Moreover, our C$^{3}$LPGCN outperforms state-of-the-art methods. Our source code and final models are publicly available at github",
}
| Aspect-based Sentiment Analysis (ABSA) is a fine-grained task. Recently, using graph convolutional networks (GCNs) to model syntactic information has become a popular topic. In addition, a growing consensus exists to enhance sentence representation using contrastive learning. However, when modeling syntactic information, incorrect syntactic structure may introduce additional noise. Meanwhile, we believe that contrastive learning implicitly introduces label information as a prior. Therefore, we propose C$^{3}$LPGCN, which integrates Contrastive Learning and Cooperative Learning with Prompt into GCN. Specifically, to alleviate the noise when modeling syntactic information, we propose a mask-aware aspect information filter, which combines the prompt information of the template with aspect information to filter the syntactic information. Besides, we propose prompt-based contrastive learning and cooperative learning to utilise the label information further. On the one hand, we construct prompts containing labels for contrastive learning, by which the model can focus more on task-relevant features. On the other hand, cooperative learning further extracts label information by aligning input samples{'} representation and output distribution with label samples. Extensive experiments on three datasets demonstrate that our method significantly improves the model{'}s performance compared to traditional contrastive learning methods. Moreover, our C$^{3}$LPGCN outperforms state-of-the-art methods. Our source code and final models are publicly available at github | [
"He, Ye",
"Zou, Shihao",
"YuzheChen, YuzheChen",
"Huang, Xianying"
] | C^3LPGCN: Integrating Contrastive Learning and Cooperative Learning with Prompt into Graph Convolutional Network for Aspect-based Sentiment Analysis | findings-naacl.205 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.206.bib | https://aclanthology.org/2024.findings-naacl.206/ | @inproceedings{yan-etal-2024-visual,
title = "Visual Enhanced Entity-Level Interaction Network for Multimodal Summarization",
author = "Yan, Haolong and
Tang, Binghao and
Lin, Boda and
Zhao, Gang and
Li, Si",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.206",
doi = "10.18653/v1/2024.findings-naacl.206",
pages = "3248--3260",
abstract = "MultiModal Summarization (MMS) aims to generate a concise summary based on multimodal data like texts and images and has wide application in multimodal fields.Previous works mainly focus on the coarse-level textual and visual features in which the overall features of the image interact with the whole sentence.However, the entities of the input text and the objects of the image may be underutilized, limiting the performance of current MMS models.In this paper, we propose a novel Visual Enhanced Entity-Level Interaction Network (VE-ELIN) to address the problem of underutilization of multimodal inputs at a fine-grained level in two ways.We first design a cross-modal entity interaction module to better fuse the entity information in text and the object information in vision.Then, we design an object-guided visual enhancement module to fully extract the visual features and enhance the focus of the image on the object area.We evaluate VE-ELIN on two MMS datasets and propose new metrics to measure the factual consistency of entities in the output.Finally, experimental results demonstrate that VE-ELIN is effective and outperforms previous methods under both traditional metrics and ours.The source code is available at https://github.com/summoneryhl/VE-ELIN.",
}
| MultiModal Summarization (MMS) aims to generate a concise summary based on multimodal data like texts and images and has wide application in multimodal fields. Previous works mainly focus on the coarse-level textual and visual features in which the overall features of the image interact with the whole sentence. However, the entities of the input text and the objects of the image may be underutilized, limiting the performance of current MMS models. In this paper, we propose a novel Visual Enhanced Entity-Level Interaction Network (VE-ELIN) to address the problem of underutilization of multimodal inputs at a fine-grained level in two ways. We first design a cross-modal entity interaction module to better fuse the entity information in text and the object information in vision. Then, we design an object-guided visual enhancement module to fully extract the visual features and enhance the focus of the image on the object area. We evaluate VE-ELIN on two MMS datasets and propose new metrics to measure the factual consistency of entities in the output. Finally, experimental results demonstrate that VE-ELIN is effective and outperforms previous methods under both traditional metrics and ours. The source code is available at https://github.com/summoneryhl/VE-ELIN. | [
"Yan, Haolong",
"Tang, Binghao",
"Lin, Boda",
"Zhao, Gang",
"Li, Si"
] | Visual Enhanced Entity-Level Interaction Network for Multimodal Summarization | findings-naacl.206 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.207.bib | https://aclanthology.org/2024.findings-naacl.207/ | @inproceedings{wang-etal-2024-knowledgeable,
title = "Knowledgeable In-Context Tuning: Exploring and Exploiting Factual Knowledge for In-Context Learning",
author = "Wang, Jianing and
Wang, Chengyu and
Tan, Chuanqi and
Huang, Jun and
Gao, Ming",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.207",
doi = "10.18653/v1/2024.findings-naacl.207",
pages = "3261--3280",
abstract = "Large language models (LLMs) enable in-context learning (ICL) by conditioning on a few labeled training examples as a text-based prompt, eliminating the need for parameter updates and achieving competitive performance. In this paper, we demonstrate that factual knowledge is imperative for the performance of ICL in three core facets: the inherent knowledge learned in LLMs, the factual knowledge derived from the selected in-context examples, and the knowledge biases in LLMs for output generation. To unleash the power of LLMs in few-shot learning scenarios, we introduce a novel Knowledgeable In-Context Tuning (KICT) framework to further improve the performance of ICL:1) injecting knowledge into LLMs during continual self-supervised pre-training, 2) judiciously selecting the examples for ICL with high knowledge relevance, and 3) calibrating the prediction results based on prior knowledge.We evaluate the proposed approaches on autoregressive models (e.g., GPT-style LLMs) over multiple text classification and question-answering tasks. Experimental results demonstrate that KICT substantially outperforms strong baselines and improves by more than 13{\%} and 7{\%} on text classification and question-answering tasks, respectively.",
}
| Large language models (LLMs) enable in-context learning (ICL) by conditioning on a few labeled training examples as a text-based prompt, eliminating the need for parameter updates and achieving competitive performance. In this paper, we demonstrate that factual knowledge is imperative for the performance of ICL in three core facets: the inherent knowledge learned in LLMs, the factual knowledge derived from the selected in-context examples, and the knowledge biases in LLMs for output generation. To unleash the power of LLMs in few-shot learning scenarios, we introduce a novel Knowledgeable In-Context Tuning (KICT) framework to further improve the performance of ICL: 1) injecting knowledge into LLMs during continual self-supervised pre-training, 2) judiciously selecting the examples for ICL with high knowledge relevance, and 3) calibrating the prediction results based on prior knowledge. We evaluate the proposed approaches on autoregressive models (e.g., GPT-style LLMs) over multiple text classification and question-answering tasks. Experimental results demonstrate that KICT substantially outperforms strong baselines and improves by more than 13{\%} and 7{\%} on text classification and question-answering tasks, respectively. | [
"Wang, Jianing",
"Wang, Chengyu",
"Tan, Chuanqi",
"Huang, Jun",
"Gao, Ming"
] | Knowledgeable In-Context Tuning: Exploring and Exploiting Factual Knowledge for In-Context Learning | findings-naacl.207 | Poster | 2309.14771 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.208.bib | https://aclanthology.org/2024.findings-naacl.208/ | @inproceedings{drinkall-etal-2024-time,
title = "Time Machine {GPT}",
author = "Drinkall, Felix and
Rahimikia, Eghbal and
Pierrehumbert, Janet and
Zohren, Stefan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.208",
doi = "10.18653/v1/2024.findings-naacl.208",
pages = "3281--3292",
abstract = "Large language models (LLMs) are often trained on extensive, temporally indiscriminate text corpora, reflecting the lack of datasets with temporal metadata. This approach is not aligned with the evolving nature of language. Conventional methods for creating temporally adapted language models often depend on further pre-training static models on time-specific data. This paper presents a new approach: a series of point-in-time LLMs called TimeMachineGPT (TiMaGPT), specifically designed to be nonprognosticative. This ensures they remain uninformed about future factual information and linguistic changes. This strategy is beneficial for understanding language evolution and is of critical importance when applying models in dynamic contexts, such as time-series forecasting, where foresight of future information can prove problematic. We provide access to both the models and training datasets.",
}
| Large language models (LLMs) are often trained on extensive, temporally indiscriminate text corpora, reflecting the lack of datasets with temporal metadata. This approach is not aligned with the evolving nature of language. Conventional methods for creating temporally adapted language models often depend on further pre-training static models on time-specific data. This paper presents a new approach: a series of point-in-time LLMs called TimeMachineGPT (TiMaGPT), specifically designed to be nonprognosticative. This ensures they remain uninformed about future factual information and linguistic changes. This strategy is beneficial for understanding language evolution and is of critical importance when applying models in dynamic contexts, such as time-series forecasting, where foresight of future information can prove problematic. We provide access to both the models and training datasets. | [
"Drinkall, Felix",
"Rahimikia, Eghbal",
"Pierrehumbert, Janet",
"Zohren, Stefan"
] | Time Machine GPT | findings-naacl.208 | Poster | 2404.18543 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.209.bib | https://aclanthology.org/2024.findings-naacl.209/ | @inproceedings{kumari-etal-2024-end,
title = "An End-to-End Submodular Framework for Data-Efficient In-Context Learning",
author = "Kumari, Lilly and
Wang, Shengjie and
Das, Arnav and
Zhou, Tianyi and
Bilmes, Jeff",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.209",
doi = "10.18653/v1/2024.findings-naacl.209",
pages = "3293--3308",
abstract = "Recent advancements in natural language tasks leverage the emergent In-Context Learning (ICL) ability of pretrained Large Language Models (LLMs). ICL enables LLMs to perform new tasks by utilizing a limited number of input-output examples as prompts. While ICL circumvents the costly step of finetuning LLMs, its effectiveness is heavily dependent on the quality and ordering of provided examples (called exemplars). In this work, we propose a two-stage data-efficient framework $\textit{Div-S3}$ for exemplar selection for ICL. The first stage focuses on data annotation and employs a pool-based active learning approach to select a set of $\textit{Div}$erse and informative exemplars from the target tasks{'} unlabeled pool. Given a test input/query, the second stage uses Submodular Span Summarization ($\textit{S3}$) to select the most relevant and non-redundant exemplars from the annotated pool of a limited budget. On 7 different NLP datasets and 5 LLMs of varying complexities, we show $\textit{Div-S3}$ outperforms (1) existing active learning-based methods for data annotation for ICL and (2) similarity-based methods for test query-specific exemplars retrieval.",
}
| Recent advancements in natural language tasks leverage the emergent In-Context Learning (ICL) ability of pretrained Large Language Models (LLMs). ICL enables LLMs to perform new tasks by utilizing a limited number of input-output examples as prompts. While ICL circumvents the costly step of finetuning LLMs, its effectiveness is heavily dependent on the quality and ordering of provided examples (called exemplars). In this work, we propose a two-stage data-efficient framework $\textit{Div-S3}$ for exemplar selection for ICL. The first stage focuses on data annotation and employs a pool-based active learning approach to select a set of $\textit{Div}$erse and informative exemplars from the target tasks{'} unlabeled pool. Given a test input/query, the second stage uses Submodular Span Summarization ($\textit{S3}$) to select the most relevant and non-redundant exemplars from the annotated pool within a limited budget. On 7 different NLP datasets and 5 LLMs of varying complexities, we show $\textit{Div-S3}$ outperforms (1) existing active learning-based methods for data annotation for ICL and (2) similarity-based methods for test query-specific exemplar retrieval. | [
"Kumari, Lilly",
"Wang, Shengjie",
"Das, Arnav",
"Zhou, Tianyi",
"Bilmes, Jeff"
] | An End-to-End Submodular Framework for Data-Efficient In-Context Learning | findings-naacl.209 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.210.bib | https://aclanthology.org/2024.findings-naacl.210/ | @inproceedings{kuulmets-etal-2024-teaching,
title = "Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer",
author = "Kuulmets, Hele-Andra and
Purason, Taido and
Luhtaru, Agnes and
Fishel, Mark",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.210",
doi = "10.18653/v1/2024.findings-naacl.210",
pages = "3309--3325",
abstract = "This paper explores cost-efficient methods to adapt pretrained Large Language Models (LLMs) to new lower-resource languages, with a specific focus on Estonian. Leveraging the Llama 2 model, we investigate the impact of combining cross-lingual instruction-tuning with additional monolingual pretraining. Our results demonstrate that even a relatively small amount of additional monolingual pretraining followed by cross-lingual instruction-tuning significantly enhances results on Estonian. Furthermore, we showcase cross-lingual knowledge transfer from high-quality English instructions to Estonian, resulting in improvements in commonsense reasoning and multi-turn conversation capabilities. Our best model, named Llammas, represents the first open-source instruction-following LLM for Estonian. Additionally, we publish Alpaca-est, the first general task instruction dataset for Estonia. These contributions mark the initial progress in the direction of developing open-source LLMs for Estonian.",
}
| This paper explores cost-efficient methods to adapt pretrained Large Language Models (LLMs) to new lower-resource languages, with a specific focus on Estonian. Leveraging the Llama 2 model, we investigate the impact of combining cross-lingual instruction-tuning with additional monolingual pretraining. Our results demonstrate that even a relatively small amount of additional monolingual pretraining followed by cross-lingual instruction-tuning significantly enhances results on Estonian. Furthermore, we showcase cross-lingual knowledge transfer from high-quality English instructions to Estonian, resulting in improvements in commonsense reasoning and multi-turn conversation capabilities. Our best model, named Llammas, represents the first open-source instruction-following LLM for Estonian. Additionally, we publish Alpaca-est, the first general task instruction dataset for Estonian. These contributions mark the initial progress in the direction of developing open-source LLMs for Estonian. | [
"Kuulmets, Hele-Andra",
"Purason, Taido",
"Luhtaru, Agnes",
"Fishel, Mark"
] | Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer | findings-naacl.210 | Poster | 2404.04042 | [
"https://github.com/tartunlp/llammas"
] | https://huggingface.co/papers/2404.04042 | 2 | 0 | 0 | 4 | 1 | [
"tartuNLP/Llammas",
"tartuNLP/Llammas-base"
] | [] | [] |
https://aclanthology.org/2024.findings-naacl.211.bib | https://aclanthology.org/2024.findings-naacl.211/ | @inproceedings{chuang-etal-2024-simulating,
title = "Simulating Opinion Dynamics with Networks of {LLM}-based Agents",
author = "Chuang, Yun-Shiuan and
Goyal, Agam and
Harlalka, Nikunj and
Suresh, Siddharth and
Hawkins, Robert and
Yang, Sijia and
Shah, Dhavan and
Hu, Junjie and
Rogers, Timothy",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.211",
doi = "10.18653/v1/2024.findings-naacl.211",
pages = "3326--3346",
abstract = "Accurately simulating human opinion dynamics is crucial for understanding a variety of societal phenomena, including polarization and the spread of misinformation. However, the agent-based models (ABMs) commonly used for such simulations often over-simplify human behavior. We propose a new approach to simulating opinion dynamics based on populations of Large Language Models (LLMs). Our findings reveal a strong inherent bias in LLM agents towards producing accurate information, leading simulated agents to consensus in line with scientific reality. This bias limits their utility for understanding resistance to consensus views on issues like climate change. After inducing confirmation bias through prompt engineering, however, we observed opinion fragmentation in line with existing agent-based modeling and opinion dynamics research. These insights highlight the promise and limitations of LLM agents in this domain and suggest a path forward: refining LLMs with real-world discourse to better simulate the evolution of human beliefs.",
}
| Accurately simulating human opinion dynamics is crucial for understanding a variety of societal phenomena, including polarization and the spread of misinformation. However, the agent-based models (ABMs) commonly used for such simulations often over-simplify human behavior. We propose a new approach to simulating opinion dynamics based on populations of Large Language Models (LLMs). Our findings reveal a strong inherent bias in LLM agents towards producing accurate information, leading simulated agents to consensus in line with scientific reality. This bias limits their utility for understanding resistance to consensus views on issues like climate change. After inducing confirmation bias through prompt engineering, however, we observed opinion fragmentation in line with existing agent-based modeling and opinion dynamics research. These insights highlight the promise and limitations of LLM agents in this domain and suggest a path forward: refining LLMs with real-world discourse to better simulate the evolution of human beliefs. | [
"Chuang, Yun-Shiuan",
"Goyal, Agam",
"Harlalka, Nikunj",
"Suresh, Siddharth",
"Hawkins, Robert",
"Yang, Sijia",
"Shah, Dhavan",
"Hu, Junjie",
"Rogers, Timothy"
] | Simulating Opinion Dynamics with Networks of LLM-based Agents | findings-naacl.211 | Poster | 2311.09618 | [
"https://github.com/yunshiuan/llm-agent-opinion-dynamics"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.212.bib | https://aclanthology.org/2024.findings-naacl.212/ | @inproceedings{katinskaia-yangarber-2024-probing,
title = "Probing the Category of Verbal Aspect in Transformer Language Models",
author = "Katinskaia, Anisia and
Yangarber, Roman",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.212",
doi = "10.18653/v1/2024.findings-naacl.212",
pages = "3347--3366",
abstract = "We investigate how pretrained language models (PLM) encode the grammatical category of verbal aspect in Russian. Encoding of aspect in transformer LMs has not been studied previously in any language. A particular challenge is posed by {''}alternative contexts{''}: where either the perfective or the imperfective aspect is suitable grammatically and semantically. We perform probing using BERT and RoBERTa on alternative and non-alternative contexts. First, we assess the models{'} performance on aspect prediction, via behavioral probing. Next, we examine the models{'} performance when their contextual representations are substituted with counterfactual representations, via causal probing. These counterfactuals alter the value of the {``}boundedness{''} feature{---}a semantic feature, which characterizes the action in the context. Experiments show that BERT and RoBERTa do encode aspect{---}mostly in their final layers. The counterfactual interventions affect perfective and imperfective in opposite ways, which is consistent with grammar: perfective is positively affected by adding the meaning of boundedness, and vice versa. The practical implications of our probing results are that fine-tuning only the last layers of BERT on predicting aspect is faster and more effective than fine-tuning the whole model. The model has high predictive uncertainty about aspect in alternative contexts, which tend to lack explicit hints about the boundedness of the described action.",
}
| We investigate how pretrained language models (PLM) encode the grammatical category of verbal aspect in Russian. Encoding of aspect in transformer LMs has not been studied previously in any language. A particular challenge is posed by {``}alternative contexts{''}: where either the perfective or the imperfective aspect is suitable grammatically and semantically. We perform probing using BERT and RoBERTa on alternative and non-alternative contexts. First, we assess the models{'} performance on aspect prediction, via behavioral probing. Next, we examine the models{'} performance when their contextual representations are substituted with counterfactual representations, via causal probing. These counterfactuals alter the value of the {``}boundedness{''} feature{---}a semantic feature, which characterizes the action in the context. Experiments show that BERT and RoBERTa do encode aspect{---}mostly in their final layers. The counterfactual interventions affect perfective and imperfective in opposite ways, which is consistent with grammar: perfective is positively affected by adding the meaning of boundedness, and vice versa. The practical implications of our probing results are that fine-tuning only the last layers of BERT on predicting aspect is faster and more effective than fine-tuning the whole model. The model has high predictive uncertainty about aspect in alternative contexts, which tend to lack explicit hints about the boundedness of the described action. | [
"Katinskaia, Anisia",
"Yangarber, Roman"
] | Probing the Category of Verbal Aspect in Transformer Language Models | findings-naacl.212 | Poster | 2406.02335 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.213.bib | https://aclanthology.org/2024.findings-naacl.213/ | @inproceedings{samardzic-etal-2024-measure,
title = "A Measure for Transparent Comparison of Linguistic Diversity in Multilingual {NLP} Data Sets",
author = "Samardzic, Tanja and
Gutierrez, Ximena and
Bentz, Christian and
Moran, Steven and
Pelloni, Olga",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.213",
doi = "10.18653/v1/2024.findings-naacl.213",
pages = "3367--3382",
abstract = "Typologically diverse benchmarks are increasingly created to track the progress achieved in multilingual NLP. Linguistic diversity of these data sets is typically measured as the number of languages or language families included in the sample, but such measures do not consider structural properties of the included languages. In this paper, we propose assessing linguistic diversity of a data set against a reference language sample as a means of maximising linguistic diversity in the long run. We represent languages as sets of features and apply a version of the Jaccard index suitable for comparing sets of measures. In addition to the features extracted from typological data bases, we propose an automatic text-based measure, which can be used as a means of overcoming the well-known problem of data sparsity in manually collected features. Our diversity score is interpretable in terms of linguistic features and can identify the types of languages that are not represented in a data set. Using our method, we analyse a range of popular multilingual data sets (UD, Bible100, mBERT, XTREME, XGLUE, XNLI, XCOPA, TyDiQA, XQuAD). In addition to ranking these data sets, we find, for example, that (poly)synthetic languages are missing in almost all of them.",
}
| Typologically diverse benchmarks are increasingly created to track the progress achieved in multilingual NLP. Linguistic diversity of these data sets is typically measured as the number of languages or language families included in the sample, but such measures do not consider structural properties of the included languages. In this paper, we propose assessing linguistic diversity of a data set against a reference language sample as a means of maximising linguistic diversity in the long run. We represent languages as sets of features and apply a version of the Jaccard index suitable for comparing sets of measures. In addition to the features extracted from typological databases, we propose an automatic text-based measure, which can be used as a means of overcoming the well-known problem of data sparsity in manually collected features. Our diversity score is interpretable in terms of linguistic features and can identify the types of languages that are not represented in a data set. Using our method, we analyse a range of popular multilingual data sets (UD, Bible100, mBERT, XTREME, XGLUE, XNLI, XCOPA, TyDiQA, XQuAD). In addition to ranking these data sets, we find, for example, that (poly)synthetic languages are missing in almost all of them. | [
"Samardzic, Tanja",
"Gutierrez, Ximena",
"Bentz, Christian",
"Moran, Steven",
"Pelloni, Olga"
] | A Measure for Transparent Comparison of Linguistic Diversity in Multilingual NLP Data Sets | findings-naacl.213 | Poster | 2403.03909 | [
"https://github.com/morphdiv/jmm_diversity"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.214.bib | https://aclanthology.org/2024.findings-naacl.214/ | @inproceedings{chen-etal-2024-beyond,
title = "Beyond Read-Only: Crafting a Comprehensive {C}hinese Text-to-{SQL} Dataset for Database Manipulation and Query",
author = "Chen, Xi and
You, Jinguo and
Likun, Likun and
Li, Xiang",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.214",
doi = "10.18653/v1/2024.findings-naacl.214",
pages = "3383--3393",
abstract = "Text-to-SQL aims to convert natural language into structured query language, which is a challenging task. Current research focuses mainly on read operations and ignores other aspects of database operations such as create, update, and delete operations. The benchmark datasets as well as models that have been proposed also fail to cover these operations, limiting the development and practical applications in the field. To bridge this gap, we propose CRUDSQL, a large-scale cross-domain single-table CRUD operations Chinese Text-to-SQL dataset. The dataset contains 10,000 question/SQL pairs involving 625 tables from different domains. To support further research on this dataset, we also propose a baseline method, CRUDParser, which employs a two-phase approach based on BERT and T5 for SQL generation and incorporates two strategies, value matching, and value prompting, for interacting with databases to further improve the performance. The experimental results show that the new operation types bring different challenges for future research, and our approach achieves 67.08{\%} and 83.8{\%} exact set matching accuracy under both read and delete operations in the test set, but only 49.6{\%} and 61.8{\%} under create and update operations. We believe that the proposal of CRUDSQL as well as CRUDParser can provide new directions and possibilities for research and practical applications in the field of Text-to-SQL. The dataset is published at https://github.com/bizard-lab/CRUDSQL.",
}
| Text-to-SQL aims to convert natural language into structured query language, which is a challenging task. Current research focuses mainly on read operations and ignores other aspects of database operations such as create, update, and delete operations. The benchmark datasets as well as models that have been proposed also fail to cover these operations, limiting the development and practical applications in the field. To bridge this gap, we propose CRUDSQL, a large-scale cross-domain single-table CRUD operations Chinese Text-to-SQL dataset. The dataset contains 10,000 question/SQL pairs involving 625 tables from different domains. To support further research on this dataset, we also propose a baseline method, CRUDParser, which employs a two-phase approach based on BERT and T5 for SQL generation and incorporates two strategies, value matching and value prompting, for interacting with databases to further improve the performance. The experimental results show that the new operation types bring different challenges for future research, and our approach achieves 67.08{\%} and 83.8{\%} exact set matching accuracy under read and delete operations, respectively, in the test set, but only 49.6{\%} and 61.8{\%} under create and update operations. We believe that the proposal of CRUDSQL as well as CRUDParser can provide new directions and possibilities for research and practical applications in the field of Text-to-SQL. The dataset is published at https://github.com/bizard-lab/CRUDSQL. | [
"Chen, Xi",
"You, Jinguo",
"Likun, Likun",
"Li, Xiang"
] | Beyond Read-Only: Crafting a Comprehensive Chinese Text-to-SQL Dataset for Database Manipulation and Query | findings-naacl.214 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.215.bib | https://aclanthology.org/2024.findings-naacl.215/ | @inproceedings{rubino-etal-2024-normalizing,
title = "Normalizing without Modernizing: Keeping Historical Wordforms of {M}iddle {F}rench while Reducing Spelling Variants",
author = "Rubino, Raphael and
Gerlach, Johanna and
Mutal, Jonathan and
Bouillon, Pierrette",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.215",
doi = "10.18653/v1/2024.findings-naacl.215",
pages = "3394--3402",
abstract = "Conservation of historical documents benefits from computational methods by alleviating the manual labor related to digitization and modernization of textual content. Languages usually evolve over time and keeping historical wordforms is crucial for diachronic studies and digital humanities. However, spelling conventions did not necessarily exist when texts were originally written and orthographic variations are commonly observed depending on scribes and time periods. In this study, we propose to automatically normalize orthographic wordforms found in historical archives written in Middle French during the 16th century without fully modernizing textual content. We leverage pre-trained models in a low resource setting based on a manually curated parallel corpus and produce additional resources with artificial data generation approaches. Results show that causal language models and knowledge distillation improve over a strong baseline, thus validating the proposed methods.",
}
| Conservation of historical documents benefits from computational methods by alleviating the manual labor related to digitization and modernization of textual content. Languages usually evolve over time and keeping historical wordforms is crucial for diachronic studies and digital humanities. However, spelling conventions did not necessarily exist when texts were originally written and orthographic variations are commonly observed depending on scribes and time periods. In this study, we propose to automatically normalize orthographic wordforms found in historical archives written in Middle French during the 16th century without fully modernizing textual content. We leverage pre-trained models in a low resource setting based on a manually curated parallel corpus and produce additional resources with artificial data generation approaches. Results show that causal language models and knowledge distillation improve over a strong baseline, thus validating the proposed methods. | [
"Rubino, Raphael",
"Gerlach, Johanna",
"Mutal, Jonathan",
"Bouillon, Pierrette"
] | Normalizing without Modernizing: Keeping Historical Wordforms of Middle French while Reducing Spelling Variants | findings-naacl.215 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.216.bib | https://aclanthology.org/2024.findings-naacl.216/ | @inproceedings{sia-etal-2024-anti,
title = "Anti-{LM} Decoding for Zero-shot In-context Machine Translation",
author = "Sia, Suzanna and
DeLucia, Alexandra and
Duh, Kevin",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.216",
doi = "10.18653/v1/2024.findings-naacl.216",
pages = "3403--3420",
abstract = "Zero-shot In-context learning is the phenomenon where models can perform a task given only the instructions. However, pre-trained large language models are known to be poorly calibrated for zero-shot tasks. One of the most effective approaches to handling this bias is to adopt a contrastive decoding objective, which accounts for the prior probability of generating the next token by conditioning on a context. This work introduces an Anti-Language Model objective with a decay factor designed to address the weaknesses of In-context Machine Translation. We conduct our experiments across 3 model types and sizes, 3 language directions, and for both greedy decoding and beam search. The proposed method outperforms other state-of-the-art decoding objectives, with up to 20 BLEU point improvement from the default objective in some settings.",
}
| Zero-shot In-context learning is the phenomenon where models can perform a task given only the instructions. However, pre-trained large language models are known to be poorly calibrated for zero-shot tasks. One of the most effective approaches to handling this bias is to adopt a contrastive decoding objective, which accounts for the prior probability of generating the next token by conditioning on a context. This work introduces an Anti-Language Model objective with a decay factor designed to address the weaknesses of In-context Machine Translation. We conduct our experiments across 3 model types and sizes, 3 language directions, and for both greedy decoding and beam search. The proposed method outperforms other state-of-the-art decoding objectives, with up to 20 BLEU point improvement from the default objective in some settings. | [
"Sia, Suzanna",
"DeLucia, Alex",
"ra",
"Duh, Kevin"
] | Anti-LM Decoding for Zero-shot In-context Machine Translation | findings-naacl.216 | Poster | 2311.08324 | [
"https://github.com/suzyahyah/icl_anti-lm_decoding"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.217.bib | https://aclanthology.org/2024.findings-naacl.217/ | @inproceedings{zhao-etal-2024-defending,
title = "Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning",
author = "Zhao, Shuai and
Gan, Leilei and
Luu, Anh Tuan and
Fu, Jie and
Lyu, Lingjuan and
Jia, Meihuizi and
Wen, Jinming",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.217",
doi = "10.18653/v1/2024.findings-naacl.217",
pages = "3421--3438",
abstract = "Recently, various parameter-efficient fine-tuning (PEFT) strategies for application to language models have been proposed and successfully implemented. However, this raises the question of whether PEFT, which only updates a limited set of model parameters, constitutes security vulnerabilities when confronted with weight-poisoning backdoor attacks. In this study, we show that PEFT is more susceptible to weight-poisoning backdoor attacks compared to the full-parameter fine-tuning method, with pre-defined triggers remaining exploitable and pre-defined targets maintaining high confidence, even after fine-tuning. Motivated by this insight, we developed a Poisoned Sample Identification Module (PSIM) leveraging PEFT, which identifies poisoned samples through confidence, providing robust defense against weight-poisoning backdoor attacks. Specifically, we leverage PEFT to train the PSIM with randomly reset sample labels. During the inference process, extreme confidence serves as an indicator for poisoned samples, while others are clean. We conduct experiments on text classification tasks, five fine-tuning strategies, and three weight-poisoning backdoor attack methods. Experiments show near 100{\%} success rates for weight-poisoning backdoor attacks when utilizing PEFT. Furthermore, our defensive approach exhibits overall competitive performance in mitigating weight-poisoning backdoor attacks.",
}
| Recently, various parameter-efficient fine-tuning (PEFT) strategies for application to language models have been proposed and successfully implemented. However, this raises the question of whether PEFT, which only updates a limited set of model parameters, introduces security vulnerabilities when confronted with weight-poisoning backdoor attacks. In this study, we show that PEFT is more susceptible to weight-poisoning backdoor attacks compared to the full-parameter fine-tuning method, with pre-defined triggers remaining exploitable and pre-defined targets maintaining high confidence, even after fine-tuning. Motivated by this insight, we developed a Poisoned Sample Identification Module (PSIM) leveraging PEFT, which identifies poisoned samples through confidence, providing robust defense against weight-poisoning backdoor attacks. Specifically, we leverage PEFT to train the PSIM with randomly reset sample labels. During the inference process, extreme confidence serves as an indicator for poisoned samples, while others are clean. We conduct experiments on text classification tasks, five fine-tuning strategies, and three weight-poisoning backdoor attack methods. Experiments show near 100{\%} success rates for weight-poisoning backdoor attacks when utilizing PEFT. Furthermore, our defensive approach exhibits overall competitive performance in mitigating weight-poisoning backdoor attacks. | [
"Zhao, Shuai",
"Gan, Leilei",
"Luu, Anh Tuan",
"Fu, Jie",
"Lyu, Lingjuan",
"Jia, Meihuizi",
"Wen, Jinming"
] | Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning | findings-naacl.217 | Poster | 2402.12168 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.218.bib | https://aclanthology.org/2024.findings-naacl.218/ | @inproceedings{saxena-keller-2024-select,
title = "Select and Summarize: Scene Saliency for Movie Script Summarization",
author = "Saxena, Rohit and
Keller, Frank",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.218",
doi = "10.18653/v1/2024.findings-naacl.218",
pages = "3439--3455",
abstract = "Abstractive summarization for long-form narrative texts such as movie scripts is challenging due to the computational and memory constraints of current language models. A movie script typically comprises a large number of scenes; however, only a fraction of these scenes are salient, i.e., important for understanding the overall narrative. The salience of a scene can be operationalized by considering it as salient if it is mentioned in the summary. Automatically identifying salient scenes is difficult due to the lack of suitable datasets. In this work, we introduce a scene saliency dataset that consists of human-annotated salient scenes for 100 movies. We propose a two-stage abstractive summarization approach which first identifies the salient scenes in script and then generates a summary using only those scenes. Using QA-based evaluation, we show that our model outperforms previous state-of-the-art summarization methods and reflects the information content of a movie more accurately than a model that takes the whole movie script as input.",
}
| Abstractive summarization for long-form narrative texts such as movie scripts is challenging due to the computational and memory constraints of current language models. A movie script typically comprises a large number of scenes; however, only a fraction of these scenes are salient, i.e., important for understanding the overall narrative. The salience of a scene can be operationalized by considering it as salient if it is mentioned in the summary. Automatically identifying salient scenes is difficult due to the lack of suitable datasets. In this work, we introduce a scene saliency dataset that consists of human-annotated salient scenes for 100 movies. We propose a two-stage abstractive summarization approach which first identifies the salient scenes in the script and then generates a summary using only those scenes. Using QA-based evaluation, we show that our model outperforms previous state-of-the-art summarization methods and reflects the information content of a movie more accurately than a model that takes the whole movie script as input. | [
"Saxena, Rohit",
"Keller, Frank"
] | Select and Summarize: Scene Saliency for Movie Script Summarization | findings-naacl.218 | Poster | 2404.03561 | [
"https://github.com/saxenarohit/select_summ"
] | https://huggingface.co/papers/2404.03561 | 1 | 1 | 0 | 2 | 1 | [] | [
"rohitsaxena/MENSA"
] | [] |
https://aclanthology.org/2024.findings-naacl.219.bib | https://aclanthology.org/2024.findings-naacl.219/ | @inproceedings{yu-etal-2024-dont,
title = "Don{'}t be a Fool: Pooling Strategies in Offensive Language Detection from User-Intended Adversarial Attacks",
author = "Yu, Seunguk and
Choi, Juhwan and
Kim, YoungBin",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.219",
doi = "10.18653/v1/2024.findings-naacl.219",
pages = "3456--3467",
abstract = "Offensive language detection is an important task for filtering out abusive expressions and improving online user experiences. However, malicious users often attempt to avoid filtering systems through the involvement of textual noises. In this paper, we propose these evasions as user-intended adversarial attacks that insert special symbols or leverage the distinctive features of the Korean language. Furthermore, we introduce simple yet effective pooling strategies in a layer-wise manner to defend against the proposed attacks, focusing on the preceding layers not just the last layer to capture both offensiveness and token embeddings. We demonstrate that these pooling strategies are more robust to performance degradation even when the attack rate is increased, without directly training of such patterns. Notably, we found that models pre-trained on clean texts could achieve a comparable performance in detecting attacked offensive language, to models pre-trained on noisy texts by employing these pooling strategies.",
}
| Offensive language detection is an important task for filtering out abusive expressions and improving online user experiences. However, malicious users often attempt to avoid filtering systems by introducing textual noise. In this paper, we characterize these evasions as user-intended adversarial attacks that insert special symbols or leverage the distinctive features of the Korean language. Furthermore, we introduce simple yet effective pooling strategies in a layer-wise manner to defend against the proposed attacks, focusing on the preceding layers, not just the last layer, to capture both offensiveness and token embeddings. We demonstrate that these pooling strategies are more robust to performance degradation even when the attack rate is increased, without direct training on such patterns. Notably, we found that models pre-trained on clean texts could achieve performance comparable to models pre-trained on noisy texts in detecting attacked offensive language by employing these pooling strategies. | [
"Yu, Seunguk",
"Choi, Juhwan",
"Kim, YoungBin"
] | Don't be a Fool: Pooling Strategies in Offensive Language Detection from User-Intended Adversarial Attacks | findings-naacl.219 | Poster | 2403.15467 | [
""
] | https://huggingface.co/papers/2403.15467 | 2 | 0 | 0 | 3 | 1 | [] | [] | [] |
https://aclanthology.org/2024.findings-naacl.220.bib | https://aclanthology.org/2024.findings-naacl.220/ | @inproceedings{tran-etal-2024-z,
title = "{Z}-{GMOT}: Zero-shot Generic Multiple Object Tracking",
author = "Tran, Kim and
Le Dinh, Anh Duy and
Nguyen, Tien-Phat and
Phan, Thinh and
Nguyen, Pha and
Luu, Khoa and
Adjeroh, Donald and
Doretto, Gianfranco and
Le, Ngan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.220",
doi = "10.18653/v1/2024.findings-naacl.220",
pages = "3468--3479",
abstract = "Despite recent significant progress, Multi-Object Tracking (MOT) faces limitations such as reliance on prior knowledge and predefined categories and struggles with unseen objects. To address these issues, Generic Multiple Object Tracking (GMOT) has emerged as an alternative approach, requiring less prior information. However, current GMOT methods often rely on initial bounding boxes and struggle to handle variations in factors such as viewpoint, lighting, occlusion, and scale, among others. Our contributions commence with the introduction of the Referring GMOT dataset a collection of videos, each accompanied by detailed textual descriptions of their attributes. Subsequently, we propose Z-GMOT, a cutting-edge tracking solution capable of tracking objects from never-seen categories without the need of initial bounding boxes or predefined categories. Within our Z-GMOT framework, we introduce two novel components: (i) iGLIP, an improved Grounded language-image pretraining, for accurately detecting unseen objects with specific characteristics. (ii) MA-SORT, a novel object association approach that adeptly integrates motion and appearance-based matching strategies to tackle the complex task of tracking objects with high similarity. Our contributions are benchmarked through extensive experiments conducted on the Referring GMOT dataset for GMOT task. Additionally, to assess the generalizability of the proposed Z-GMOT, we conduct ablation studies on the DanceTrack and MOT20 datasets for the MOT task. Our dataset, code, and models are released at: https://fsoft-aic.github.io/Z-GMOT",
}
| Despite recent significant progress, Multi-Object Tracking (MOT) faces limitations such as reliance on prior knowledge and predefined categories and struggles with unseen objects. To address these issues, Generic Multiple Object Tracking (GMOT) has emerged as an alternative approach, requiring less prior information. However, current GMOT methods often rely on initial bounding boxes and struggle to handle variations in factors such as viewpoint, lighting, occlusion, and scale, among others. Our contributions commence with the introduction of the Referring GMOT dataset, a collection of videos, each accompanied by detailed textual descriptions of their attributes. Subsequently, we propose Z-GMOT, a cutting-edge tracking solution capable of tracking objects from never-seen categories without the need for initial bounding boxes or predefined categories. Within our Z-GMOT framework, we introduce two novel components: (i) iGLIP, an improved Grounded language-image pretraining, for accurately detecting unseen objects with specific characteristics. (ii) MA-SORT, a novel object association approach that adeptly integrates motion and appearance-based matching strategies to tackle the complex task of tracking objects with high similarity. Our contributions are benchmarked through extensive experiments conducted on the Referring GMOT dataset for the GMOT task. Additionally, to assess the generalizability of the proposed Z-GMOT, we conduct ablation studies on the DanceTrack and MOT20 datasets for the MOT task. Our dataset, code, and models are released at: https://fsoft-aic.github.io/Z-GMOT | [
"Tran, Kim",
"Le Dinh, Anh Duy",
"Nguyen, Tien-Phat",
"Phan, Thinh",
"Nguyen, Pha",
"Luu, Khoa",
"Adjeroh, Donald",
"Doretto, Gianfranco",
"Le, Ngan"
] | Z-GMOT: Zero-shot Generic Multiple Object Tracking | findings-naacl.220 | Poster | 2305.17648 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.221.bib | https://aclanthology.org/2024.findings-naacl.221/ | @inproceedings{bonaldi-etal-2024-nlp,
title = "{NLP} for Counterspeech against Hate: A Survey and How-To Guide",
author = "Bonaldi, Helena and
Chung, Yi-Ling and
Abercrombie, Gavin and
Guerini, Marco",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.221",
doi = "10.18653/v1/2024.findings-naacl.221",
pages = "3480--3499",
abstract = "In recent years, counterspeech has emerged as one of the most promising strategies to fight online hate. These non-escalatory responses tackle online abuse while preserving the freedom of speech of the users, and can have a tangible impact in reducing online and offline violence. Recently, there has been growing interest from the Natural Language Processing (NLP) community in addressing the challenges of analysing, collecting, classifying, and automatically generating counterspeech, to reduce the huge burden of manually producing it. In particular, researchers have taken different directions in addressing these challenges, thus providing a variety of related tasks and resources. In this paper, we provide a guide for doing research on counterspeech, by describing - with detailed examples - the steps to undertake, and providing best practices that can be learnt from the NLP studies on this topic. Finally, we discuss open challenges and future directions of counterspeech research in NLP.",
}
| In recent years, counterspeech has emerged as one of the most promising strategies to fight online hate. These non-escalatory responses tackle online abuse while preserving the freedom of speech of the users, and can have a tangible impact in reducing online and offline violence. Recently, there has been growing interest from the Natural Language Processing (NLP) community in addressing the challenges of analysing, collecting, classifying, and automatically generating counterspeech, to reduce the huge burden of manually producing it. In particular, researchers have taken different directions in addressing these challenges, thus providing a variety of related tasks and resources. In this paper, we provide a guide for doing research on counterspeech, by describing - with detailed examples - the steps to undertake, and providing best practices that can be learnt from the NLP studies on this topic. Finally, we discuss open challenges and future directions of counterspeech research in NLP. | [
"Bonaldi, Helena",
"Chung, Yi-Ling",
"Abercrombie, Gavin",
"Guerini, Marco"
] | NLP for Counterspeech against Hate: A Survey and How-To Guide | findings-naacl.221 | Poster | 2403.20103 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.222.bib | https://aclanthology.org/2024.findings-naacl.222/ | @inproceedings{occhipinti-etal-2024-prodigy,
title = "{PRODIG}y: a {PRO}file-based {DI}alogue Generation dataset",
author = "Occhipinti, Daniela and
Tekiroglu, Serra and
Guerini, Marco",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.222",
doi = "10.18653/v1/2024.findings-naacl.222",
pages = "3500--3514",
abstract = "Providing dialogue agents with a profile representation can improve their consistency and coherence, leading to better conversations. However, current profile-based dialogue datasets for training such agents contain either explicit profile representations that are simple and dialogue-specific, or implicit representations that are difficult to collect. In this work, we introduce the PRODIGy (PROfile-based DIalogue Generation) dataset, which brings diverse representations together, providing a more comprehensive profile dimension set for each speaker. This resource comprises more than 20k dialogues, sourced from movie scripts, aligned with speaker representations such as communication style, biography, personality and gender. Initial experiments with diverse baselines show that providing generative language models with these aspects of a profile, both separately and jointly, enhances models{'} performance. This improvement holds true in both in-domain and cross-domain settings, for both fine-tuned and instruction-based LLMs.",
}
| Providing dialogue agents with a profile representation can improve their consistency and coherence, leading to better conversations. However, current profile-based dialogue datasets for training such agents contain either explicit profile representations that are simple and dialogue-specific, or implicit representations that are difficult to collect. In this work, we introduce the PRODIGy (PROfile-based DIalogue Generation) dataset, which brings diverse representations together, providing a more comprehensive profile dimension set for each speaker. This resource comprises more than 20k dialogues, sourced from movie scripts, aligned with speaker representations such as communication style, biography, personality and gender. Initial experiments with diverse baselines show that providing generative language models with these aspects of a profile, both separately and jointly, enhances models{'} performance. This improvement holds true in both in-domain and cross-domain settings, for both fine-tuned and instruction-based LLMs. | [
"Occhipinti, Daniela",
"Tekiroglu, Serra",
"Guerini, Marco"
] | PRODIGy: a PROfile-based DIalogue Generation dataset | findings-naacl.222 | Poster | 2311.05195 | [
"https://github.com/land-fbk/prodigy-dataset"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.223.bib | https://aclanthology.org/2024.findings-naacl.223/ | @inproceedings{molenda-etal-2024-waterjudge,
title = "{W}ater{J}udge: Quality-Detection Trade-off when Watermarking Large Language Models",
author = "Molenda, Piotr and
Liusie, Adian and
Gales, Mark",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.223",
doi = "10.18653/v1/2024.findings-naacl.223",
pages = "3515--3525",
abstract = "Watermarking generative-AI systems, such as LLMs, has gained considerable interest, driven by their enhanced capabilities across a wide range of tasks. Although current approaches have demonstrated that small, context-dependent shifts in the word distributions can be used to apply and detect watermarks, there has been little work in analyzing the impact that these perturbations have on the quality of generated texts. Balancing high detectability with minimal performance degradation is crucial in terms of selecting the appropriate watermarking setting; therefore this paper proposes a simple analysis framework where comparative assessment, a flexible NLG evaluation framework, is used to assess the quality degradation caused by a particular watermark setting. We demonstrate that our framework provides easy visualization of the quality-detection trade-off of watermark settings, enabling a simple solution to find an LLM watermark operating point that provides a well-balanced performance. This approach is applied to two different summarization systems and a translation system, enabling cross-model analysis for a task, and cross-task analysis.",
}
| Watermarking generative-AI systems, such as LLMs, has gained considerable interest, driven by their enhanced capabilities across a wide range of tasks. Although current approaches have demonstrated that small, context-dependent shifts in the word distributions can be used to apply and detect watermarks, there has been little work in analyzing the impact that these perturbations have on the quality of generated texts. Balancing high detectability with minimal performance degradation is crucial in terms of selecting the appropriate watermarking setting; therefore this paper proposes a simple analysis framework where comparative assessment, a flexible NLG evaluation framework, is used to assess the quality degradation caused by a particular watermark setting. We demonstrate that our framework provides easy visualization of the quality-detection trade-off of watermark settings, enabling a simple solution to find an LLM watermark operating point that provides a well-balanced performance. This approach is applied to two different summarization systems and a translation system, enabling cross-model analysis for a task, and cross-task analysis. | [
"Molenda, Piotr",
"Liusie, Adian",
"Gales, Mark"
] | WaterJudge: Quality-Detection Trade-off when Watermarking Large Language Models | findings-naacl.223 | Poster | 2403.19548 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.224.bib | https://aclanthology.org/2024.findings-naacl.224/ | @inproceedings{xu-etal-2024-cognitive,
title = "Cognitive Overload: Jailbreaking Large Language Models with Overloaded Logical Thinking",
author = "Xu, Nan and
Wang, Fei and
Zhou, Ben and
Li, Bangzheng and
Xiao, Chaowei and
Chen, Muhao",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.224",
doi = "10.18653/v1/2024.findings-naacl.224",
pages = "3526--3548",
abstract = "While large language models (LLMs) have demonstrated increasing power, they have also called upon studies on their vulnerabilities. As representatives, jailbreak attacks can provoke harmful or unethical responses from LLMs, even after safety alignment. In this paper, we investigate a novel category of jailbreak attacks specifically designed to target the cognitive structure and processes of LLMs. Specifically, we analyze the safety vulnerability of LLMs in the face of 1) multilingual cognitive overload, 2) veiled expression, and 3) effect-to- cause reasoning. Different from previous jailbreak attacks, our proposed cognitive overload is a black-box attack with no need for knowledge of model architecture or access to model weights. Experiments conducted on AdvBench and MasterKey reveal that various LLMs, including both popular open-source model Llama 2 and the proprietary model ChatGPT, can be compromised through cognitive overload. Motivated by cognitive psychology work on managing cognitive load, we further investigate defending cognitive overload attack from two perspectives. Empirical studies show that our cognitive overload from three perspectives can jailbreak all studied LLMs successfully, while existing defense strategies can hardly mitigate the caused malicious uses effectively.",
}
| While large language models (LLMs) have demonstrated increasing power, they have also called upon studies on their vulnerabilities. As representatives, jailbreak attacks can provoke harmful or unethical responses from LLMs, even after safety alignment. In this paper, we investigate a novel category of jailbreak attacks specifically designed to target the cognitive structure and processes of LLMs. Specifically, we analyze the safety vulnerability of LLMs in the face of 1) multilingual cognitive overload, 2) veiled expression, and 3) effect-to-cause reasoning. Different from previous jailbreak attacks, our proposed cognitive overload is a black-box attack with no need for knowledge of model architecture or access to model weights. Experiments conducted on AdvBench and MasterKey reveal that various LLMs, including both popular open-source model Llama 2 and the proprietary model ChatGPT, can be compromised through cognitive overload. Motivated by cognitive psychology work on managing cognitive load, we further investigate defending cognitive overload attack from two perspectives. Empirical studies show that our cognitive overload from three perspectives can jailbreak all studied LLMs successfully, while existing defense strategies can hardly mitigate the caused malicious uses effectively. | [
"Xu, Nan",
"Wang, Fei",
"Zhou, Ben",
"Li, Bangzheng",
"Xiao, Chaowei",
"Chen, Muhao"
] | Cognitive Overload: Jailbreaking Large Language Models with Overloaded Logical Thinking | findings-naacl.224 | Poster | 2311.09827 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.225.bib | https://aclanthology.org/2024.findings-naacl.225/ | @inproceedings{ramos-etal-2024-paella,
title = "{PAELLA}: Parameter-Efficient Lightweight Language-Agnostic Captioning Model",
author = "Ramos, Rita and
Bugliarello, Emanuele and
Martins, Bruno and
Elliott, Desmond",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.225",
doi = "10.18653/v1/2024.findings-naacl.225",
pages = "3549--3564",
abstract = "We introduce PAELLA, a Parameter-Efficient Lightweight Language-Agnostic image captioning model designed to be both parameter and data-efficient using retrieval augmentation. The model is trained by learning a small mapping network with 34M parameters between a pre-trained visual model and a multilingual language model that is conditioned on two types of input: (i) the image itself, and (ii) a set of retrieved captions in the target language. The retrieved examples play a key role in guiding the model to generate captions across languages. Through retrieval, the model can be lightweight in terms of the number of trainable parameters, which only exist in its mapping network, and also in the amount of multilingual training data that is required. Experiments on the XM3600 dataset, featuring 36 languages, show that PAELLA can outperform or compete against some models with 3{--}77$\times$ more learned parameters and 35{--}863$\times$ more data, particularly in low-resource languages. We also find that PAELLA can be trained on only monolingual data and still show strong zero-shot abilities in other languages.",
}
| We introduce PAELLA, a Parameter-Efficient Lightweight Language-Agnostic image captioning model designed to be both parameter and data-efficient using retrieval augmentation. The model is trained by learning a small mapping network with 34M parameters between a pre-trained visual model and a multilingual language model that is conditioned on two types of input: (i) the image itself, and (ii) a set of retrieved captions in the target language. The retrieved examples play a key role in guiding the model to generate captions across languages. Through retrieval, the model can be lightweight in terms of the number of trainable parameters, which only exist in its mapping network, and also in the amount of multilingual training data that is required. Experiments on the XM3600 dataset, featuring 36 languages, show that PAELLA can outperform or compete against some models with 3{--}77$\times$ more learned parameters and 35{--}863$\times$ more data, particularly in low-resource languages. We also find that PAELLA can be trained on only monolingual data and still show strong zero-shot abilities in other languages. | [
"Ramos, Rita",
"Bugliarello, Emanuele",
"Martins, Bruno",
"Elliott, Desmond"
] | PAELLA: Parameter-Efficient Lightweight Language-Agnostic Captioning Model | findings-naacl.225 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.226.bib | https://aclanthology.org/2024.findings-naacl.226/ | @inproceedings{nguyen-etal-2024-oscar,
title = "{OSC}a{R}: Object State Captioning and State Change Representation",
author = "Nguyen, Nguyen and
Bi, Jing and
Vosoughi, Ali and
Tian, Yapeng and
Fazli, Pooyan and
Xu, Chenliang",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.226",
doi = "10.18653/v1/2024.findings-naacl.226",
pages = "3565--3576",
abstract = "The capability of intelligent models to extrapolate and comprehend changes in object states is a crucial yet demanding aspect of AI research, particularly through the lens of human interaction in real-world settings. This task involves describing complex visual environments, identifying active objects, and interpreting their changes as conveyed through language. Traditional methods, which isolate object captioning and state change detection, offer a limited view of dynamic environments. Moreover, relying on a small set of symbolic words to represent changes has restricted the expressiveness of language. To address these challenges, in this paper, we introduce the Object State Captioning and State Change Representation (OSCaR) dataset and benchmark. OSCaR consists of 14,084 annotated video segments with nearly 1,000 unique objects from various egocentric video collections. It sets a new testbed for evaluating Multimodal Large Language Models (MLLMs). Our experiments demonstrate that while MLLMs show some skill, they lack a full understanding of object state changes. The benchmark includes a fine-tuned model that, despite initial capabilities, requires significant improvements in accuracy and generalization ability for effective understanding of these changes. Our code and dataset are available at https://github.com/nguyennm1024/OSCaR.",
}
| The capability of intelligent models to extrapolate and comprehend changes in object states is a crucial yet demanding aspect of AI research, particularly through the lens of human interaction in real-world settings. This task involves describing complex visual environments, identifying active objects, and interpreting their changes as conveyed through language. Traditional methods, which isolate object captioning and state change detection, offer a limited view of dynamic environments. Moreover, relying on a small set of symbolic words to represent changes has restricted the expressiveness of language. To address these challenges, in this paper, we introduce the Object State Captioning and State Change Representation (OSCaR) dataset and benchmark. OSCaR consists of 14,084 annotated video segments with nearly 1,000 unique objects from various egocentric video collections. It sets a new testbed for evaluating Multimodal Large Language Models (MLLMs). Our experiments demonstrate that while MLLMs show some skill, they lack a full understanding of object state changes. The benchmark includes a fine-tuned model that, despite initial capabilities, requires significant improvements in accuracy and generalization ability for effective understanding of these changes. Our code and dataset are available at https://github.com/nguyennm1024/OSCaR. | [
"Nguyen, Nguyen",
"Bi, Jing",
"Vosoughi, Ali",
"Tian, Yapeng",
"Fazli, Pooyan",
"Xu, Chenliang"
] | OSCaR: Object State Captioning and State Change Representation | findings-naacl.226 | Poster | 2402.17128 | [
"https://github.com/nguyennm1024/oscar"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.227.bib | https://aclanthology.org/2024.findings-naacl.227/ | @inproceedings{thirukovalluru-etal-2024-sumcse,
title = "{S}um{CSE}: Summary as a transformation for Contrastive Learning",
author = "Thirukovalluru, Raghuveer and
Wang, Xiaolan and
Chen, Jun and
Li, Shuyang and
Lei, Jie and
Jin, Rong and
Dhingra, Bhuwan",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.227",
doi = "10.18653/v1/2024.findings-naacl.227",
pages = "3577--3588",
abstract = "Sentence embedding models are typically trained using contrastive learning (CL), either using human annotations directly or by repurposing other annotated datasets. In this work, we explore the recently introduced paradigm of generating CL data using generative language models (LM). In CL for computer vision (CV), compositional transformations (series of operations applied over an image. e.g. cropping + color distortion) which modify the input/image to retain minimal information were shown to be very effective. We show that composition of a {`}Summary{'} transformation with diverse paraphrasing/contradicting transformations accomplishes the same and works very well in CL for sentence embeddings. Our final generated dataset (using Vicuna-13B) significantly outperforms the previous best unsupervised method (using ChatGPT) by 1.8 points, and SimCSE, a strong supervised baseline by 0.3 points on the semantic text similarity (STS) benchmark.",
}
| Sentence embedding models are typically trained using contrastive learning (CL), either using human annotations directly or by repurposing other annotated datasets. In this work, we explore the recently introduced paradigm of generating CL data using generative language models (LM). In CL for computer vision (CV), compositional transformations (a series of operations applied over an image, e.g., cropping + color distortion) which modify the input/image to retain minimal information were shown to be very effective. We show that composition of a {`}Summary{'} transformation with diverse paraphrasing/contradicting transformations accomplishes the same and works very well in CL for sentence embeddings. Our final generated dataset (using Vicuna-13B) significantly outperforms the previous best unsupervised method (using ChatGPT) by 1.8 points, and SimCSE, a strong supervised baseline, by 0.3 points on the semantic text similarity (STS) benchmark. | [
"Thirukovalluru, Raghuveer",
"Wang, Xiaolan",
"Chen, Jun",
"Li, Shuyang",
"Lei, Jie",
"Jin, Rong",
"Dhingra, Bhuwan"
] | SumCSE: Summary as a transformation for Contrastive Learning | findings-naacl.227 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.228.bib | https://aclanthology.org/2024.findings-naacl.228/ | @inproceedings{guo-etal-2024-curious,
title = "The Curious Decline of Linguistic Diversity: Training Language Models on Synthetic Text",
author = "Guo, Yanzhu and
Shang, Guokan and
Vazirgiannis, Michalis and
Clavel, Chlo{\'e}",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.228",
doi = "10.18653/v1/2024.findings-naacl.228",
pages = "3589--3604",
abstract = "This study investigates the consequences of training language models on synthetic data generated by their predecessors, an increasingly prevalent practice given the prominence of powerful generative models. Diverging from the usual emphasis on performance metrics, we focus on the impact of this training methodology on linguistic diversity, especially when conducted recursively over time. To assess this, we adapt and develop a set of novel metrics targeting lexical, syntactic, and semantic diversity, applying them in recursive finetuning experiments across various natural language generation tasks in English. Our findings reveal a consistent decrease in the diversity of the model outputs through successive iterations, especially remarkable for tasks demanding high levels of creativity. This trend underscores the potential risks of training language models on synthetic text, particularly concerning the preservation of linguistic richness. Our study highlights the need for careful consideration of the long-term effects of such training approaches on the linguistic capabilities of language models.",
}
| This study investigates the consequences of training language models on synthetic data generated by their predecessors, an increasingly prevalent practice given the prominence of powerful generative models. Diverging from the usual emphasis on performance metrics, we focus on the impact of this training methodology on linguistic diversity, especially when conducted recursively over time. To assess this, we adapt and develop a set of novel metrics targeting lexical, syntactic, and semantic diversity, applying them in recursive finetuning experiments across various natural language generation tasks in English. Our findings reveal a consistent decrease in the diversity of the model outputs through successive iterations, especially remarkable for tasks demanding high levels of creativity. This trend underscores the potential risks of training language models on synthetic text, particularly concerning the preservation of linguistic richness. Our study highlights the need for careful consideration of the long-term effects of such training approaches on the linguistic capabilities of language models. | [
"Guo, Yanzhu",
"Shang, Guokan",
"Vazirgiannis, Michalis",
"Clavel, Chlo{\\'e}"
] | The Curious Decline of Linguistic Diversity: Training Language Models on Synthetic Text | findings-naacl.228 | Poster | 2311.09807 | [
""
] | https://huggingface.co/papers/2311.09807 | 0 | 1 | 0 | 4 | 1 | [] | [] | [] |
https://aclanthology.org/2024.findings-naacl.229.bib | https://aclanthology.org/2024.findings-naacl.229/ | @inproceedings{jiang-etal-2024-personallm,
title = "{P}ersona{LLM}: Investigating the Ability of Large Language Models to Express Personality Traits",
author = "Jiang, Hang and
Zhang, Xiajie and
Cao, Xubo and
Breazeal, Cynthia and
Roy, Deb and
Kabbara, Jad",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.229",
doi = "10.18653/v1/2024.findings-naacl.229",
pages = "3605--3627",
abstract = "Despite the many use cases for large language models (LLMs) in creating personalized chatbots, there has been limited research on evaluating the extent to which the behaviors of personalized LLMs accurately and consistently reflect specific personality traits. We consider studying the behavior of LLM-based agents which we refer to as LLM personas and present a case study with GPT-3.5 and GPT-4 to investigate whether LLMs can generate content that aligns with their assigned personality profiles. To this end, we simulate distinct LLM personas based on the Big Five personality model, have them complete the 44-item Big Five Inventory (BFI) personality test and a story writing task, and then assess their essays with automatic and human evaluations. Results show that LLM personas{'} self-reported BFI scores are consistent with their designated personality types, with large effect sizes observed across five traits. Additionally, LLM personas{'} writings have emerging representative linguistic patterns for personality traits when compared with a human writing corpus. Furthermore, human evaluation shows that humans can perceive some personality traits with an accuracy of up to 80{\%}. Interestingly, the accuracy drops significantly when the annotators were informed of AI authorship.",
}
| Despite the many use cases for large language models (LLMs) in creating personalized chatbots, there has been limited research on evaluating the extent to which the behaviors of personalized LLMs accurately and consistently reflect specific personality traits. We consider studying the behavior of LLM-based agents which we refer to as LLM personas and present a case study with GPT-3.5 and GPT-4 to investigate whether LLMs can generate content that aligns with their assigned personality profiles. To this end, we simulate distinct LLM personas based on the Big Five personality model, have them complete the 44-item Big Five Inventory (BFI) personality test and a story writing task, and then assess their essays with automatic and human evaluations. Results show that LLM personas{'} self-reported BFI scores are consistent with their designated personality types, with large effect sizes observed across five traits. Additionally, LLM personas{'} writings have emerging representative linguistic patterns for personality traits when compared with a human writing corpus. Furthermore, human evaluation shows that humans can perceive some personality traits with an accuracy of up to 80{\%}. Interestingly, the accuracy drops significantly when the annotators were informed of AI authorship. | [
"Jiang, Hang",
"Zhang, Xiajie",
"Cao, Xubo",
"Breazeal, Cynthia",
"Roy, Deb",
"Kabbara, Jad"
] | PersonaLLM: Investigating the Ability of Large Language Models to Express Personality Traits | findings-naacl.229 | Poster | 2305.02547 | [
"https://github.com/hjian42/personallm"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.230.bib | https://aclanthology.org/2024.findings-naacl.230/ | @inproceedings{hamad-etal-2024-fire,
title = "{FIRE}: A Dataset for Financial Relation Extraction",
author = "Hamad, Hassan and
Thakur, Abhinav Kumar and
Kolleri, Nijil and
Pulikodan, Sujith and
Chugg, Keith",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.230",
doi = "10.18653/v1/2024.findings-naacl.230",
pages = "3628--3642",
abstract = "This paper introduces FIRE (**FI**nancial **R**elation **E**xtraction), a sentence-level dataset of named entities and relations within the financial sector. Comprising 3,025 instances, the dataset encapsulates 13 named entity types along with 18 relation types. Sourced from public financial reports and financial news articles, FIRE captures a wide array of financial information about a business including, but not limited to, corporate structure, business model, revenue streams, and market activities such as acquisitions. The full dataset was labeled by a single annotator to minimize labeling noise. The labeling time for each sentence was recorded during the labeling process. We show how this feature, along with curriculum learning techniques, can be used to improved a model{'}s performance. The FIRE dataset is designed to serve as a valuable resource for training and evaluating machine learning algorithms in the domain of financial information extraction. The dataset and the code to reproduce our experimental results are available at https://github.com/hmhamad/FIRE. The repository for the labeling tool can be found at https://github.com/abhinav-kumar-thakur/relation-extraction-annotator.",
}
| This paper introduces FIRE (**FI**nancial **R**elation **E**xtraction), a sentence-level dataset of named entities and relations within the financial sector. Comprising 3,025 instances, the dataset encapsulates 13 named entity types along with 18 relation types. Sourced from public financial reports and financial news articles, FIRE captures a wide array of financial information about a business including, but not limited to, corporate structure, business model, revenue streams, and market activities such as acquisitions. The full dataset was labeled by a single annotator to minimize labeling noise. The labeling time for each sentence was recorded during the labeling process. We show how this feature, along with curriculum learning techniques, can be used to improve a model{'}s performance. The FIRE dataset is designed to serve as a valuable resource for training and evaluating machine learning algorithms in the domain of financial information extraction. The dataset and the code to reproduce our experimental results are available at https://github.com/hmhamad/FIRE. The repository for the labeling tool can be found at https://github.com/abhinav-kumar-thakur/relation-extraction-annotator. | [
"Hamad, Hassan",
"Thakur, Abhinav Kumar",
"Kolleri, Nijil",
"Pulikodan, Sujith",
"Chugg, Keith"
] | FIRE: A Dataset for Financial Relation Extraction | findings-naacl.230 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.231.bib | https://aclanthology.org/2024.findings-naacl.231/ | @inproceedings{deng-etal-2024-musilingo,
title = "{M}usi{L}ingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response",
author = "Deng, Zihao and
Ma, Yinghao and
Liu, Yudong and
Guo, Rongchen and
Zhang, Ge and
Chen, Wenhu and
Huang, Wenhao and
Benetos, Emmanouil",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.231",
doi = "10.18653/v1/2024.findings-naacl.231",
pages = "3643--3655",
abstract = "Large Language Models (LLMs) have shown immense potential in multimodal applications, yet the convergence of textual and musical domains remains not well-explored. To address this gap, we present MusiLingo, a novel system for music caption generation and music-related query responses. MusiLingo employs a single projection layer to align music representations from the pre-trained frozen music audio model MERT (CITATION) with a frozen LLM, bridging the gap between music audio and textual contexts. We train it on an extensive music caption dataset and fine-tune it with instructional data. Due to the scarcity of high-quality music Q{\&}A datasets, we created the MusicInstruct (MI) dataset from captions in the MusicCaps datasets, tailored for open-ended music inquiries. Empirical evaluations demonstrate its competitive performance in generating music captions and composing music-related Q{\&}A pairs. Our introduced dataset enables notable advancements beyond previous ones.",
}
| Large Language Models (LLMs) have shown immense potential in multimodal applications, yet the convergence of textual and musical domains remains underexplored. To address this gap, we present MusiLingo, a novel system for music caption generation and music-related query responses. MusiLingo employs a single projection layer to align music representations from the pre-trained frozen music audio model MERT (CITATION) with a frozen LLM, bridging the gap between music audio and textual contexts. We train it on an extensive music caption dataset and fine-tune it with instructional data. Due to the scarcity of high-quality music Q{\&}A datasets, we created the MusicInstruct (MI) dataset from captions in the MusicCaps datasets, tailored for open-ended music inquiries. Empirical evaluations demonstrate its competitive performance in generating music captions and composing music-related Q{\&}A pairs. Our introduced dataset enables notable advancements beyond previous ones. | [
"Deng, Zihao",
"Ma, Yinghao",
"Liu, Yudong",
"Guo, Rongchen",
"Zhang, Ge",
"Chen, Wenhu",
"Huang, Wenhao",
"Benetos, Emmanouil"
] | MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response | findings-naacl.231 | Poster | 2309.08730 | [
"https://github.com/zihaod/musilingo"
] | https://huggingface.co/papers/2309.08730 | 2 | 1 | 0 | 8 | 1 | [
"m-a-p/MusiLingo-long-v1",
"m-a-p/MusiLingo-short-v1",
"m-a-p/MusiLingo-musicqa-v1"
] | [
"m-a-p/Music-Instruct"
] | [] |
https://aclanthology.org/2024.findings-naacl.232.bib | https://aclanthology.org/2024.findings-naacl.232/ | @inproceedings{varshney-etal-2024-investigating,
title = "Investigating Acceleration of {LL}a{MA} Inference by Enabling Intermediate Layer Decoding via Instruction Tuning with {`}{LITE}{'}",
author = "Varshney, Neeraj and
Chatterjee, Agneet and
Parmar, Mihir and
Baral, Chitta",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.232",
doi = "10.18653/v1/2024.findings-naacl.232",
pages = "3656--3677",
abstract = "Large Language Models (LLMs) have achieved remarkable performance across a wide variety of tasks; however, their large size makes their inference slow and computationally expensive. Focusing on this problem, we study instruction tuning LLMs with additional explicit Losses from the Intermediate layers (LITE) and show that it enables these layers to acquire {`}good{'} generation ability without affecting the generation ability of the final layer. We then perform {`}dynamic confidence-based early exiting{'} at token level from the intermediate layers which improves the computational efficiency of text generation without sacrificing the quality of the generation. We conduct comprehensive experiments by instruction tuning LLaMA-2 models on the Alpaca dataset and evaluate on four different instruction test sets. We show that dynamic early exiting achieves consistent and considerable inference cost improvements (37.86{\%} for 7B and 46.35{\%} for 13B model) while maintaining the generation quality. We further conduct a thorough analysis of the results and dissect the efficiency improvements which reveals several important findings.",
}
| Large Language Models (LLMs) have achieved remarkable performance across a wide variety of tasks; however, their large size makes their inference slow and computationally expensive. Focusing on this problem, we study instruction tuning LLMs with additional explicit Losses from the Intermediate layers (LITE) and show that it enables these layers to acquire {`}good{'} generation ability without affecting the generation ability of the final layer. We then perform {`}dynamic confidence-based early exiting{'} at token level from the intermediate layers which improves the computational efficiency of text generation without sacrificing the quality of the generation. We conduct comprehensive experiments by instruction tuning LLaMA-2 models on the Alpaca dataset and evaluate on four different instruction test sets. We show that dynamic early exiting achieves consistent and considerable inference cost improvements (37.86{\%} for 7B and 46.35{\%} for 13B model) while maintaining the generation quality. We further conduct a thorough analysis of the results and dissect the efficiency improvements which reveals several important findings. | [
"Varshney, Neeraj",
"Chatterjee, Agneet",
"Parmar, Mihir",
"Baral, Chitta"
] | Investigating Acceleration of LLaMA Inference by Enabling Intermediate Layer Decoding via Instruction Tuning with `LITE' | findings-naacl.232 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.233.bib | https://aclanthology.org/2024.findings-naacl.233/ | @inproceedings{li-etal-2024-instruction,
title = "Instruction-following Evaluation through Verbalizer Manipulation",
author = "Li, Shiyang and
Yan, Jun and
Wang, Hai and
Tang, Zheng and
Ren, Xiang and
Srinivasan, Vijay and
Jin, Hongxia",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.233",
doi = "10.18653/v1/2024.findings-naacl.233",
pages = "3678--3692",
abstract = "While instruction-tuned models have shown remarkable success in various natural language processing tasks, accurately evaluating their ability to follow instructions remains challenging. Existing benchmarks primarily focus on common instructions that align well with what the model learned during training. However, proficiency in responding to these instructions does not necessarily imply strong ability in instruction following. In this paper, we propose a novel instruction-following evaluation protocol called verbalizer manipulation. It instructs the model to verbalize the task label with words aligning with model priors to different extents, adopting verbalizers from highly aligned (e.g., outputting {``}positive{''} for positive sentiment), to minimally aligned (e.g., outputting {``}negative{''} for positive sentiment). Verbalizer manipulation can be seamlessly integrated with any classification benchmark to examine the model{'}s reliance on priors and its ability to override them to accurately follow the instructions. We conduct a comprehensive evaluation of four major model families across nine datasets, employing twelve sets of verbalizers for each of them. We observe that the instruction-following abilities of models, across different families and scales, are significantly distinguished by their performance on less natural verbalizers. Even the strongest GPT-4 model struggles to perform better than random guessing on the most challenging verbalizer, emphasizing the need for continued advancements to improve their instruction-following abilities.",
}
| While instruction-tuned models have shown remarkable success in various natural language processing tasks, accurately evaluating their ability to follow instructions remains challenging. Existing benchmarks primarily focus on common instructions that align well with what the model learned during training. However, proficiency in responding to these instructions does not necessarily imply strong ability in instruction following. In this paper, we propose a novel instruction-following evaluation protocol called verbalizer manipulation. It instructs the model to verbalize the task label with words aligning with model priors to different extents, adopting verbalizers from highly aligned (e.g., outputting {``}positive{''} for positive sentiment), to minimally aligned (e.g., outputting {``}negative{''} for positive sentiment). Verbalizer manipulation can be seamlessly integrated with any classification benchmark to examine the model{'}s reliance on priors and its ability to override them to accurately follow the instructions. We conduct a comprehensive evaluation of four major model families across nine datasets, employing twelve sets of verbalizers for each of them. We observe that the instruction-following abilities of models, across different families and scales, are significantly distinguished by their performance on less natural verbalizers. Even the strongest GPT-4 model struggles to perform better than random guessing on the most challenging verbalizer, emphasizing the need for continued advancements to improve their instruction-following abilities. | [
"Li, Shiyang",
"Yan, Jun",
"Wang, Hai",
"Tang, Zheng",
"Ren, Xiang",
"Srinivasan, Vijay",
"Jin, Hongxia"
] | Instruction-following Evaluation through Verbalizer Manipulation | findings-naacl.233 | Poster | 2307.10558 | [
""
] | https://huggingface.co/papers/2307.10558 | 2 | 3 | 0 | 7 | 1 | [] | [] | [] |
https://aclanthology.org/2024.findings-naacl.234.bib | https://aclanthology.org/2024.findings-naacl.234/ | @inproceedings{tao-etal-2024-webwise,
title = "{W}eb{WISE}: Unlocking Web Interface Control for {LLM}s via Sequential Exploration",
author = "Tao, Heyi and
T V, Sethuraman and
Shlapentokh-Rothman, Michal and
Gupta, Tanmay and
Ji, Heng and
Hoiem, Derek",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.234",
doi = "10.18653/v1/2024.findings-naacl.234",
pages = "3693--3711",
abstract = "This paper investigates using Large Language Models (LLMs) to automatically perform web software tasks using click, scroll, and text in- put operations. Previous approaches, such as reinforcement learning (RL) or imitation learning, are inefficient to train and task-specific. Our method uses filtered Document Object Model (DOM) elements as observations and performs tasks step-by-step, sequentially generating small programs based on the current observations. We use in-context learning, either benefiting from a single manually provided example, or an automatically generated example based on a successful zero-shot trial. We evaluate our proposed method on the MiniWob++ benchmark. With only one in-context example, our WebWISE method using gpt-3.5-turbo achieves similar or better performance than other methods that require many demonstrations or trials.",
}
| This paper investigates using Large Language Models (LLMs) to automatically perform web software tasks using click, scroll, and text input operations. Previous approaches, such as reinforcement learning (RL) or imitation learning, are inefficient to train and task-specific. Our method uses filtered Document Object Model (DOM) elements as observations and performs tasks step-by-step, sequentially generating small programs based on the current observations. We use in-context learning, either benefiting from a single manually provided example, or an automatically generated example based on a successful zero-shot trial. We evaluate our proposed method on the MiniWob++ benchmark. With only one in-context example, our WebWISE method using gpt-3.5-turbo achieves similar or better performance than other methods that require many demonstrations or trials. | [
"Tao, Heyi",
"T V, Sethuraman",
"Shlapentokh-Rothman, Michal",
"Gupta, Tanmay",
"Ji, Heng",
"Hoiem, Derek"
] | WebWISE: Unlocking Web Interface Control for LLMs via Sequential Exploration | findings-naacl.234 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.235.bib | https://aclanthology.org/2024.findings-naacl.235/ | @inproceedings{wang-etal-2024-codeclm,
title = "{C}odec{LM}: Aligning Language Models with Tailored Synthetic Data",
author = "Wang, Zifeng and
Li, Chun-Liang and
Perot, Vincent and
Le, Long and
Miao, Jin and
Zhang, Zizhao and
Lee, Chen-Yu and
Pfister, Tomas",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.235",
doi = "10.18653/v1/2024.findings-naacl.235",
pages = "3712--3729",
abstract = "Instruction tuning has emerged as the key in aligning large language models (LLMs) with specific task instructions, thereby mitigating the discrepancy between the next-token prediction objective and users{'} actual goals. To reduce the labor and time cost to collect or annotate data by humans, researchers start to explore the use of LLMs to generate instruction-aligned synthetic data. Recent works focus on generating diverse instructions and applying LLM to increase instruction complexity, often neglecting downstream use cases. It remains unclear how to tailor high-quality data to elicit better instruction-following abilities in different target instruction distributions and LLMs. To this end, we introduce CodecLM, a general framework for adaptively generating high-quality synthetic data for LLM alignment with different downstream instruction distributions and LLMs. Drawing on the Encode-Decode principles, we use LLMs as codecs to guide the data generation process. We first encode seed instructions into metadata, which are concise keywords generated on-the-fly to capture the target instruction distribution, and then decode metadata to create tailored instructions. We also introduce Self-Rubrics and Contrastive Filtering during decoding to tailor data-efficient samples. Extensive experiments on four open-domain instruction following benchmarks validate the effectiveness of CodecLM over the current state-of-the-arts.",
}
| Instruction tuning has emerged as the key in aligning large language models (LLMs) with specific task instructions, thereby mitigating the discrepancy between the next-token prediction objective and users{'} actual goals. To reduce the labor and time cost to collect or annotate data by humans, researchers have started to explore the use of LLMs to generate instruction-aligned synthetic data. Recent works focus on generating diverse instructions and applying LLMs to increase instruction complexity, often neglecting downstream use cases. It remains unclear how to tailor high-quality data to elicit better instruction-following abilities in different target instruction distributions and LLMs. To this end, we introduce CodecLM, a general framework for adaptively generating high-quality synthetic data for LLM alignment with different downstream instruction distributions and LLMs. Drawing on the Encode-Decode principles, we use LLMs as codecs to guide the data generation process. We first encode seed instructions into metadata, which are concise keywords generated on-the-fly to capture the target instruction distribution, and then decode metadata to create tailored instructions. We also introduce Self-Rubrics and Contrastive Filtering during decoding to tailor data-efficient samples. Extensive experiments on four open-domain instruction following benchmarks validate the effectiveness of CodecLM over the current state of the art. | [
"Wang, Zifeng",
"Li, Chun-Liang",
"Perot, Vincent",
"Le, Long",
"Miao, Jin",
"Zhang, Zizhao",
"Lee, Chen-Yu",
"Pfister, Tomas"
] | CodecLM: Aligning Language Models with Tailored Synthetic Data | findings-naacl.235 | Poster | 2404.05875 | [
""
] | https://huggingface.co/papers/2404.05875 | 3 | 16 | 0 | 8 | 1 | [] | [] | [] |
https://aclanthology.org/2024.findings-naacl.236.bib | https://aclanthology.org/2024.findings-naacl.236/ | @inproceedings{lin-etal-2024-prompting,
title = "Prompting Few-shot Multi-hop Question Generation via Comprehending Type-aware Semantics",
author = "Lin, Zefeng and
Chen, Weidong and
Song, Yan and
Zhang, Yongdong",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.236",
doi = "10.18653/v1/2024.findings-naacl.236",
pages = "3730--3740",
abstract = "Given several documents, multi-hop question generation (MQG) is a task aims to generate complicated questions that require reasoning over multiple pieces of these documents to find the answer. To perform this task, existing studies focus on designing advanced architectures to locate essential keywords or sentences in multiple documents and then generate questions accordingly, where they normally do not note that question types could provide crucial hints for extracting key information from the documents for MQG. In general, supervised approaches are used that rely on large annotated data, which is not available in many low-resource scenarios and thus makes MQG hard in these domains. Consider the recent success of large language models (LLMs) on natural language processing tasks using limited labeled data under few-shot settings, in this paper, we propose an approach named type-aware semantics extraction-based chain-of-thought method (TASE-CoT) for few-shot MQG. Specifically, our approach firstly extracts question types and essential semantic phrases from the given documents and the answer. Then, we design a three-step CoT template to leverage the extracted question type and semantic phrases to predict multi-hop questions. Extensive experiments and the results demonstrate the effectiveness of our approach and the proposed modules.",
}
| Given several documents, multi-hop question generation (MQG) is a task that aims to generate complicated questions that require reasoning over multiple pieces of these documents to find the answer. To perform this task, existing studies focus on designing advanced architectures to locate essential keywords or sentences in multiple documents and then generate questions accordingly, where they normally do not note that question types could provide crucial hints for extracting key information from the documents for MQG. In general, supervised approaches are used that rely on large annotated data, which is not available in many low-resource scenarios and thus makes MQG hard in these domains. Considering the recent success of large language models (LLMs) on natural language processing tasks using limited labeled data under few-shot settings, in this paper, we propose an approach named type-aware semantics extraction-based chain-of-thought method (TASE-CoT) for few-shot MQG. Specifically, our approach firstly extracts question types and essential semantic phrases from the given documents and the answer. Then, we design a three-step CoT template to leverage the extracted question type and semantic phrases to predict multi-hop questions. Extensive experiments and the results demonstrate the effectiveness of our approach and the proposed modules. | [
"Lin, Zefeng",
"Chen, Weidong",
"Song, Yan",
"Zhang, Yongdong"
] | Prompting Few-shot Multi-hop Question Generation via Comprehending Type-aware Semantics | findings-naacl.236 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2024.findings-naacl.237.bib | https://aclanthology.org/2024.findings-naacl.237/ | @inproceedings{li-etal-2024-hindsight,
title = "When Hindsight is Not 20/20: Testing Limits on Reflective Thinking in Large Language Models",
author = "Li, Yanhong and
Yang, Chenghao and
Ettinger, Allyson",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.237",
doi = "10.18653/v1/2024.findings-naacl.237",
pages = "3741--3753",
abstract = "Recent studies suggest that self-reflective prompting can significantly enhance the reasoning capabilities of Large Language Models (LLMs). However, the use of external feedback as a stop criterion raises doubts about the true extent of LLMs{'} ability to emulate human-like self-reflection. In this paper, we set out to clarify these capabilities under a more stringent evaluation setting in which we disallow any kind of external feedback. Our findings under this setting show a split: while self-reflection enhances performance in TruthfulQA, it adversely affects results in HotpotQA.We conduct follow-up analyses to clarify the contributing factors in these patterns, and find that the influence of self-reflection is impacted both by reliability of accuracy in models{'} initial responses, and by overall question difficulty: specifically, self-reflection shows the most benefit when models are less likely to be correct initially, and when overall question difficulty is higher. We also find that self-reflection reduces tendency toward majority voting. Based on our findings, we propose guidelines for decisions on when to implement self-reflection. We release the codebase for reproducing our experiments at https://github.com/yanhong-lbh/LLM-SelfReflection-Eval.",
}
| Recent studies suggest that self-reflective prompting can significantly enhance the reasoning capabilities of Large Language Models (LLMs). However, the use of external feedback as a stop criterion raises doubts about the true extent of LLMs{'} ability to emulate human-like self-reflection. In this paper, we set out to clarify these capabilities under a more stringent evaluation setting in which we disallow any kind of external feedback. Our findings under this setting show a split: while self-reflection enhances performance in TruthfulQA, it adversely affects results in HotpotQA. We conduct follow-up analyses to clarify the contributing factors in these patterns, and find that the influence of self-reflection is impacted both by reliability of accuracy in models{'} initial responses, and by overall question difficulty: specifically, self-reflection shows the most benefit when models are less likely to be correct initially, and when overall question difficulty is higher. We also find that self-reflection reduces tendency toward majority voting. Based on our findings, we propose guidelines for decisions on when to implement self-reflection. We release the codebase for reproducing our experiments at https://github.com/yanhong-lbh/LLM-SelfReflection-Eval. | [
"Li, Yanhong",
"Yang, Chenghao",
"Ettinger, Allyson"
] | When Hindsight is Not 20/20: Testing Limits on Reflective Thinking in Large Language Models | findings-naacl.237 | Poster | 2404.09129 | [
"https://github.com/yanhong-lbh/llm-selfreflection-eval"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2024.findings-naacl.238.bib | https://aclanthology.org/2024.findings-naacl.238/ | @inproceedings{evuru-etal-2024-coda,
title = "{C}o{D}a: Constrained Generation based Data Augmentation for Low-Resource {NLP}",
author = "Evuru, Chandra Kiran and
Ghosh, Sreyan and
Kumar, Sonal and
S, Ramaneswaran and
Tyagi, Utkarsh and
Manocha, Dinesh",
editor = "Duh, Kevin and
Gomez, Helena and
Bethard, Steven",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-naacl.238",
doi = "10.18653/v1/2024.findings-naacl.238",
pages = "3754--3769",
abstract = "We present CoDa (**Co**nstrained Generation based **Da**ta Augmentation), a controllable, effective, and *training-free* data augmentation technique for low-resource (data-scarce) NLP. Our approach is based on prompting off-the-shelf instruction-following Large Language Models (LLMs) for generating text that satisfies a set of constraints. Precisely, we extract a set of simple constraints from every instance in the low-resource dataset and verbalize them to prompt an LLM to generate novel and diverse training instances. Our findings reveal that synthetic data that follows simple constraints in the downstream dataset act as highly effective augmentations, and CoDa can achieve this without intricate decoding-time constrained generation techniques or fine-tuning with complex algorithms that eventually make the model biased toward the small number of training instances. Additionally, CoDa is the first framework that provides users explicit control over the augmentation generation process, thereby also allowing easy adaptation to several domains. We demonstrate the effectiveness of CoDa across 11 datasets spanning 3 tasks and 3 low-resource settings. CoDa outperforms all our baselines, qualitatively and quantitatively, with improvements of 0.12{\%}-7.19{\%}. Code is available.",
}
| We present CoDa (**Co**nstrained Generation based **Da**ta Augmentation), a controllable, effective, and *training-free* data augmentation technique for low-resource (data-scarce) NLP. Our approach is based on prompting off-the-shelf instruction-following Large Language Models (LLMs) for generating text that satisfies a set of constraints. Precisely, we extract a set of simple constraints from every instance in the low-resource dataset and verbalize them to prompt an LLM to generate novel and diverse training instances. Our findings reveal that synthetic data that follows simple constraints in the downstream dataset acts as highly effective augmentations, and CoDa can achieve this without intricate decoding-time constrained generation techniques or fine-tuning with complex algorithms that eventually make the model biased toward the small number of training instances. Additionally, CoDa is the first framework that provides users explicit control over the augmentation generation process, thereby also allowing easy adaptation to several domains. We demonstrate the effectiveness of CoDa across 11 datasets spanning 3 tasks and 3 low-resource settings. CoDa outperforms all our baselines, qualitatively and quantitatively, with improvements of 0.12{\%}-7.19{\%}. Code is available. | [
"Evuru, Ch",
"ra Kiran",
"Ghosh, Sreyan",
"Kumar, Sonal",
"S, Ramaneswaran",
"Tyagi, Utkarsh",
"Manocha, Dinesh"
] | CoDa: Constrained Generation based Data Augmentation for Low-Resource NLP | findings-naacl.238 | Poster | 2404.00415 | [
"https://github.com/sreyan88/coda"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |