Datasets:

Column schema (name, dtype, observed range):

bibtex_url                   string     length 41–52
proceedings                  string     length 38–49
bibtext                      string     length 788–3.49k
abstract                     string     length 0–2.12k
authors                      sequence   1–58 items
title                        string     length 16–181
id                           string     length 7–18
type                         string     2 classes
arxiv_id                     string     length 0–10
GitHub                       sequence   1–1 items
paper_page                   string     170 classes
n_linked_authors             int64      -1 to 9
upvotes                      int64      -1 to 56
num_comments                 int64      -1 to 9
n_authors                    int64      -1 to 57
paper_page_exists_pre_conf   int64      0 to 1
Models                       sequence   0–99 items
Datasets                     sequence   0–5 items
Spaces                       sequence   0–57 items
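Each record below carries the fields listed above for one NAACL 2024 long paper; the integer fields (n_linked_authors, upvotes, num_comments, n_authors) appear to use -1 as a sentinel when no Hugging Face paper page is linked. The following is a minimal sketch of how such records could be loaded and summarized, assuming they have been exported to a local JSONL file; the filename "naacl_2024_long.jsonl" is a placeholder, not part of this listing.

```python
import json

# Minimal sketch (assumed local export): one JSON object per line,
# with the field names from the schema above.
records = []
with open("naacl_2024_long.jsonl", encoding="utf-8") as f:
    for line in f:
        records.append(json.loads(line))

# Fields like upvotes / num_comments appear to use -1 when no Hugging Face
# paper page is linked, so skip the sentinel before aggregating.
with_page = [r for r in records if r.get("paper_page")]
orals = [r for r in records if r.get("type") == "Oral"]
total_upvotes = sum(r["upvotes"] for r in with_page if r.get("upvotes", -1) >= 0)

print(f"{len(records)} papers, {len(with_page)} with a paper page, "
      f"{len(orals)} orals, {total_upvotes} upvotes on linked paper pages")
```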
https://aclanthology.org/2024.naacl-long.101.bib
https://aclanthology.org/2024.naacl-long.101/
@inproceedings{fu-etal-2024-tise, title = "{TISE}: A Tripartite In-context Selection Method for Event Argument Extraction", author = "Fu, Yanhe and Cao, Yanan and Wang, Qingyue and Liu, Yi", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.101", doi = "10.18653/v1/2024.naacl-long.101", pages = "1801--1818", abstract = "In-context learning enhances the reasoning capabilities of LLMs by providing several examples. A direct yet effective approach to obtain in-context example is to select the top-k examples based on their semantic similarity to the test input. However, when applied to event argument extraction (EAE), this approach exhibits two shortcomings: 1) It may select almost identical examples, thus failing to provide additional event information, and 2) It overlooks event attributes, leading to the selected examples being unrelated to the test event type. In this paper, we introduce three necessary requirements when selecting an in-context example for EAE task: semantic similarity, example diversity and event correlation. And we further propose TISE, which scores examples from these three perspectives and integrates them using Determinantal Point Processes to directly select a set of examples as context. Experimental results on the ACE05 dataset demonstrate the effectiveness of TISE and the necessity of three requirements. Furthermore, we surprisingly observe that TISE can achieve superior performance with fewer examples and can even exceed some supervised methods.", }
In-context learning enhances the reasoning capabilities of LLMs by providing several examples. A direct yet effective approach to obtain in-context examples is to select the top-k examples based on their semantic similarity to the test input. However, when applied to event argument extraction (EAE), this approach exhibits two shortcomings: 1) It may select almost identical examples, thus failing to provide additional event information, and 2) It overlooks event attributes, leading to the selected examples being unrelated to the test event type. In this paper, we introduce three necessary requirements when selecting an in-context example for the EAE task: semantic similarity, example diversity and event correlation. We further propose TISE, which scores examples from these three perspectives and integrates them using Determinantal Point Processes to directly select a set of examples as context. Experimental results on the ACE05 dataset demonstrate the effectiveness of TISE and the necessity of the three requirements. Furthermore, we surprisingly observe that TISE can achieve superior performance with fewer examples and can even exceed some supervised methods.
[ "Fu, Yanhe", "Cao, Yanan", "Wang, Qingyue", "Liu, Yi" ]
TISE: A Tripartite In-context Selection Method for Event Argument Extraction
naacl-long.101
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.102.bib
https://aclanthology.org/2024.naacl-long.102/
@inproceedings{wu-etal-2024-reasoning, title = "Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks", author = {Wu, Zhaofeng and Qiu, Linlu and Ross, Alexis and Aky{\"u}rek, Ekin and Chen, Boyuan and Wang, Bailin and Kim, Najoung and Andreas, Jacob and Kim, Yoon}, editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.102", doi = "10.18653/v1/2024.naacl-long.102", pages = "1819--1862", abstract = "The impressive performance of recent language models across a wide range of tasks suggests that they possess a degree of abstract reasoning skills. Are these skills general and transferable, or specialized to specific tasks seen during pretraining? To disentangle these effects, we propose an evaluation framework based on {``}counterfactual{''} task variants that deviate from the default assumptions underlying standard tasks. Across a suite of 11 tasks, we observe nontrivial performance on the counterfactual variants, but nevertheless find that performance substantially and consistently degrades compared to the default conditions. This suggests that while current LMs may possess abstract task-solving skills to an extent, they often also rely on narrow, non-transferable procedures for task-solving. These results motivate a more careful interpretation of language model performance that teases apart these aspects.", }
The impressive performance of recent language models across a wide range of tasks suggests that they possess a degree of abstract reasoning skills. Are these skills general and transferable, or specialized to specific tasks seen during pretraining? To disentangle these effects, we propose an evaluation framework based on "counterfactual" task variants that deviate from the default assumptions underlying standard tasks. Across a suite of 11 tasks, we observe nontrivial performance on the counterfactual variants, but nevertheless find that performance substantially and consistently degrades compared to the default conditions. This suggests that while current LMs may possess abstract task-solving skills to an extent, they often also rely on narrow, non-transferable procedures for task-solving. These results motivate a more careful interpretation of language model performance that teases apart these aspects.
[ "Wu, Zhaofeng", "Qiu, Linlu", "Ross, Alexis", "Aky{\\\"u}rek, Ekin", "Chen, Boyuan", "Wang, Bailin", "Kim, Najoung", "Andreas, Jacob", "Kim, Yoon" ]
Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks
naacl-long.102
Oral
2307.02477
[ "https://github.com/zhaofengwu/counterfactual-evaluation" ]
https://huggingface.co/papers/2307.02477
3
0
0
9
1
[]
[ "ZhaofengWu/FOLIO-counterfactual" ]
[]
https://aclanthology.org/2024.naacl-long.103.bib
https://aclanthology.org/2024.naacl-long.103/
@inproceedings{wang-etal-2024-true, title = "{TRUE}-{UIE}: Two Universal Relations Unify Information Extraction Tasks", author = "Wang, Yucheng and Yu, Bowen and Liu, Yilin and Lu, Shudong", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.103", doi = "10.18653/v1/2024.naacl-long.103", pages = "1863--1876", abstract = "Information extraction (IE) encounters challenges due to the variety of schemas and objectives that differ across tasks. Recent advancements hint at the potential for universal approaches to model such tasks, referred to as Universal Information Extraction (UIE). While handling diverse tasks in one model, their generalization is limited since they are actually learning task-specific knowledge.In this study, we introduce an innovative paradigm known as TRUE-UIE, wherein all IE tasks are aligned to learn the same goals: extracting mention spans and two universal relations named $\mathtt{NEXT}$ and $\mathtt{IS}$. During the decoding process, the $\mathtt{NEXT}$ relation is utilized to group related elements, while the $\mathtt{IS}$ relation, in conjunction with structured language prompts, undertakes the role of type recognition. Additionally, we consider the sequential dependency of tokens during span extraction, an aspect often overlooked in prevalent models.Our empirical experiments indicate that TRUE-UIE achieves state-of-the-art performance on established benchmarks encompassing 16 datasets, spanning 7 diverse IE tasks. Further evaluations reveal that our approach effectively share knowledge between different IE tasks, showcasing significant transferability in zero-shot and few-shot scenarios.", }
Information extraction (IE) encounters challenges due to the variety of schemas and objectives that differ across tasks. Recent advancements hint at the potential for universal approaches to model such tasks, referred to as Universal Information Extraction (UIE). While handling diverse tasks in one model, their generalization is limited since they are actually learning task-specific knowledge. In this study, we introduce an innovative paradigm known as TRUE-UIE, wherein all IE tasks are aligned to learn the same goals: extracting mention spans and two universal relations named NEXT and IS. During the decoding process, the NEXT relation is utilized to group related elements, while the IS relation, in conjunction with structured language prompts, undertakes the role of type recognition. Additionally, we consider the sequential dependency of tokens during span extraction, an aspect often overlooked in prevalent models. Our empirical experiments indicate that TRUE-UIE achieves state-of-the-art performance on established benchmarks encompassing 16 datasets, spanning 7 diverse IE tasks. Further evaluations reveal that our approach effectively shares knowledge between different IE tasks, showcasing significant transferability in zero-shot and few-shot scenarios.
[ "Wang, Yucheng", "Yu, Bowen", "Liu, Yilin", "Lu, Shudong" ]
TRUE-UIE: Two Universal Relations Unify Information Extraction Tasks
naacl-long.103
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.104.bib
https://aclanthology.org/2024.naacl-long.104/
@inproceedings{ding-etal-2024-zrllm, title = "zr{LLM}: Zero-Shot Relational Learning on Temporal Knowledge Graphs with Large Language Models", author = "Ding, Zifeng and Cai, Heling and Wu, Jingpei and Ma, Yunpu and Liao, Ruotong and Xiong, Bo and Tresp, Volker", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.104", doi = "10.18653/v1/2024.naacl-long.104", pages = "1877--1895", abstract = "Modeling evolving knowledge over temporal knowledge graphs (TKGs) has become a heated topic. Various methods have been proposed to forecast links on TKGs. Most of them are embedding-based, where hidden representations are learned to represent knowledge graph (KG) entities and relations based on the observed graph contexts. Although these methods show strong performance on traditional TKG forecasting (TKGF) benchmarks, they face a strong challenge in modeling the unseen zero-shot relations that have no prior graph context. In this paper, we try to mitigate this problem as follows. We first input the text descriptions of KG relations into large language models (LLMs) for generating relation representations, and then introduce them into embedding-based TKGF methods. LLM-empowered representations can capture the semantic information in the relation descriptions. This makes the relations, whether seen or unseen, with similar semantic meanings stay close in the embedding space, enabling TKGF models to recognize zero-shot relations even without any observed graph context. Experimental results show that our approach helps TKGF models to achieve much better performance in forecasting the facts with previously unseen relations, while still maintaining their ability in link forecasting regarding seen relations.", }
Modeling evolving knowledge over temporal knowledge graphs (TKGs) has become a heated topic. Various methods have been proposed to forecast links on TKGs. Most of them are embedding-based, where hidden representations are learned to represent knowledge graph (KG) entities and relations based on the observed graph contexts. Although these methods show strong performance on traditional TKG forecasting (TKGF) benchmarks, they face a strong challenge in modeling the unseen zero-shot relations that have no prior graph context. In this paper, we try to mitigate this problem as follows. We first input the text descriptions of KG relations into large language models (LLMs) for generating relation representations, and then introduce them into embedding-based TKGF methods. LLM-empowered representations can capture the semantic information in the relation descriptions. This makes the relations, whether seen or unseen, with similar semantic meanings stay close in the embedding space, enabling TKGF models to recognize zero-shot relations even without any observed graph context. Experimental results show that our approach helps TKGF models to achieve much better performance in forecasting the facts with previously unseen relations, while still maintaining their ability in link forecasting regarding seen relations.
[ "Ding, Zifeng", "Cai, Heling", "Wu, Jingpei", "Ma, Yunpu", "Liao, Ruotong", "Xiong, Bo", "Tresp, Volker" ]
zrLLM: Zero-Shot Relational Learning on Temporal Knowledge Graphs with Large Language Models
naacl-long.104
Oral
2311.10112
[ "https://github.com/zifengding/zrllm" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.105.bib
https://aclanthology.org/2024.naacl-long.105/
@inproceedings{qiu-etal-2024-embodied, title = "Embodied Executable Policy Learning with Language-based Scene Summarization", author = "Qiu, Jielin and Xu, Mengdi and Han, William and Moon, Seungwhan and Zhao, Ding", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.105", doi = "10.18653/v1/2024.naacl-long.105", pages = "1896--1913", abstract = "Large Language models (LLMs) have shown remarkable success in assisting robot learning tasks, i.e., complex household planning.However, the performance of pretrained LLMs heavily relies on domain-specific templated text data, which may be infeasible in real-world robot learning tasks with image-based observations. Moreover, existing LLMs with text inputs lack the capability to evolve with non-expert interactions with environments.In this work, we introduce a novel learning paradigm that generates robots{'} executable actions in the form of text, derived solely from visual observations. Our proposed paradigm stands apart from previous works, which utilized either language instructions or a combination of language and visual data as inputs. We demonstrate that our proposed method can employ two fine-tuning strategies, including imitation learning and reinforcement learning approaches, to adapt to the target test tasks effectively.We conduct extensive experiments involving various model selections, environments, and tasks across 7 house layouts in the VirtualHome environment. Our experimental results demonstrate that our method surpasses existing baselines, confirming the effectiveness of this novel learning paradigm.", }
Large Language Models (LLMs) have shown remarkable success in assisting robot learning tasks, i.e., complex household planning. However, the performance of pretrained LLMs heavily relies on domain-specific templated text data, which may be infeasible in real-world robot learning tasks with image-based observations. Moreover, existing LLMs with text inputs lack the capability to evolve with non-expert interactions with environments. In this work, we introduce a novel learning paradigm that generates robots' executable actions in the form of text, derived solely from visual observations. Our proposed paradigm stands apart from previous works, which utilized either language instructions or a combination of language and visual data as inputs. We demonstrate that our proposed method can employ two fine-tuning strategies, including imitation learning and reinforcement learning approaches, to adapt to the target test tasks effectively. We conduct extensive experiments involving various model selections, environments, and tasks across 7 house layouts in the VirtualHome environment. Our experimental results demonstrate that our method surpasses existing baselines, confirming the effectiveness of this novel learning paradigm.
[ "Qiu, Jielin", "Xu, Mengdi", "Han, William", "Moon, Seungwhan", "Zhao, Ding" ]
Embodied Executable Policy Learning with Language-based Scene Summarization
naacl-long.105
Poster
2306.05696
[ "" ]
https://huggingface.co/papers/2306.05696
3
3
0
5
1
[]
[]
[]
https://aclanthology.org/2024.naacl-long.106.bib
https://aclanthology.org/2024.naacl-long.106/
@inproceedings{wang-zhao-2024-metacognitive, title = "Metacognitive Prompting Improves Understanding in Large Language Models", author = "Wang, Yuqing and Zhao, Yun", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.106", doi = "10.18653/v1/2024.naacl-long.106", pages = "1914--1926", abstract = "In Large Language Models (LLMs), there have been consistent advancements in task-specific performance, largely influenced by effective prompt design. Recent advancements in prompting have enhanced reasoning in logic-intensive tasks for LLMs, yet the nuanced understanding abilities of these models, crucial for processing and interpreting complex information, remain underexplored. In this study, we introduce Metacognitive Prompting (MP), a strategy inspired by human introspective reasoning processes. Using MP, LLMs undergo a systematic series of structured, self-aware evaluations, drawing on both their vast inherent knowledge and new insights. We conduct extensive experiments on four prevalent LLMs: Llama2, PaLM2, GPT-3.5, and GPT-4, across ten natural language understanding (NLU) datasets from GLUE, SuperGLUE, BLUE, and LexGLUE benchmarks. Additionally, we compare our method with chain-of-thought prompting and its advanced versions. The results show that GPT-4 consistently excels across all tasks, while other models have shown significant progress in some tasks when used in conjunction with MP. Furthermore, MP consistently outperforms existing prompting methods in both general and domain-specific NLU tasks. This study underscores the potential to amplify the understanding abilities of LLMs and highlights the benefits of mirroring human introspective reasoning in NLU tasks.", }
In Large Language Models (LLMs), there have been consistent advancements in task-specific performance, largely influenced by effective prompt design. Recent advancements in prompting have enhanced reasoning in logic-intensive tasks for LLMs, yet the nuanced understanding abilities of these models, crucial for processing and interpreting complex information, remain underexplored. In this study, we introduce Metacognitive Prompting (MP), a strategy inspired by human introspective reasoning processes. Using MP, LLMs undergo a systematic series of structured, self-aware evaluations, drawing on both their vast inherent knowledge and new insights. We conduct extensive experiments on four prevalent LLMs: Llama2, PaLM2, GPT-3.5, and GPT-4, across ten natural language understanding (NLU) datasets from GLUE, SuperGLUE, BLUE, and LexGLUE benchmarks. Additionally, we compare our method with chain-of-thought prompting and its advanced versions. The results show that GPT-4 consistently excels across all tasks, while other models have shown significant progress in some tasks when used in conjunction with MP. Furthermore, MP consistently outperforms existing prompting methods in both general and domain-specific NLU tasks. This study underscores the potential to amplify the understanding abilities of LLMs and highlights the benefits of mirroring human introspective reasoning in NLU tasks.
[ "Wang, Yuqing", "Zhao, Yun" ]
Metacognitive Prompting Improves Understanding in Large Language Models
naacl-long.106
Poster
2308.05342
[ "https://github.com/eternityyw/metacognitive-prompting" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.107.bib
https://aclanthology.org/2024.naacl-long.107/
@inproceedings{ge-etal-2024-mart, title = "{MART}: Improving {LLM} Safety with Multi-round Automatic Red-Teaming", author = "Ge, Suyu and Zhou, Chunting and Hou, Rui and Khabsa, Madian and Wang, Yi-Chia and Wang, Qifan and Han, Jiawei and Mao, Yuning", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.107", doi = "10.18653/v1/2024.naacl-long.107", pages = "1927--1937", abstract = "Red-teaming is a common practice for mitigating unsafe behaviors in Large Language Models (LLMs), which involves thoroughly assessing LLMs to identify potential flaws and addressing them with responsible and accurate responses.While effective, manual red-teaming is costly, and existing automatic red-teaming typically discovers safety risks without addressing them.In this paper, we propose a Multi-round Automatic Red-Teaming (MART) method, which incorporates both automatic adversarial prompt writing and safe response generation, significantly increasing red-teaming scalability and the safety of the target LLM.Specifically, an adversarial LLM and a target LLM interplay with each other in an iterative manner, where the adversarial LLM aims to generate challenging prompts that elicit unsafe responses from the target LLM, while the target LLM is fine-tuned with safety aligned data on these adversarial prompts. In each round, the adversarial LLM crafts better attacks on the updated target LLM, while the target LLM also improves itself through safety fine-tuning.On adversarial prompt benchmarks, the violation rate of an LLM with limited safety alignment reduces up to 84.7{\%} after 4 rounds of MART, achieving comparable performance to LLMs with extensive adversarial prompt writing. Notably, model helpfulness on non-adversarial prompts remains stable throughout iterations, indicating the target LLM maintains strong performance on instruction following.", }
Red-teaming is a common practice for mitigating unsafe behaviors in Large Language Models (LLMs), which involves thoroughly assessing LLMs to identify potential flaws and addressing them with responsible and accurate responses. While effective, manual red-teaming is costly, and existing automatic red-teaming typically discovers safety risks without addressing them. In this paper, we propose a Multi-round Automatic Red-Teaming (MART) method, which incorporates both automatic adversarial prompt writing and safe response generation, significantly increasing red-teaming scalability and the safety of the target LLM. Specifically, an adversarial LLM and a target LLM interplay with each other in an iterative manner, where the adversarial LLM aims to generate challenging prompts that elicit unsafe responses from the target LLM, while the target LLM is fine-tuned with safety aligned data on these adversarial prompts. In each round, the adversarial LLM crafts better attacks on the updated target LLM, while the target LLM also improves itself through safety fine-tuning. On adversarial prompt benchmarks, the violation rate of an LLM with limited safety alignment reduces up to 84.7% after 4 rounds of MART, achieving comparable performance to LLMs with extensive adversarial prompt writing. Notably, model helpfulness on non-adversarial prompts remains stable throughout iterations, indicating the target LLM maintains strong performance on instruction following.
[ "Ge, Suyu", "Zhou, Chunting", "Hou, Rui", "Khabsa, Madian", "Wang, Yi-Chia", "Wang, Qifan", "Han, Jiawei", "Mao, Yuning" ]
MART: Improving LLM Safety with Multi-round Automatic Red-Teaming
naacl-long.107
Poster
2311.07689
[ "" ]
https://huggingface.co/papers/2311.07689
6
7
0
8
1
[]
[]
[]
https://aclanthology.org/2024.naacl-long.108.bib
https://aclanthology.org/2024.naacl-long.108/
@inproceedings{lee-etal-2024-dialogcc, title = "{D}ialog{CC}: An Automated Pipeline for Creating High-Quality Multi-Modal Dialogue Dataset", author = "Lee, Young-Jun and Ko, Byungsoo and Kim, Han-Gyu and Hyeon, Jonghwan and Choi, Ho-Jin", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.108", doi = "10.18653/v1/2024.naacl-long.108", pages = "1938--1963", abstract = "As sharing images in an instant message is a crucial factor, there has been active research on learning an image-text multi-modal dialogue models.However, training a well-generalized multi-modal dialogue model remains challenging due to the low quality and limited diversity of images per dialogue in existing multi-modal dialogue datasets.In this paper, we propose an automated pipeline to construct a multi-modal dialogue dataset, ensuring both dialogue quality and image diversity without requiring minimum human effort. In our pipeline, to guarantee the coherence between images and dialogue, we prompt GPT-4 to infer potential image-sharing moments - specifically, the utterance, speaker, rationale, and image description. Furthermore, we leverage CLIP similarity to maintain consistency between aligned multiple images to the utterance.Through this pipeline, we introduce DialogCC, a high-quality and diverse multi-modal dialogue dataset that surpasses existing datasets in terms of quality and diversity in human evaluation.Our comprehensive experiments highlight that when multi-modal dialogue models are trained using our dataset, their generalization performance on unseen dialogue datasets is significantly enhanced. We make our source code and dataset publicly available (https://dialogcc.github.io/).", }
As sharing images in an instant message is a crucial factor, there has been active research on learning image-text multi-modal dialogue models. However, training a well-generalized multi-modal dialogue model remains challenging due to the low quality and limited diversity of images per dialogue in existing multi-modal dialogue datasets. In this paper, we propose an automated pipeline to construct a multi-modal dialogue dataset, ensuring both dialogue quality and image diversity without requiring minimum human effort. In our pipeline, to guarantee the coherence between images and dialogue, we prompt GPT-4 to infer potential image-sharing moments - specifically, the utterance, speaker, rationale, and image description. Furthermore, we leverage CLIP similarity to maintain consistency between aligned multiple images to the utterance. Through this pipeline, we introduce DialogCC, a high-quality and diverse multi-modal dialogue dataset that surpasses existing datasets in terms of quality and diversity in human evaluation. Our comprehensive experiments highlight that when multi-modal dialogue models are trained using our dataset, their generalization performance on unseen dialogue datasets is significantly enhanced. We make our source code and dataset publicly available (https://dialogcc.github.io/).
[ "Lee, Young-Jun", "Ko, Byungsoo", "Kim, Han-Gyu", "Hyeon, Jonghwan", "Choi, Ho-Jin" ]
DialogCC: An Automated Pipeline for Creating High-Quality Multi-Modal Dialogue Dataset
naacl-long.108
Poster
2212.04119
[ "https://github.com/passing2961/dialogcc" ]
https://huggingface.co/papers/2212.04119
1
1
0
4
1
[]
[ "passing2961/dialogcc" ]
[]
https://aclanthology.org/2024.naacl-long.109.bib
https://aclanthology.org/2024.naacl-long.109/
@inproceedings{lu-etal-2024-routing, title = "Routing to the Expert: Efficient Reward-guided Ensemble of Large Language Models", author = "Lu, Keming and Yuan, Hongyi and Lin, Runji and Lin, Junyang and Yuan, Zheng and Zhou, Chang and Zhou, Jingren", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.109", doi = "10.18653/v1/2024.naacl-long.109", pages = "1964--1974", abstract = "The complementary potential of Large Language Models (LLM) assumes off-the-shelf LLMs have heterogeneous expertise in a wide range of domains and tasks so that an ensemble of LLMs can achieve consistently better performance. Existing ensemble methods for LLMs mainly focus on reward model ranking of outputs, leading to significant computation overhead. To combat this issue, we revisit the complementary potential of LLMs and further elaborate on it by mining latent expertise with off-the-shelf reward models. We propose ZOOTER, a reward-guided routing method distilling rewards on training queries to train a routing function, which can precisely distribute each query to the LLM with expertise about it. We also integrate a tag-based label enhancement to mitigate noise from uncertainty when using rewards as silver supervision. ZOOTER shows computation efficiency in inference as it only introduces minor computation overhead of a routing function compared with reward model ranking methods. We evaluate ZOOTER on a comprehensive benchmark collection with 26 subsets in different domains and tasks. ZOOTER outperforms the best single model on average and ranks first on 44{\%} of tasks, even surpassing multiple reward model ranking methods.", }
The complementary potential of Large Language Models (LLM) assumes off-the-shelf LLMs have heterogeneous expertise in a wide range of domains and tasks so that an ensemble of LLMs can achieve consistently better performance. Existing ensemble methods for LLMs mainly focus on reward model ranking of outputs, leading to significant computation overhead. To combat this issue, we revisit the complementary potential of LLMs and further elaborate on it by mining latent expertise with off-the-shelf reward models. We propose ZOOTER, a reward-guided routing method distilling rewards on training queries to train a routing function, which can precisely distribute each query to the LLM with expertise about it. We also integrate a tag-based label enhancement to mitigate noise from uncertainty when using rewards as silver supervision. ZOOTER shows computation efficiency in inference as it only introduces minor computation overhead of a routing function compared with reward model ranking methods. We evaluate ZOOTER on a comprehensive benchmark collection with 26 subsets in different domains and tasks. ZOOTER outperforms the best single model on average and ranks first on 44% of tasks, even surpassing multiple reward model ranking methods.
[ "Lu, Keming", "Yuan, Hongyi", "Lin, Runji", "Lin, Junyang", "Yuan, Zheng", "Zhou, Chang", "Zhou, Jingren" ]
Routing to the Expert: Efficient Reward-guided Ensemble of Large Language Models
naacl-long.109
Poster
2311.08692
[ "" ]
https://huggingface.co/papers/2311.08692
5
12
0
7
1
[]
[]
[]
https://aclanthology.org/2024.naacl-long.110.bib
https://aclanthology.org/2024.naacl-long.110/
@inproceedings{liu-etal-2024-automatic, title = "Automatic Generation of Model and Data Cards: A Step Towards Responsible {AI}", author = "Liu, Jiarui and Li, Wenkai and Jin, Zhijing and Diab, Mona", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.110", doi = "10.18653/v1/2024.naacl-long.110", pages = "1975--1997", abstract = "In an era of model and data proliferation in machine learning/AI especially marked by the rapid advancement of open-sourced technologies, there arises a critical need for standardized consistent documentation. Our work addresses the information incompleteness in current human-written model and data cards. We propose an automated generation approach using Large Language Models (LLMs). Our key contributions include the establishment of CardBench, a comprehensive dataset aggregated from over 4.8k model cards and 1.4k data cards, coupled with the development of the CardGen pipeline comprising a two-step retrieval process. Our approach exhibits enhanced completeness, objectivity, and faithfulness in generated model and data cards, a significant step in responsible AI documentation practices ensuring better accountability and traceability.", }
In an era of model and data proliferation in machine learning/AI especially marked by the rapid advancement of open-sourced technologies, there arises a critical need for standardized consistent documentation. Our work addresses the information incompleteness in current human-written model and data cards. We propose an automated generation approach using Large Language Models (LLMs). Our key contributions include the establishment of CardBench, a comprehensive dataset aggregated from over 4.8k model cards and 1.4k data cards, coupled with the development of the CardGen pipeline comprising a two-step retrieval process. Our approach exhibits enhanced completeness, objectivity, and faithfulness in generated model and data cards, a significant step in responsible AI documentation practices ensuring better accountability and traceability.
[ "Liu, Jiarui", "Li, Wenkai", "Jin, Zhijing", "Diab, Mona" ]
Automatic Generation of Model and Data Cards: A Step Towards Responsible AI
naacl-long.110
Poster
2405.06258
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.111.bib
https://aclanthology.org/2024.naacl-long.111/
@inproceedings{liu-etal-2024-fun, title = "{FUN} with Fisher: Improving Generalization of Adapter-Based Cross-lingual Transfer with Scheduled Unfreezing", author = "Liu, Chen and Pfeiffer, Jonas and Vuli{\'c}, Ivan and Gurevych, Iryna", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.111", doi = "10.18653/v1/2024.naacl-long.111", pages = "1998--2015", abstract = "Standard fine-tuning of language models typically performs well on $\textit{in-distribution data}$, but suffers with generalization to $\textit{distribution shifts}$. In this work, we aim to improve the generalization of adapter-based cross-lingual task transfer where such cross-language distribution shifts are imminent. We investigate scheduled unfreezing algorithms {--}originally proposed to mitigate catastrophic forgetting in transfer learning {--} for fine-tuning task adapters. Our experiments show that scheduled unfreezing methods close the gap to full fine-tuning and achieve stronger cross-lingual transfer performance, suggesting that these methods can go beyond just mitigating catastrophic forgetting. Next, aiming to understand these empirical findings, we investigate the learning dynamics of scheduled unfreezing using Fisher Information. Our experiments reveal that scheduled unfreezing induces different learning dynamics compared to standard fine-tuning, and provide evidence that the dynamics of Fisher Information during training correlate with cross-lingual generalization performance. We additionally propose a general scheduled unfreezing algorithm that achieves an average of 2 points improvement over four datasets compared to standard fine-tuning and provides empirical evidence for a theory-based justification of the heuristic unfreezing schedule for task adapter training.", }
Standard fine-tuning of language models typically performs well on in-distribution data, but suffers with generalization to distribution shifts. In this work, we aim to improve the generalization of adapter-based cross-lingual task transfer where such cross-language distribution shifts are imminent. We investigate scheduled unfreezing algorithms, originally proposed to mitigate catastrophic forgetting in transfer learning, for fine-tuning task adapters. Our experiments show that scheduled unfreezing methods close the gap to full fine-tuning and achieve stronger cross-lingual transfer performance, suggesting that these methods can go beyond just mitigating catastrophic forgetting. Next, aiming to understand these empirical findings, we investigate the learning dynamics of scheduled unfreezing using Fisher Information. Our experiments reveal that scheduled unfreezing induces different learning dynamics compared to standard fine-tuning, and provide evidence that the dynamics of Fisher Information during training correlate with cross-lingual generalization performance. We additionally propose a general scheduled unfreezing algorithm that achieves an average of 2 points improvement over four datasets compared to standard fine-tuning and provides empirical evidence for a theory-based justification of the heuristic unfreezing schedule for task adapter training.
[ "Liu, Chen", "Pfeiffer, Jonas", "Vuli{\\'c}, Ivan", "Gurevych, Iryna" ]
FUN with Fisher: Improving Generalization of Adapter-Based Cross-lingual Transfer with Scheduled Unfreezing
naacl-long.111
Poster
2301.05487
[ "https://github.com/ukplab/naacl2024-fun" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.112.bib
https://aclanthology.org/2024.naacl-long.112/
@inproceedings{liu-etal-2024-multilingual, title = "Are Multilingual {LLM}s Culturally-Diverse Reasoners? An Investigation into Multicultural Proverbs and Sayings", author = "Liu, Chen and Koto, Fajri and Baldwin, Timothy and Gurevych, Iryna", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.112", doi = "10.18653/v1/2024.naacl-long.112", pages = "2016--2039", abstract = "Large language models (LLMs) are highly adept at question answering and reasoning tasks, but when reasoning in a situational context, human expectations vary depending on the relevant cultural common ground. As languages are associated with diverse cultures, LLMs should also be culturally-diverse reasoners. In this paper, we study the ability of a wide range of state-of-the-art multilingual LLMs (mLLMs) to reason with proverbs and sayings in a conversational context. Our experiments reveal that: (1) mLLMs {``}know{''} limited proverbs and memorizing proverbs does not mean understanding them within a conversational context; (2) mLLMs struggle to reason with figurative proverbs and sayings, and when asked to select the wrong answer (instead of asking it to select the correct answer); and (3) there is a {``}culture gap{''} in mLLMs when reasoning about proverbs and sayings translated from other languages. We construct and release our evaluation dataset MAPS (MulticulturAl Proverbs and Sayings) for proverb understanding with conversational context for six different languages.", }
Large language models (LLMs) are highly adept at question answering and reasoning tasks, but when reasoning in a situational context, human expectations vary depending on the relevant cultural common ground. As languages are associated with diverse cultures, LLMs should also be culturally-diverse reasoners. In this paper, we study the ability of a wide range of state-of-the-art multilingual LLMs (mLLMs) to reason with proverbs and sayings in a conversational context. Our experiments reveal that: (1) mLLMs "know" limited proverbs and memorizing proverbs does not mean understanding them within a conversational context; (2) mLLMs struggle to reason with figurative proverbs and sayings, and when asked to select the wrong answer (instead of asking it to select the correct answer); and (3) there is a "culture gap" in mLLMs when reasoning about proverbs and sayings translated from other languages. We construct and release our evaluation dataset MAPS (MulticulturAl Proverbs and Sayings) for proverb understanding with conversational context for six different languages.
[ "Liu, Chen", "Koto, Fajri", "Baldwin, Timothy", "Gurevych, Iryna" ]
Are Multilingual LLMs Culturally-Diverse Reasoners? An Investigation into Multicultural Proverbs and Sayings
naacl-long.112
Poster
2309.08591
[ "https://github.com/UKPLab/maps" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.113.bib
https://aclanthology.org/2024.naacl-long.113/
@inproceedings{lissak-etal-2024-colorful, title = "The Colorful Future of {LLM}s: Evaluating and Improving {LLM}s as Emotional Supporters for Queer Youth", author = "Lissak, Shir and Calderon, Nitay and Shenkman, Geva and Ophir, Yaakov and Fruchter, Eyal and Brunstein Klomek, Anat and Reichart, Roi", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.113", doi = "10.18653/v1/2024.naacl-long.113", pages = "2040--2079", abstract = "Queer youth face increased mental health risks, such as depression, anxiety, and suicidal ideation. Hindered by negative stigma, they often avoid seeking help and rely on online resources, which may provide incompatible information. Although access to a supportive environment and reliable information is invaluable, many queer youth worldwide have no access to such support. However, this could soon change due to the rapid adoption of Large Language Models (LLMs) such as ChatGPT. This paper aims to comprehensively explore the potential of LLMs to revolutionize emotional support for queers. To this end, we conduct a qualitative and quantitative analysis of LLM{'}s interactions with queer-related content. To evaluate response quality, we develop a novel ten-question scale that is inspired by psychological standards and expert input. We apply this scale to score several LLMs and human comments to posts where queer youth seek advice and share experiences. We find that LLM responses are supportive and inclusive, outscoring humans. However, they tend to be generic, not empathetic enough, and lack personalization, resulting in nonreliable and potentially harmful advice. We discuss these challenges, demonstrate that a dedicated prompt can improve the performance, and propose a blueprint of an LLM-supporter that actively (but sensitively) seeks user context to provide personalized, empathetic, and reliable responses. Our annotated dataset is available for further research.*https://github.com/nitaytech/LGBTeenDataset", }
Queer youth face increased mental health risks, such as depression, anxiety, and suicidal ideation. Hindered by negative stigma, they often avoid seeking help and rely on online resources, which may provide incompatible information. Although access to a supportive environment and reliable information is invaluable, many queer youth worldwide have no access to such support. However, this could soon change due to the rapid adoption of Large Language Models (LLMs) such as ChatGPT. This paper aims to comprehensively explore the potential of LLMs to revolutionize emotional support for queers. To this end, we conduct a qualitative and quantitative analysis of LLM's interactions with queer-related content. To evaluate response quality, we develop a novel ten-question scale that is inspired by psychological standards and expert input. We apply this scale to score several LLMs and human comments to posts where queer youth seek advice and share experiences. We find that LLM responses are supportive and inclusive, outscoring humans. However, they tend to be generic, not empathetic enough, and lack personalization, resulting in nonreliable and potentially harmful advice. We discuss these challenges, demonstrate that a dedicated prompt can improve the performance, and propose a blueprint of an LLM-supporter that actively (but sensitively) seeks user context to provide personalized, empathetic, and reliable responses. Our annotated dataset is available for further research: https://github.com/nitaytech/LGBTeenDataset
[ "Lissak, Shir", "Calderon, Nitay", "Shenkman, Geva", "Ophir, Yaakov", "Fruchter, Eyal", "Brunstein Klomek, Anat", "Reichart, Roi" ]
The Colorful Future of LLMs: Evaluating and Improving LLMs as Emotional Supporters for Queer Youth
naacl-long.113
Poster
2402.11886
[ "https://github.com/nitaytech/lgbteendataset" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.114.bib
https://aclanthology.org/2024.naacl-long.114/
@inproceedings{zhao-etal-2024-iped, title = "{IPED}: An Implicit Perspective for Relational Triple Extraction based on Diffusion Model", author = "Zhao, Jianli and Xu, Changhao and Jiang, Bin.", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.114", doi = "10.18653/v1/2024.naacl-long.114", pages = "2080--2092", abstract = "Relational triple extraction is a fundamental task in the field of information extraction, and a promising framework based on table filling has recently gained attention as a potential baseline for entity relation extraction. However, inherent shortcomings such as redundant information and incomplete triple recognition remain problematic. To address these challenges, we propose an Implicit Perspective for relational triple Extraction based on Diffusion model (IPED), an innovative approach for extracting relational triples. Our classifier-free solution adopts an implicit strategy using block coverage to complete the tables, avoiding the limitations of explicit tagging methods. Additionally, we introduce a generative model structure, the block-denoising diffusion model, to collaborate with our implicit perspective and effectively circumvent redundant information disruptions. Experimental results on two popular datasets demonstrate that IPED achieves state-of-the-art performance while gaining superior inference speed and low computational complexity. To support future research, we have made our source code publicly available online.", }
Relational triple extraction is a fundamental task in the field of information extraction, and a promising framework based on table filling has recently gained attention as a potential baseline for entity relation extraction. However, inherent shortcomings such as redundant information and incomplete triple recognition remain problematic. To address these challenges, we propose an Implicit Perspective for relational triple Extraction based on Diffusion model (IPED), an innovative approach for extracting relational triples. Our classifier-free solution adopts an implicit strategy using block coverage to complete the tables, avoiding the limitations of explicit tagging methods. Additionally, we introduce a generative model structure, the block-denoising diffusion model, to collaborate with our implicit perspective and effectively circumvent redundant information disruptions. Experimental results on two popular datasets demonstrate that IPED achieves state-of-the-art performance while gaining superior inference speed and low computational complexity. To support future research, we have made our source code publicly available online.
[ "Zhao, Jianli", "Xu, Changhao", "Jiang, Bin." ]
IPED: An Implicit Perspective for Relational Triple Extraction based on Diffusion Model
naacl-long.114
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.115.bib
https://aclanthology.org/2024.naacl-long.115/
@inproceedings{murahari-etal-2024-qualeval, title = "{Q}ual{E}val: Qualitative Evaluation for Model Improvement", author = "Murahari, Vishvak and Deshpande, Ameet and Clark, Peter and Rajpurohit, Tanmay and Sabharwal, Ashish and Narasimhan, Karthik and Kalyan, Ashwin", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.115", doi = "10.18653/v1/2024.naacl-long.115", pages = "2093--2111", abstract = "Quantitative evaluation metrics have been pivotal in gauging the advancements of AI systems like large language models (LLMs).However, due to the intricate nature of real-world tasks, a single scalar to quantify and compare performance trivializes the fine-grained nuances of model behavior. Additionally, metrics do not yield actionable diagnostics for model improvement, thus requiring extensive manual efforts of scientists, involving sifting through vast datasets and attempting hit-or-miss adjustments to training data or setups. In this work, we address the shortcomings of quantitative metrics by proposing QualEval, which uses automated qualitative evaluation as a vehicle for model improvement. QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights that when applied, accelerate model improvement. The insights are supported by a dashboard report with fine-grained visualizations and human-interpretable analyses. We corroborate the faithfulness of QualEval by demonstrating that leveraging its insights, for example, improves the absolute performance of the Llama 2 model by up to 15{\%} points relative on a challenging dialogue task (DialogSum) when compared to baselines. QualEval successfully increases the pace and quality of model development by eliminating the need of arduous manual analysis, thus serving as a data-scientist-in-a-box.", }
Quantitative evaluation metrics have been pivotal in gauging the advancements of AI systems like large language models (LLMs). However, due to the intricate nature of real-world tasks, a single scalar to quantify and compare performance trivializes the fine-grained nuances of model behavior. Additionally, metrics do not yield actionable diagnostics for model improvement, thus requiring extensive manual efforts of scientists, involving sifting through vast datasets and attempting hit-or-miss adjustments to training data or setups. In this work, we address the shortcomings of quantitative metrics by proposing QualEval, which uses automated qualitative evaluation as a vehicle for model improvement. QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights that when applied, accelerate model improvement. The insights are supported by a dashboard report with fine-grained visualizations and human-interpretable analyses. We corroborate the faithfulness of QualEval by demonstrating that leveraging its insights, for example, improves the absolute performance of the Llama 2 model by up to 15% points relative on a challenging dialogue task (DialogSum) when compared to baselines. QualEval successfully increases the pace and quality of model development by eliminating the need of arduous manual analysis, thus serving as a data-scientist-in-a-box.
[ "Murahari, Vishvak", "Deshp", "e, Ameet", "Clark, Peter", "Rajpurohit, Tanmay", "Sabharwal, Ashish", "Narasimhan, Karthik", "Kalyan, Ashwin" ]
QualEval: Qualitative Evaluation for Model Improvement
naacl-long.115
Oral
2311.02807
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.116.bib
https://aclanthology.org/2024.naacl-long.116/
@inproceedings{yan-etal-2024-quantum, title = "Quantum-inspired Language Model with Lindblad Master Equation and Interference Measurement for Sentiment Analysis", author = "Yan, Kehuan and Lai, Peichao and Wang, Yilei", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.116", doi = "10.18653/v1/2024.naacl-long.116", pages = "2112--2121", abstract = "Quantum-inspired models have demonstrated superior performance in many downstream language tasks, such as question answering and sentiment analysis. However, recent models primarily focus on embedding and measurement operations, overlooking the significance of the quantum evolution process. In this work, we present a novel quantum-inspired neural network, LI-QiLM, which integrates the Lindblad Master Equation (LME) to model the evolution process and the interferometry to the measurement process, providing more physical meaning to strengthen the interpretability. We conduct comprehensive experiments on six sentiment analysis datasets. Compared to the traditional neural networks, transformer-based pre-trained models and quantum-inspired models, such as CICWE-QNN and ComplexQNN, the proposed method demonstrates superior performance in accuracy and F1-score on six commonly used datasets for sentiment analysis. Additional ablation tests verify the effectiveness of LME and interferometry.", }
Quantum-inspired models have demonstrated superior performance in many downstream language tasks, such as question answering and sentiment analysis. However, recent models primarily focus on embedding and measurement operations, overlooking the significance of the quantum evolution process. In this work, we present a novel quantum-inspired neural network, LI-QiLM, which integrates the Lindblad Master Equation (LME) to model the evolution process and the interferometry to the measurement process, providing more physical meaning to strengthen the interpretability. We conduct comprehensive experiments on six sentiment analysis datasets. Compared to the traditional neural networks, transformer-based pre-trained models and quantum-inspired models, such as CICWE-QNN and ComplexQNN, the proposed method demonstrates superior performance in accuracy and F1-score on six commonly used datasets for sentiment analysis. Additional ablation tests verify the effectiveness of LME and interferometry.
[ "Yan, Kehuan", "Lai, Peichao", "Wang, Yilei" ]
Quantum-inspired Language Model with Lindblad Master Equation and Interference Measurement for Sentiment Analysis
naacl-long.116
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.117.bib
https://aclanthology.org/2024.naacl-long.117/
@inproceedings{zhu-etal-2024-vislinginstruct, title = "{V}is{L}ing{I}nstruct: Elevating Zero-Shot Learning in Multi-Modal Language Models with Autonomous Instruction Optimization", author = "Zhu, Dongsheng and Tang, Daniel and Han, Weidong and Lu, Jinghui and Zhao, Yukun and Xing, Guoliang and Wang, Junfeng and Yin, Dawei", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.117", doi = "10.18653/v1/2024.naacl-long.117", pages = "2122--2135", abstract = "This paper presents VisLingInstruct, a novel approach to advancing Multi-Modal Language Models (MMLMs) in zero-shot learning. Current MMLMs show impressive zero-shot abilities in multi-modal tasks, but their performance depends heavily on the quality of instructions. VisLingInstruct tackles this by autonomously evaluating and optimizing instructional texts through In-Context Learning, improving the synergy between visual perception and linguistic expression in MMLMs. Alongside this instructional advancement, we have also optimized the visual feature extraction modules in MMLMs, further augmenting their responsiveness to textual content. Our comprehensive experiments on MMLMs, based on FlanT5 and Vicuna, show that VisLingInstruct significantly improves zero-shot performance in visual multi-modal tasks. Notably, it achieves a 13.1{\%} and 9{\%} increase in accuracy over the prior state-of-the-art on the TextVQA and HatefulMemes datasets. Our main code is available at https://github.com/Zhudongsheng75/VisLingInstruct", }
This paper presents VisLingInstruct, a novel approach to advancing Multi-Modal Language Models (MMLMs) in zero-shot learning. Current MMLMs show impressive zero-shot abilities in multi-modal tasks, but their performance depends heavily on the quality of instructions. VisLingInstruct tackles this by autonomously evaluating and optimizing instructional texts through In-Context Learning, improving the synergy between visual perception and linguistic expression in MMLMs. Alongside this instructional advancement, we have also optimized the visual feature extraction modules in MMLMs, further augmenting their responsiveness to textual content. Our comprehensive experiments on MMLMs, based on FlanT5 and Vicuna, show that VisLingInstruct significantly improves zero-shot performance in visual multi-modal tasks. Notably, it achieves a 13.1{\%} and 9{\%} increase in accuracy over the prior state-of-the-art on the TextVQA and HatefulMemes datasets. Our main code is available at https://github.com/Zhudongsheng75/VisLingInstruct
[ "Zhu, Dongsheng", "Tang, Daniel", "Han, Weidong", "Lu, Jinghui", "Zhao, Yukun", "Xing, Guoliang", "Wang, Junfeng", "Yin, Dawei" ]
VisLingInstruct: Elevating Zero-Shot Learning in Multi-Modal Language Models with Autonomous Instruction Optimization
naacl-long.117
Poster
2402.07398
[ "https://github.com/zhudongsheng75/vislinginstruct" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.118.bib
https://aclanthology.org/2024.naacl-long.118/
@inproceedings{ding-etal-2024-wolf, title = "A Wolf in Sheep{'}s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily", author = "Ding, Peng and Kuang, Jun and Ma, Dan and Cao, Xuezhi and Xian, Yunsen and Chen, Jiajun and Huang, Shujian", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.118", doi = "10.18653/v1/2024.naacl-long.118", pages = "2136--2153", abstract = "Large Language Models (LLMs), such as ChatGPT and GPT-4, are designed to provide useful and safe responses. However, adversarial prompts known as {`}jailbreaks{'} can circumvent safeguards, leading LLMs to generate potentially harmful content. Exploring jailbreak prompts can help to better reveal the weaknesses of LLMs and further steer us to secure them. Unfortunately, existing jailbreak methods either suffer from intricate manual design or require optimization on other white-box models, which compromises either generalization or efficiency. In this paper, we generalize jailbreak prompt attacks into two aspects: (1) Prompt Rewriting and (2) Scenario Nesting. Based on this, we propose ReNeLLM, an automatic framework that leverages LLMs themselves to generate effective jailbreak prompts. Extensive experiments demonstrate that ReNeLLM significantly improves the attack success rate while greatly reducing the time cost compared to existing baselines. Our study also reveals the inadequacy of current defense methods in safeguarding LLMs. Finally, we analyze the failure of LLMs defense from the perspective of prompt execution priority, and propose corresponding defense strategies. We hope that our research can catalyze both the academic community and LLMs developers towards the provision of safer and more regulated LLMs. The code is available at https://github.com/NJUNLP/ReNeLLM.", }
Large Language Models (LLMs), such as ChatGPT and GPT-4, are designed to provide useful and safe responses. However, adversarial prompts known as {`}jailbreaks{'} can circumvent safeguards, leading LLMs to generate potentially harmful content. Exploring jailbreak prompts can help to better reveal the weaknesses of LLMs and further steer us to secure them. Unfortunately, existing jailbreak methods either suffer from intricate manual design or require optimization on other white-box models, which compromises either generalization or efficiency. In this paper, we generalize jailbreak prompt attacks into two aspects: (1) Prompt Rewriting and (2) Scenario Nesting. Based on this, we propose ReNeLLM, an automatic framework that leverages LLMs themselves to generate effective jailbreak prompts. Extensive experiments demonstrate that ReNeLLM significantly improves the attack success rate while greatly reducing the time cost compared to existing baselines. Our study also reveals the inadequacy of current defense methods in safeguarding LLMs. Finally, we analyze the failure of LLMs defense from the perspective of prompt execution priority, and propose corresponding defense strategies. We hope that our research can catalyze both the academic community and LLMs developers towards the provision of safer and more regulated LLMs. The code is available at https://github.com/NJUNLP/ReNeLLM.
[ "Ding, Peng", "Kuang, Jun", "Ma, Dan", "Cao, Xuezhi", "Xian, Yunsen", "Chen, Jiajun", "Huang, Shujian" ]
A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily
naacl-long.118
Oral
2311.08268
[ "https://github.com/NJUNLP/ReNeLLM" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.119.bib
https://aclanthology.org/2024.naacl-long.119/
@inproceedings{liu-etal-2024-p3sum, title = "{P}$^3${S}um: Preserving Author{'}s Perspective in News Summarization with Diffusion Language Models", author = "Liu, Yuhan and Feng, Shangbin and Han, Xiaochuang and Balachandran, Vidhisha and Park, Chan Young and Kumar, Sachin and Tsvetkov, Yulia", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.119", doi = "10.18653/v1/2024.naacl-long.119", pages = "2154--2173", abstract = "In this work, we take a first step towards designing summarization systems that are faithful to the author{'}s intent, not only the semantic content of the article. Focusing on a case study of preserving political perspectives in news summarization, we find that existing approaches alter the political opinions and stances of news articles in more than 50{\%} of summaries, misrepresenting the intent and perspectives of the news authors. We thus propose P$^3$Sum, a diffusion model-based summarization approach controlled by political perspective classifiers. In P$^3$Sum, the political leaning of a generated summary is iteratively evaluated at each decoding step, and any drift from the article{'}s original stance incurs a loss back-propagated to the embedding layers, steering the political stance of the summary at inference time. Extensive experiments on three news summarization datasets demonstrate that P$^3$Sum outperforms state-of-the-art summarization systems and large language models by up to 13.7{\%} in terms of the success rate of stance preservation, with competitive performance on standard metrics of summarization quality. Our findings present a first analysis of preservation of pragmatic features in summarization, highlight the lacunae in existing summarization models{---}that even state-of-the-art models often struggle to preserve author{'}s intents{---}and develop new summarization systems that are more faithful to author{'}s perspectives.", }
In this work, we take a first step towards designing summarization systems that are faithful to the author{'}s intent, not only the semantic content of the article. Focusing on a case study of preserving political perspectives in news summarization, we find that existing approaches alter the political opinions and stances of news articles in more than 50{\%} of summaries, misrepresenting the intent and perspectives of the news authors. We thus propose P$^3$Sum, a diffusion model-based summarization approach controlled by political perspective classifiers. In P$^3$Sum, the political leaning of a generated summary is iteratively evaluated at each decoding step, and any drift from the article{'}s original stance incurs a loss back-propagated to the embedding layers, steering the political stance of the summary at inference time. Extensive experiments on three news summarization datasets demonstrate that P$^3$Sum outperforms state-of-the-art summarization systems and large language models by up to 13.7{\%} in terms of the success rate of stance preservation, with competitive performance on standard metrics of summarization quality. Our findings present a first analysis of preservation of pragmatic features in summarization, highlight the lacunae in existing summarization models{---}that even state-of-the-art models often struggle to preserve author{'}s intents{---}and develop new summarization systems that are more faithful to author{'}s perspectives.
[ "Liu, Yuhan", "Feng, Shangbin", "Han, Xiaochuang", "Balach", "ran, Vidhisha", "Park, Chan Young", "Kumar, Sachin", "Tsvetkov, Yulia" ]
P^3Sum: Preserving Author's Perspective in News Summarization with Diffusion Language Models
naacl-long.119
Poster
2311.09741
[ "https://github.com/lyh6560new/p3sum" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.120.bib
https://aclanthology.org/2024.naacl-long.120/
@inproceedings{wang-etal-2024-bridging, title = "Bridging the Novice-Expert Gap via Models of Decision-Making: A Case Study on Remediating Math Mistakes", author = "Wang, Rose and Zhang, Qingyang and Robinson, Carly and Loeb, Susanna and Demszky, Dorottya", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.120", doi = "10.18653/v1/2024.naacl-long.120", pages = "2174--2199", abstract = "Scaling high-quality tutoring remains a major challenge in education. Due to growing demand, many platforms employ novice tutors who, unlike experienced educators, struggle to address student mistakes and thus fail to seize prime learning opportunities. Our work explores the potential of large language models (LLMs) to close the novice-expert knowledge gap in remediating math mistakes. We contribute Bridge, a method that uses cognitive task analysis to translate an expert{'}s latent thought process into a decision-making model for remediation. This involves an expert identifying (A) the student{'}s error, (B) a remediation strategy, and (C) their intention before generating a response. We construct a dataset of 700 real tutoring conversations, annotated by experts with their decisions. We evaluate state-of-the-art LLMs on our dataset and find that the expert{'}s decision-making model is critical for LLMs to close the gap: responses from GPT4 with expert decisions (e.g., {``}simplify the problem{''}) are +76{\%} more preferred than without. Additionally, context-sensitive decisions are critical to closing pedagogical gaps: random decisions decrease GPT4{'}s response quality by -97{\%} than expert decisions. Our work shows the potential of embedding expert thought processes in LLM generations to enhance their capability to bridge novice-expert knowledge gaps. Our dataset and code can be found at: https://github.com/rosewang2008/bridge.", }
Scaling high-quality tutoring remains a major challenge in education. Due to growing demand, many platforms employ novice tutors who, unlike experienced educators, struggle to address student mistakes and thus fail to seize prime learning opportunities. Our work explores the potential of large language models (LLMs) to close the novice-expert knowledge gap in remediating math mistakes. We contribute Bridge, a method that uses cognitive task analysis to translate an expert{'}s latent thought process into a decision-making model for remediation. This involves an expert identifying (A) the student{'}s error, (B) a remediation strategy, and (C) their intention before generating a response. We construct a dataset of 700 real tutoring conversations, annotated by experts with their decisions. We evaluate state-of-the-art LLMs on our dataset and find that the expert{'}s decision-making model is critical for LLMs to close the gap: responses from GPT4 with expert decisions (e.g., {``}simplify the problem{''}) are +76{\%} more preferred than without. Additionally, context-sensitive decisions are critical to closing pedagogical gaps: random decisions decrease GPT4{'}s response quality by -97{\%} than expert decisions. Our work shows the potential of embedding expert thought processes in LLM generations to enhance their capability to bridge novice-expert knowledge gaps. Our dataset and code can be found at: https://github.com/rosewang2008/bridge.
[ "Wang, Rose", "Zhang, Qingyang", "Robinson, Carly", "Loeb, Susanna", "Demszky, Dorottya" ]
Bridging the Novice-Expert Gap via Models of Decision-Making: A Case Study on Remediating Math Mistakes
naacl-long.120
Poster
2310.10648
[ "https://github.com/rosewang2008/bridge" ]
https://huggingface.co/papers/2310.10648
1
0
0
5
1
[]
[ "rose-e-wang/bridge" ]
[]
https://aclanthology.org/2024.naacl-long.121.bib
https://aclanthology.org/2024.naacl-long.121/
@inproceedings{pu-demberg-2024-rst, title = "{RST}-{L}o{RA}: A Discourse-Aware Low-Rank Adaptation for Long Document Abstractive Summarization", author = "Pu, Dongqi and Demberg, Vera", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.121", doi = "10.18653/v1/2024.naacl-long.121", pages = "2200--2220", abstract = "For long document summarization, discourse structure is important to discern the key content of the text and the differences in importance level between sentences. Unfortunately, the integration of rhetorical structure theory (RST) into parameter-efficient fine-tuning strategies for long document summarization remains unexplored. Therefore, this paper introduces RST-LoRA and proposes four RST-aware variants to explicitly incorporate RST into the LoRA model. Our empirical evaluation demonstrates that incorporating the type and uncertainty of rhetorical relations can complementarily enhance the performance of LoRA in summarization tasks. Furthermore, the best-performing variant we introduced outperforms the vanilla LoRA and full-parameter fine-tuning models, as confirmed by multiple automatic and human evaluations, and even surpasses previous state-of-the-art methods.", }
For long document summarization, discourse structure is important to discern the key content of the text and the differences in importance level between sentences. Unfortunately, the integration of rhetorical structure theory (RST) into parameter-efficient fine-tuning strategies for long document summarization remains unexplored. Therefore, this paper introduces RST-LoRA and proposes four RST-aware variants to explicitly incorporate RST into the LoRA model. Our empirical evaluation demonstrates that incorporating the type and uncertainty of rhetorical relations can complementarily enhance the performance of LoRA in summarization tasks. Furthermore, the best-performing variant we introduced outperforms the vanilla LoRA and full-parameter fine-tuning models, as confirmed by multiple automatic and human evaluations, and even surpasses previous state-of-the-art methods.
[ "Pu, Dongqi", "Demberg, Vera" ]
RST-LoRA: A Discourse-Aware Low-Rank Adaptation for Long Document Abstractive Summarization
naacl-long.121
Oral
2405.00657
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.122.bib
https://aclanthology.org/2024.naacl-long.122/
@inproceedings{lu-etal-2024-strings, title = "Strings from the Library of Babel: Random Sampling as a Strong Baseline for Prompt Optimisation", author = "Lu, Yao and Wang, Jiayi and Tang, Raphael and Riedel, Sebastian and Stenetorp, Pontus", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.122", doi = "10.18653/v1/2024.naacl-long.122", pages = "2221--2231", abstract = "Recent prompt optimisation approaches use the generative nature of language models to produce prompts {--} even rivaling the performance of human-curated prompts. In this paper, we demonstrate that randomly sampling tokens from the model vocabulary as {``}separators{''} can be as effective as language models for prompt-style text classification. Our experiments show that random separators are competitive baselines, having less than a 1{\%} difference compared to previous self-optimisation methods and showing a 12{\%} average relative improvement over strong human baselines across nine text classification tasks and eight language models. We further analyse this phenomenon in detail using three different random generation strategies, establishing that the language space is rich with potentially good separators, with a greater than 40{\%} average chance that a randomly drawn separator performs better than human-curated separators. These observations challenge the common assumption that an effective prompt should be human readable or task relevant and establish a strong baseline for prompt optimisation research.", }
Recent prompt optimisation approaches use the generative nature of language models to produce prompts {--} even rivaling the performance of human-curated prompts. In this paper, we demonstrate that randomly sampling tokens from the model vocabulary as {``}separators{''} can be as effective as language models for prompt-style text classification. Our experiments show that random separators are competitive baselines, having less than a 1{\%} difference compared to previous self-optimisation methods and showing a 12{\%} average relative improvement over strong human baselines across nine text classification tasks and eight language models. We further analyse this phenomenon in detail using three different random generation strategies, establishing that the language space is rich with potentially good separators, with a greater than 40{\%} average chance that a randomly drawn separator performs better than human-curated separators. These observations challenge the common assumption that an effective prompt should be human readable or task relevant and establish a strong baseline for prompt optimisation research.
[ "Lu, Yao", "Wang, Jiayi", "Tang, Raphael", "Riedel, Sebastian", "Stenetorp, Pontus" ]
Strings from the Library of Babel: Random Sampling as a Strong Baseline for Prompt Optimisation
naacl-long.122
Poster
2311.09569
[ "https://github.com/yaolu/random-prompt" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.123.bib
https://aclanthology.org/2024.naacl-long.123/
@inproceedings{duan-etal-2024-reta, title = "{R}e{TA}: Recursively Thinking Ahead to Improve the Strategic Reasoning of Large Language Models", author = "Duan, Jinhao and Wang, Shiqi and Diffenderfer, James and Sun, Lichao and Chen, Tianlong and Kailkhura, Bhavya and Xu, Kaidi", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.123", doi = "10.18653/v1/2024.naacl-long.123", pages = "2232--2246", abstract = "Current logical reasoning evaluations of Large Language Models (LLMs) primarily focus on single-turn and static environments, such as arithmetic problems. The crucial problem of multi-turn, strategic reasoning is under-explored. In this work, we analyze the multi-turn strategic reasoning of LLMs through text-driven complete- and incomplete-information gaming, e.g., board games (Tic-Tac-Toe, Connect-4) and poker games (Texas Hold{'}em Poker). Specifically, we consider two distinct scenarios: 1) Online Racing, featuring multiple LLMs/agents to facilitate direct competition and comparison; 2) Offline Probing, constructing targeted questions with verified ground truth to evaluate LLMs{'} strategic behaviors. Experimental results demonstrate that existing state-of-the-art LLMs and reasoning schemes are largely ineffective for strategic reasoning tasks. To mitigate these limitations, we propose a simple yet effective Recursively Thinking-Ahead (ReTA) agent, incorporating a recursive prompting mechanism that automatically analyzes the opponents{'} future moves/actions and assigns reward signals for these situations, to strengthen the strategic reasoning of LLMs. We hope our work could spur further research and exploration in the multi-turn strategic reasoning of LLMs. The code is available at https://github.com/jinhaoduan/ReTA.", }
Current logical reasoning evaluations of Large Language Models (LLMs) primarily focus on single-turn and static environments, such as arithmetic problems. The crucial problem of multi-turn, strategic reasoning is under-explored. In this work, we analyze the multi-turn strategic reasoning of LLMs through text-driven complete- and incomplete-information gaming, e.g., board games (Tic-Tac-Toe, Connect-4) and poker games (Texas Hold{'}em Poker). Specifically, we consider two distinct scenarios: 1) Online Racing, featuring multiple LLMs/agents to facilitate direct competition and comparison; 2) Offline Probing, constructing targeted questions with verified ground truth to evaluate LLMs{'} strategic behaviors. Experimental results demonstrate that existing state-of-the-art LLMs and reasoning schemes are largely ineffective for strategic reasoning tasks. To mitigate these limitations, we propose a simple yet effective Recursively Thinking-Ahead (ReTA) agent, incorporating a recursive prompting mechanism that automatically analyzes the opponents{'} future moves/actions and assigns reward signals for these situations, to strengthen the strategic reasoning of LLMs. We hope our work could spur further research and exploration in the multi-turn strategic reasoning of LLMs. The code is available at https://github.com/jinhaoduan/ReTA.
[ "Duan, Jinhao", "Wang, Shiqi", "Diffenderfer, James", "Sun, Lichao", "Chen, Tianlong", "Kailkhura, Bhavya", "Xu, Kaidi" ]
ReTA: Recursively Thinking Ahead to Improve the Strategic Reasoning of Large Language Models
naacl-long.123
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.124.bib
https://aclanthology.org/2024.naacl-long.124/
@inproceedings{karisani-ji-2024-fact, title = "Fact Checking Beyond Training Set", author = "Karisani, Payam and Ji, Heng", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.124", doi = "10.18653/v1/2024.naacl-long.124", pages = "2247--2261", abstract = "Evaluating the veracity of everyday claims is time consuming and in some cases requires domain expertise. We empirically demonstrate that the commonly used fact checking pipeline, known as the retriever-reader, suffers from performance deterioration when it is trained on the labeled data from one domain and used in another domain. Afterwards, we delve into each component of the pipeline and propose novel algorithms to address this problem. We propose an adversarial algorithm to make the retriever component robust against distribution shift. Our core idea is to initially train a bi-encoder on the labeled source data, and then, to adversarially train two separate document and claim encoders using unlabeled target data. We then focus on the reader component and propose to train it such that it is insensitive towards the order of claims and evidence documents. Our empirical evaluations support the hypothesis that such a reader shows a higher robustness against distribution shift. To our knowledge, there is no publicly available multi-topic fact checking dataset. Thus, we propose a simple automatic method to re-purpose two well-known fact checking datasets. We then construct eight fact checking scenarios from these datasets, and compare our model to a set of strong baseline models, including recent domain adaptation models that use GPT4 for generating synthetic data.", }
Evaluating the veracity of everyday claims is time consuming and in some cases requires domain expertise. We empirically demonstrate that the commonly used fact checking pipeline, known as the retriever-reader, suffers from performance deterioration when it is trained on the labeled data from one domain and used in another domain. Afterwards, we delve into each component of the pipeline and propose novel algorithms to address this problem. We propose an adversarial algorithm to make the retriever component robust against distribution shift. Our core idea is to initially train a bi-encoder on the labeled source data, and then, to adversarially train two separate document and claim encoders using unlabeled target data. We then focus on the reader component and propose to train it such that it is insensitive towards the order of claims and evidence documents. Our empirical evaluations support the hypothesis that such a reader shows a higher robustness against distribution shift. To our knowledge, there is no publicly available multi-topic fact checking dataset. Thus, we propose a simple automatic method to re-purpose two well-known fact checking datasets. We then construct eight fact checking scenarios from these datasets, and compare our model to a set of strong baseline models, including recent domain adaptation models that use GPT4 for generating synthetic data.
[ "Karisani, Payam", "Ji, Heng" ]
Fact Checking Beyond Training Set
naacl-long.124
Poster
2403.18671
[ "https://github.com/p-karisani/oodfc" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.125.bib
https://aclanthology.org/2024.naacl-long.125/
@inproceedings{kabra-etal-2024-program, title = "Program-Aided Reasoners (Better) Know What They Know", author = "Kabra, Anubha and Rangreji, Sanketh and Mathur, Yash and Madaan, Aman and Liu, Emmy and Neubig, Graham", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.125", doi = "10.18653/v1/2024.naacl-long.125", pages = "2262--2278", abstract = "Prior work shows that program-aided reasoning, in which large language models (LLMs) are combined with programs written in programming languages such as Python, can significantly improve accuracy on various reasoning tasks. However, while accuracy is essential, it is also important for such reasoners to {``}know what they know{''}, which can be quantified through the calibration of the model. In this paper, we compare the calibration of Program Aided Language Models (PAL) and text-based Chain-of-thought (COT) prompting techniques over 5 datasets and 2 model types - LLaMA models and OpenAI models. Our results indicate that PAL leads to improved calibration in 75{\%} of the instances. Our analysis uncovers that prompting styles that produce lesser diversity in generations also have more calibrated results, and thus we also experiment with inducing lower generation diversity using temperature scaling and find that for certain temperatures, PAL is not only more accurate but is also more calibrated than COT. Overall, we demonstrate that, in the majority of cases, program-aided reasoners better know what they know than text-based counterparts.", }
Prior work shows that program-aided reasoning, in which large language models (LLMs) are combined with programs written in programming languages such as Python, can significantly improve accuracy on various reasoning tasks. However, while accuracy is essential, it is also important for such reasoners to {``}know what they know{''}, which can be quantified through the calibration of the model. In this paper, we compare the calibration of Program Aided Language Models (PAL) and text-based Chain-of-thought (COT) prompting techniques over 5 datasets and 2 model types - LLaMA models and OpenAI models. Our results indicate that PAL leads to improved calibration in 75{\%} of the instances. Our analysis uncovers that prompting styles that produce lesser diversity in generations also have more calibrated results, and thus we also experiment with inducing lower generation diversity using temperature scaling and find that for certain temperatures, PAL is not only more accurate but is also more calibrated than COT. Overall, we demonstrate that, in the majority of cases, program-aided reasoners better know what they know than text-based counterparts.
[ "Kabra, Anubha", "Rangreji, Sanketh", "Mathur, Yash", "Madaan, Aman", "Liu, Emmy", "Neubig, Graham" ]
Program-Aided Reasoners (Better) Know What They Know
naacl-long.125
Poster
2311.09553
[ "https://github.com/mathuryash5/code-calibrates" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.126.bib
https://aclanthology.org/2024.naacl-long.126/
@inproceedings{fleisig-etal-2024-perspectivist, title = "The Perspectivist Paradigm Shift: Assumptions and Challenges of Capturing Human Labels", author = "Fleisig, Eve and Blodgett, Su Lin and Klein, Dan and Talat, Zeerak", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.126", doi = "10.18653/v1/2024.naacl-long.126", pages = "2279--2292", abstract = "Longstanding data labeling practices in machine learning involve collecting and aggregating labels from multiple annotators. But what should we do when annotators disagree? Though annotator disagreement has long been seen as a problem to minimize, new perspectivist approaches challenge this assumption by treating disagreement as a valuable source of information. In this position paper, we examine practices and assumptions surrounding the causes of disagreement{--}some challenged by perspectivist approaches, and some that remain to be addressed{--}as well as practical and normative challenges for work operating under these assumptions. We conclude with recommendations for the data labeling pipeline and avenues for future research engaging with subjectivity and disagreement.", }
Longstanding data labeling practices in machine learning involve collecting and aggregating labels from multiple annotators. But what should we do when annotators disagree? Though annotator disagreement has long been seen as a problem to minimize, new perspectivist approaches challenge this assumption by treating disagreement as a valuable source of information. In this position paper, we examine practices and assumptions surrounding the causes of disagreement{--}some challenged by perspectivist approaches, and some that remain to be addressed{--}as well as practical and normative challenges for work operating under these assumptions. We conclude with recommendations for the data labeling pipeline and avenues for future research engaging with subjectivity and disagreement.
[ "Fleisig, Eve", "Blodgett, Su Lin", "Klein, Dan", "Talat, Zeerak" ]
The Perspectivist Paradigm Shift: Assumptions and Challenges of Capturing Human Labels
naacl-long.126
Poster
2405.05860
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.127.bib
https://aclanthology.org/2024.naacl-long.127/
@inproceedings{elangovan-etal-2024-principles, title = "Principles from Clinical Research for {NLP} Model Generalization", author = "Elangovan, Aparna and He, Jiayuan and Li, Yuan and Verspoor, Karin", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.127", doi = "10.18653/v1/2024.naacl-long.127", pages = "2293--2309", abstract = "The NLP community typically relies on performance of a model on a held-out test set to assess generalization. Performance drops observed in datasets outside of official test sets are generally attributed to {``}out-of-distribution{''} effects. Here, we explore the foundations of generalizability and study the factors that affect it, articulating lessons from clinical studies. In clinical research, generalizability is an act of reasoning that depends on (a) *internal validity* of experiments to ensure controlled measurement of cause and effect, and (b) *external validity* or transportability of the results to the wider population. We demonstrate how learning spurious correlations, such as the distance between entities in relation extraction tasks, can affect a model{'}s internal validity and in turn adversely impact generalization. We, therefore, present the need to ensure internal validity when building machine learning models in NLP. Our recommendations also apply to generative large language models, as they are known to be sensitive to even minor semantic preserving alterations. We also propose adapting the idea of *matching* in randomized controlled trials and observational studies to NLP evaluation to measure causation.", }
The NLP community typically relies on performance of a model on a held-out test set to assess generalization. Performance drops observed in datasets outside of official test sets are generally attributed to {``}out-of-distribution{''} effects. Here, we explore the foundations of generalizability and study the factors that affect it, articulating lessons from clinical studies. In clinical research, generalizability is an act of reasoning that depends on (a) *internal validity* of experiments to ensure controlled measurement of cause and effect, and (b) *external validity* or transportability of the results to the wider population. We demonstrate how learning spurious correlations, such as the distance between entities in relation extraction tasks, can affect a model{'}s internal validity and in turn adversely impact generalization. We, therefore, present the need to ensure internal validity when building machine learning models in NLP. Our recommendations also apply to generative large language models, as they are known to be sensitive to even minor semantic preserving alterations. We also propose adapting the idea of *matching* in randomized controlled trials and observational studies to NLP evaluation to measure causation.
[ "Elangovan, Aparna", "He, Jiayuan", "Li, Yuan", "Verspoor, Karin" ]
Principles from Clinical Research for NLP Model Generalization
naacl-long.127
Oral
2311.03663
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.128.bib
https://aclanthology.org/2024.naacl-long.128/
@inproceedings{saphra-etal-2024-first, title = "First Tragedy, then Parse: History Repeats Itself in the New Era of Large Language Models", author = "Saphra, Naomi and Fleisig, Eve and Cho, Kyunghyun and Lopez, Adam", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.128", doi = "10.18653/v1/2024.naacl-long.128", pages = "2310--2326", abstract = "Many NLP researchers are experiencing an existential crisis triggered by the astonishing success of ChatGPT and other systems based on large language models (LLMs). After such a disruptive change to our understanding of the field, what is left to do? Taking a historical lens, we look for guidance from the first era of LLMs, which began in 2005 with large $n$-gram models for machine translation (MT). We identify durable lessons from the first era, and more importantly, we identify evergreen problems where NLP researchers can continue to make meaningful contributions in areas where LLMs are ascendant. We argue that disparities in scale are transient and researchers can work to reduce them; that data, rather than hardware, is still a bottleneck for many applications; that meaningful realistic evaluation is still an open problem; and that there is still room for speculative approaches.", }
Many NLP researchers are experiencing an existential crisis triggered by the astonishing success of ChatGPT and other systems based on large language models (LLMs). After such a disruptive change to our understanding of the field, what is left to do? Taking a historical lens, we look for guidance from the first era of LLMs, which began in 2005 with large $n$-gram models for machine translation (MT). We identify durable lessons from the first era, and more importantly, we identify evergreen problems where NLP researchers can continue to make meaningful contributions in areas where LLMs are ascendant. We argue that disparities in scale are transient and researchers can work to reduce them; that data, rather than hardware, is still a bottleneck for many applications; that meaningful realistic evaluation is still an open problem; and that there is still room for speculative approaches.
[ "Saphra, Naomi", "Fleisig, Eve", "Cho, Kyunghyun", "Lopez, Adam" ]
First Tragedy, then Parse: History Repeats Itself in the New Era of Large Language Models
naacl-long.128
Poster
2311.05020
[ "" ]
https://huggingface.co/papers/2311.05020
2
2
0
4
1
[]
[]
[]
https://aclanthology.org/2024.naacl-long.129.bib
https://aclanthology.org/2024.naacl-long.129/
@inproceedings{tang-etal-2024-found, title = "Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models", author = "Tang, Raphael and Zhang, Crystina and Ma, Xueguang and Lin, Jimmy and Ture, Ferhan", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.129", doi = "10.18653/v1/2024.naacl-long.129", pages = "2327--2340", abstract = "Large language models (LLMs) exhibit positional bias in how they use context, which especially affects listwise ranking. To address this, we propose permutation self-consistency, a form of self-consistency over the ranking list outputs of black-box LLMs. Our key idea is to marginalize out different list orders in the prompt to produce an order-independent ranking with less positional bias. First, given some input prompt, we repeatedly shuffle the list in the prompt and pass it through the LLM while holding the instructions the same. Next, we aggregate the resulting sample of rankings by computing the central ranking closest in distance to all of them, marginalizing out prompt order biases in the process. Theoretically, we prove the robustness of our method, showing convergence to the true ranking under random perturbations.Empirically, on five datasets in sorting and passage reranking, our approach improves scores from conventional inference by up to 34-52{\%} for Mistral, 7-18{\%} for GPT-3.5, 8-16{\%} for LLaMA v2 (70B). Our code is at https://github.com/castorini/perm-sc.", }
Large language models (LLMs) exhibit positional bias in how they use context, which especially affects listwise ranking. To address this, we propose permutation self-consistency, a form of self-consistency over the ranking list outputs of black-box LLMs. Our key idea is to marginalize out different list orders in the prompt to produce an order-independent ranking with less positional bias. First, given some input prompt, we repeatedly shuffle the list in the prompt and pass it through the LLM while holding the instructions the same. Next, we aggregate the resulting sample of rankings by computing the central ranking closest in distance to all of them, marginalizing out prompt order biases in the process. Theoretically, we prove the robustness of our method, showing convergence to the true ranking under random perturbations. Empirically, on five datasets in sorting and passage reranking, our approach improves scores from conventional inference by up to 34-52{\%} for Mistral, 7-18{\%} for GPT-3.5, and 8-16{\%} for LLaMA v2 (70B). Our code is at https://github.com/castorini/perm-sc.
[ "Tang, Raphael", "Zhang, Crystina", "Ma, Xueguang", "Lin, Jimmy", "Ture, Ferhan" ]
Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models
naacl-long.129
Poster
2310.07712
[ "https://github.com/castorini/perm-sc" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.130.bib
https://aclanthology.org/2024.naacl-long.130/
@inproceedings{wu-etal-2024-language, title = "From Language Modeling to Instruction Following: Understanding the Behavior Shift in {LLM}s after Instruction Tuning", author = "Wu, Xuansheng and Yao, Wenlin and Chen, Jianshu and Pan, Xiaoman and Wang, Xiaoyang and Liu, Ninghao and Yu, Dong", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.130", doi = "10.18653/v1/2024.naacl-long.130", pages = "2341--2369", abstract = "Large Language Models (LLMs) have achieved remarkable success, where instruction tuning is the critical step in aligning LLMs with user intentions. In this work, we investigate how the instruction tuning adjusts pre-trained models with a focus on intrinsic changes. Specifically, we first develop several local and global explanation methods, including a gradient-based method for input-output attribution, and techniques for interpreting patterns and concepts in self-attention and feed-forward layers. The impact of instruction tuning is then studied by comparing the explanations derived from the pre-trained and instruction-tuned models. This approach provides an internal perspective of the model shifts on a human-comprehensible level. Our findings reveal three significant impacts of instruction tuning: 1) It empowers LLMs to recognize the instruction parts of user prompts, and promotes the response generation constantly conditioned on the instructions. 2) It encourages the self-attention heads to capture more word-word relationships about instruction verbs. 3) It encourages the feed-forward networks to rotate their pre-trained knowledge toward user-oriented tasks. These insights contribute to a more comprehensive understanding of instruction tuning and lay the groundwork for future work that aims at explaining and optimizing LLMs for various applications. Our code and data are publicly available at https://github.com/JacksonWuxs/Interpret{\_}Instruction{\_}Tuning{\_}LLMs.", }
Large Language Models (LLMs) have achieved remarkable success, where instruction tuning is the critical step in aligning LLMs with user intentions. In this work, we investigate how the instruction tuning adjusts pre-trained models with a focus on intrinsic changes. Specifically, we first develop several local and global explanation methods, including a gradient-based method for input-output attribution, and techniques for interpreting patterns and concepts in self-attention and feed-forward layers. The impact of instruction tuning is then studied by comparing the explanations derived from the pre-trained and instruction-tuned models. This approach provides an internal perspective of the model shifts on a human-comprehensible level. Our findings reveal three significant impacts of instruction tuning: 1) It empowers LLMs to recognize the instruction parts of user prompts, and promotes the response generation constantly conditioned on the instructions. 2) It encourages the self-attention heads to capture more word-word relationships about instruction verbs. 3) It encourages the feed-forward networks to rotate their pre-trained knowledge toward user-oriented tasks. These insights contribute to a more comprehensive understanding of instruction tuning and lay the groundwork for future work that aims at explaining and optimizing LLMs for various applications. Our code and data are publicly available at https://github.com/JacksonWuxs/Interpret{\_}Instruction{\_}Tuning{\_}LLMs.
[ "Wu, Xuansheng", "Yao, Wenlin", "Chen, Jianshu", "Pan, Xiaoman", "Wang, Xiaoyang", "Liu, Ninghao", "Yu, Dong" ]
From Language Modeling to Instruction Following: Understanding the Behavior Shift in LLMs after Instruction Tuning
naacl-long.130
Oral
2310.00492
[ "https://github.com/jacksonwuxs/interpret_instruction_tuning_llms" ]
https://huggingface.co/papers/2310.00492
1
1
0
7
1
[]
[]
[]
https://aclanthology.org/2024.naacl-long.131.bib
https://aclanthology.org/2024.naacl-long.131/
@inproceedings{cheung-etal-2024-polyie, title = "{POLYIE}: A Dataset of Information Extraction from Polymer Material Scientific Literature", author = "Cheung, Jerry and Zhuang, Yuchen and Li, Yinghao and Shetty, Pranav and Zhao, Wantian and Grampurohit, Sanjeev and Ramprasad, Rampi and Zhang, Chao", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.131", doi = "10.18653/v1/2024.naacl-long.131", pages = "2370--2385", abstract = "Scientific information extraction (SciIE), which aims to automatically extract information from scientific literature, is becoming more important than ever. However, there are no existing SciIE datasets for polymer materials, which is an important class of materials used ubiquitously in our daily lives. To bridge this gap, we introduce POLYIE, a new SciIE dataset for polymer materials. POLYIE is curated from 146 full-length polymer scholarly articles, which are annotated with different named entities (i.e., materials, properties, values, conditions) as well as their N-ary relations by domain experts. POLYIE presents several unique challenges due to diverse lexical formats of entities, ambiguity between entities, and variable-length relations. We evaluate state-of-the-art named entity extraction and relation extraction models on POLYIE, analyze their strengths and weaknesses, and highlight some difficult cases for these models. To the best of our knowledge, POLYIE is the first SciIE benchmark for polymer materials, and we hope it will lead to more research efforts from the community on this challenging task. Our code and data are available on: https://github.com/jerry3027/PolyIE.", }
Scientific information extraction (SciIE), which aims to automatically extract information from scientific literature, is becoming more important than ever. However, there are no existing SciIE datasets for polymer materials, which is an important class of materials used ubiquitously in our daily lives. To bridge this gap, we introduce POLYIE, a new SciIE dataset for polymer materials. POLYIE is curated from 146 full-length polymer scholarly articles, which are annotated with different named entities (i.e., materials, properties, values, conditions) as well as their N-ary relations by domain experts. POLYIE presents several unique challenges due to diverse lexical formats of entities, ambiguity between entities, and variable-length relations. We evaluate state-of-the-art named entity extraction and relation extraction models on POLYIE, analyze their strengths and weaknesses, and highlight some difficult cases for these models. To the best of our knowledge, POLYIE is the first SciIE benchmark for polymer materials, and we hope it will lead to more research efforts from the community on this challenging task. Our code and data are available on: https://github.com/jerry3027/PolyIE.
[ "Cheung, Jerry", "Zhuang, Yuchen", "Li, Yinghao", "Shetty, Pranav", "Zhao, Wantian", "Grampurohit, Sanjeev", "Ramprasad, Rampi", "Zhang, Chao" ]
POLYIE: A Dataset of Information Extraction from Polymer Material Scientific Literature
naacl-long.131
Poster
2311.07715
[ "https://github.com/jerry3027/polyie" ]
https://huggingface.co/papers/2311.07715
0
0
0
8
1
[]
[]
[]
https://aclanthology.org/2024.naacl-long.132.bib
https://aclanthology.org/2024.naacl-long.132/
@inproceedings{zhang-etal-2024-llm-based, title = "{LLM}-based Medical Assistant Personalization with Short- and Long-Term Memory Coordination", author = "Zhang, Kai and Kang, Yangyang and Zhao, Fubang and Liu, Xiaozhong", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.132", doi = "10.18653/v1/2024.naacl-long.132", pages = "2386--2398", abstract = "Large Language Models (LLMs), such as GPT3.5, have exhibited remarkable proficiency in comprehending and generating natural language. On the other hand, medical assistants hold the potential to offer substantial benefits for individuals. However, the exploration of LLM-based personalized medical assistant remains relatively scarce. Typically, patients converse differently based on their background and preferences which necessitates the task of enhancing user-oriented medical assistant. While one can fully train an LLM for this objective, the resource consumption is unaffordable. Prior research has explored memory-based methods to enhance the response with aware of previous mistakes for new queries during a dialogue session. We contend that a mere memory module is inadequate and fully training an LLM can be excessively costly. In this study, we propose a novel computational bionic memory mechanism, equipped with a parameter-efficient fine-tuning (PEFT) schema, to personalize medical assistants. To encourage further research into this area, we are releasing a new conversation dataset generated based on an open-source medical corpus and our implementation.", }
Large Language Models (LLMs), such as GPT3.5, have exhibited remarkable proficiency in comprehending and generating natural language. On the other hand, medical assistants hold the potential to offer substantial benefits for individuals. However, the exploration of LLM-based personalized medical assistants remains relatively scarce. Typically, patients converse differently based on their background and preferences, which necessitates enhancing user-oriented medical assistants. While one can fully train an LLM for this objective, the resource consumption is unaffordable. Prior research has explored memory-based methods to enhance responses with awareness of previous mistakes for new queries during a dialogue session. We contend that a mere memory module is inadequate and fully training an LLM can be excessively costly. In this study, we propose a novel computational bionic memory mechanism, equipped with a parameter-efficient fine-tuning (PEFT) schema, to personalize medical assistants. To encourage further research into this area, we are releasing a new conversation dataset generated based on an open-source medical corpus and our implementation.
[ "Zhang, Kai", "Kang, Yangyang", "Zhao, Fubang", "Liu, Xiaozhong" ]
LLM-based Medical Assistant Personalization with Short- and Long-Term Memory Coordination
naacl-long.132
Oral
2309.11696
[ "https://github.com/matthewkkai/malp" ]
https://huggingface.co/papers/2309.11696
1
0
0
4
1
[]
[]
[]
https://aclanthology.org/2024.naacl-long.133.bib
https://aclanthology.org/2024.naacl-long.133/
@inproceedings{parnell-etal-2024-sumtra, title = "{S}um{T}ra: A Differentiable Pipeline for Few-Shot Cross-Lingual Summarization", author = "Parnell, Jacob and Jauregi Unanue, Inigo and Piccardi, Massimo", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.133", doi = "10.18653/v1/2024.naacl-long.133", pages = "2399--2415", abstract = "Cross-lingual summarization (XLS) generates summaries in a language different from that of the input documents (e.g., English to Spanish), allowing speakers of the target language to gain a concise view of their content. In the present day, the predominant approach to this task is to take a performing, pretrained multilingual language model (LM) and fine-tune it for XLS on the language pairs of interest. However, the scarcity of fine-tuning samples makes this approach challenging in some cases. For this reason, in this paper we propose revisiting the summarize-and-translate pipeline, where the summarization and translation tasks are performed in a sequence. This approach allows reusing the many, publicly-available resources for monolingual summarization and translation, obtaining a very competitive zero-shot performance. In addition, the proposed pipeline is completely differentiable end-to-end, allowing it to take advantage of few-shot fine-tuning, where available. Experiments over two contemporary and widely adopted XLS datasets (CrossSum and WikiLingua) have shown the remarkable zero-shot performance of the proposed approach, and also its strong few-shot performance compared to an equivalent multilingual LM baseline, that the proposed approach has been able to outperform in many languages with only 10{\%} of the fine-tuning samples.", }
Cross-lingual summarization (XLS) generates summaries in a language different from that of the input documents (e.g., English to Spanish), allowing speakers of the target language to gain a concise view of their content. In the present day, the predominant approach to this task is to take a performant, pretrained multilingual language model (LM) and fine-tune it for XLS on the language pairs of interest. However, the scarcity of fine-tuning samples makes this approach challenging in some cases. For this reason, in this paper we propose revisiting the summarize-and-translate pipeline, where the summarization and translation tasks are performed in a sequence. This approach allows reusing the many publicly available resources for monolingual summarization and translation, obtaining a very competitive zero-shot performance. In addition, the proposed pipeline is completely differentiable end-to-end, allowing it to take advantage of few-shot fine-tuning, where available. Experiments over two contemporary and widely adopted XLS datasets (CrossSum and WikiLingua) have shown the remarkable zero-shot performance of the proposed approach, and also its strong few-shot performance compared to an equivalent multilingual LM baseline, which the proposed approach has been able to outperform in many languages with only 10{\%} of the fine-tuning samples.
[ "Parnell, Jacob", "Jauregi Unanue, Inigo", "Piccardi, Massimo" ]
SumTra: A Differentiable Pipeline for Few-Shot Cross-Lingual Summarization
naacl-long.133
Poster
2403.13240
[ "https://github.com/jacob-parnell-rozetta/sumtra" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.134.bib
https://aclanthology.org/2024.naacl-long.134/
@inproceedings{oh-etal-2024-ktrl, title = "{KTRL}+{F}: Knowledge-Augmented In-Document Search", author = "Oh, Hanseok and Shin, Haebin and Ko, Miyoung and Lee, Hyunji and Seo, Minjoon", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.134", doi = "10.18653/v1/2024.naacl-long.134", pages = "2416--2436", abstract = "We introduce a new problem KTRL+F, a knowledge-augmented in-document search that necessitates real-time identification of all semantic targets within a document with the awareness of external sources through a single natural query. KTRL+F addresses following unique challenges for in-document search: 1) utilizing knowledge outside the document for extended use of additional information about targets, and 2) balancing between real-time applicability with the performance.We analyze various baselines in KTRL+F and find limitations of existing models, such as hallucinations, high latency, or difficulties in leveraging external knowledge. Therefore, we propose a Knowledge-Augmented Phrase Retrieval model that shows a promising balance between speed and performance by simply augmenting external knowledge in phrase embedding. We also conduct a user study to verify whether solving KTRL+F can enhance search experience for users. It demonstrates that even with our simple model, users can reduce the time for searching with less queries and reduced extra visits to other sources for collecting evidence. We encourage the research community to work on KTRL+F to enhance more efficient in-document information access.", }
We introduce a new problem, KTRL+F, a knowledge-augmented in-document search task that necessitates real-time identification of all semantic targets within a document, with awareness of external sources, through a single natural query. KTRL+F addresses the following unique challenges for in-document search: 1) utilizing knowledge outside the document for extended use of additional information about targets, and 2) balancing real-time applicability with performance. We analyze various baselines in KTRL+F and find limitations of existing models, such as hallucinations, high latency, or difficulties in leveraging external knowledge. Therefore, we propose a Knowledge-Augmented Phrase Retrieval model that shows a promising balance between speed and performance by simply augmenting external knowledge in phrase embedding. We also conduct a user study to verify whether solving KTRL+F can enhance the search experience for users. It demonstrates that even with our simple model, users can reduce the time spent searching, with fewer queries and fewer extra visits to other sources for collecting evidence. We encourage the research community to work on KTRL+F to enable more efficient in-document information access.
[ "Oh, Hanseok", "Shin, Haebin", "Ko, Miyoung", "Lee, Hyunji", "Seo, Minjoon" ]
KTRL+F: Knowledge-Augmented In-Document Search
naacl-long.134
Poster
2311.08329
[ "https://github.com/hanseokoh/ktrlf" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.135.bib
https://aclanthology.org/2024.naacl-long.135/
@inproceedings{lee-etal-2024-well, title = "How Well Do Large Language Models Truly Ground?", author = "Lee, Hyunji and Joo, Se June and Kim, Chaeeun and Jang, Joel and Kim, Doyoung and On, Kyoung-Woon and Seo, Minjoon", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.135", doi = "10.18653/v1/2024.naacl-long.135", pages = "2437--2465", abstract = "To reduce issues like hallucinations and lack of control in Large Language Models (LLMs), a common method is to generate responses by grounding on external contexts given as input, known as knowledge-augmented models. However, previous research often narrowly defines {``}grounding{''} as just having the correct answer, which does not ensure the reliability of the entire response. To overcome this, we propose a stricter definition of grounding: a model is truly grounded if it (1) fully utilizes the necessary knowledge from the provided context, and (2) stays within the limits of that knowledge. We introduce a new dataset and a grounding metric to evaluate model capability under the definition. We perform experiments across 25 LLMs of different sizes and training methods and provide insights into factors that influence grounding performance. Our findings contribute to a better understanding of how to improve grounding capabilities and suggest an area of improvement toward more reliable and controllable LLM applications.", }
To reduce issues like hallucinations and lack of control in Large Language Models (LLMs), a common method is to generate responses by grounding on external contexts given as input, known as knowledge-augmented models. However, previous research often narrowly defines {``}grounding{''} as just having the correct answer, which does not ensure the reliability of the entire response. To overcome this, we propose a stricter definition of grounding: a model is truly grounded if it (1) fully utilizes the necessary knowledge from the provided context, and (2) stays within the limits of that knowledge. We introduce a new dataset and a grounding metric to evaluate model capability under the definition. We perform experiments across 25 LLMs of different sizes and training methods and provide insights into factors that influence grounding performance. Our findings contribute to a better understanding of how to improve grounding capabilities and suggest an area of improvement toward more reliable and controllable LLM applications.
[ "Lee, Hyunji", "Joo, Se June", "Kim, Chaeeun", "Jang, Joel", "Kim, Doyoung", "On, Kyoung-Woon", "Seo, Minjoon" ]
How Well Do Large Language Models Truly Ground?
naacl-long.135
Oral
2311.09069
[ "https://github.com/kaistai/how-well-do-llms-truly-ground" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.136.bib
https://aclanthology.org/2024.naacl-long.136/
@inproceedings{varadarajan-etal-2024-alba, title = "{ALBA}: Adaptive Language-Based Assessments for Mental Health", author = {Varadarajan, Vasudha and Sikstr{\"o}m, Sverker and Kjell, Oscar and Schwartz, H.}, editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.136", doi = "10.18653/v1/2024.naacl-long.136", pages = "2466--2478", abstract = "Mental health issues differ widely among individuals, with varied signs and symptoms. Recently, language-based assessments haveshown promise in capturing this diversity, but they require a substantial sample of words per person for accuracy. This work introducesthe task of Adaptive Language-Based Assessment (ALBA), which involves adaptively ordering questions while also scoring an individual{'}s latent psychological trait using limited language responses to previous questions. To this end, we develop adaptive testing methods under two psychometric measurement theories: Classical Test Theory and Item Response Theory.We empirically evaluate ordering and scoring strategies, organizing into two new methods: a semi-supervised item response theory-basedmethod (ALIRT) and a supervised Actor-Critic model. While we found both methods to improve over non-adaptive baselines, We foundALIRT to be the most accurate and scalable, achieving the highest accuracy with fewer questions (e.g., Pearson r {\mbox{$\approx$}} 0.93 after only 3 questions as compared to typically needing at least 7 questions). In general, adaptive language-based assessments of depression and anxiety were able to utilize a smaller sample of language without compromising validity or large computational costs.", }
Mental health issues differ widely among individuals, with varied signs and symptoms. Recently, language-based assessments have shown promise in capturing this diversity, but they require a substantial sample of words per person for accuracy. This work introduces the task of Adaptive Language-Based Assessment (ALBA), which involves adaptively ordering questions while also scoring an individual{'}s latent psychological trait using limited language responses to previous questions. To this end, we develop adaptive testing methods under two psychometric measurement theories: Classical Test Theory and Item Response Theory. We empirically evaluate ordering and scoring strategies, organized into two new methods: a semi-supervised item response theory-based method (ALIRT) and a supervised Actor-Critic model. While we found both methods to improve over non-adaptive baselines, we found ALIRT to be the most accurate and scalable, achieving the highest accuracy with fewer questions (e.g., Pearson r {\mbox{$\approx$}} 0.93 after only 3 questions, as compared to typically needing at least 7 questions). In general, adaptive language-based assessments of depression and anxiety were able to utilize a smaller sample of language without compromising validity or incurring large computational costs.
[ "Varadarajan, Vasudha", "Sikstr{\\\"o}m, Sverker", "Kjell, Oscar", "Schwartz, H." ]
ALBA: Adaptive Language-Based Assessments for Mental Health
naacl-long.136
Poster
2311.06467
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.137.bib
https://aclanthology.org/2024.naacl-long.137/
@inproceedings{zhou-etal-2024-freb, title = "{FREB}-{TQA}: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering", author = "Zhou, Wei and Mesgar, Mohsen and Adel, Heike and Friedrich, Annemarie", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.137", doi = "10.18653/v1/2024.naacl-long.137", pages = "2479--2497", abstract = "Table Question Answering (TQA) aims at composing an answer to a question based on tabular data. While prior research has shown that TQA models lack robustness, understanding the underlying cause and nature of this issue remains predominantly unclear, posing a significant obstacle to the development of robust TQA systems. In this paper, we formalize three major desiderata for a fine-grained evaluation of robustness of TQA systems. They should (i) answer questions regardless of alterations in table structure, (ii) base their responses on the content of relevant cells rather than on biases, and (iii) demonstrate robust numerical reasoning capabilities. To investigate these aspects, we create and publish a novel TQA evaluation benchmark in English. Our extensive experimental analysis reveals that none of the examined state-of-the-art TQA systems consistently excels in these three aspects. Our benchmark is a crucial instrument for monitoring the behavior of TQA systems and paves the way for the development of robust TQA systems. We release our benchmark publicly.", }
Table Question Answering (TQA) aims at composing an answer to a question based on tabular data. While prior research has shown that TQA models lack robustness, understanding the underlying cause and nature of this issue remains predominantly unclear, posing a significant obstacle to the development of robust TQA systems. In this paper, we formalize three major desiderata for a fine-grained evaluation of robustness of TQA systems. They should (i) answer questions regardless of alterations in table structure, (ii) base their responses on the content of relevant cells rather than on biases, and (iii) demonstrate robust numerical reasoning capabilities. To investigate these aspects, we create and publish a novel TQA evaluation benchmark in English. Our extensive experimental analysis reveals that none of the examined state-of-the-art TQA systems consistently excels in these three aspects. Our benchmark is a crucial instrument for monitoring the behavior of TQA systems and paves the way for the development of robust TQA systems. We release our benchmark publicly.
[ "Zhou, Wei", "Mesgar, Mohsen", "Adel, Heike", "Friedrich, Annemarie" ]
FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering
naacl-long.137
Poster
2404.18585
[ "https://github.com/boschresearch/freb-tqa" ]
https://huggingface.co/papers/2404.18585
0
0
0
4
1
[ "sunatte/txt2sql" ]
[ "LegendNNT/data_instruction" ]
[ "Justinrune/LLaMA-Factory", "smarttang/blingsec" ]
https://aclanthology.org/2024.naacl-long.138.bib
https://aclanthology.org/2024.naacl-long.138/
@inproceedings{jia-etal-2024-mill, title = "{MILL}: Mutual Verification with Large Language Models for Zero-Shot Query Expansion", author = "Jia, Pengyue and Liu, Yiding and Zhao, Xiangyu and Li, Xiaopeng and Hao, Changying and Wang, Shuaiqiang and Yin, Dawei", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.138", doi = "10.18653/v1/2024.naacl-long.138", pages = "2498--2518", abstract = "Query expansion, pivotal in search engines, enhances the representation of user information needs with additional terms. While existing methods expand queries using retrieved or generated contextual documents, each approach has notable limitations. Retrieval-based methods often fail to accurately capture search intent, particularly with brief or ambiguous queries. Generation-based methods, utilizing large language models (LLMs), generally lack corpus-specific knowledge and entail high fine-tuning costs. To address these gaps, we propose a novel zero-shot query expansion framework utilizing LLMs for mutual verification. Specifically, we first design a query-query-document generation method, leveraging LLMs{'} zero-shot reasoning ability to produce diverse sub-queries and corresponding documents. Then, a mutual verification process synergizes generated and retrieved documents for optimal expansion. Our proposed method is fully zero-shot, and extensive experiments on three public benchmark datasets are conducted to demonstrate its effectiveness over existing methods. Our code is available online at https://github.com/Applied-Machine-Learning-Lab/MILL to ease reproduction.", }
Query expansion, pivotal in search engines, enhances the representation of user information needs with additional terms. While existing methods expand queries using retrieved or generated contextual documents, each approach has notable limitations. Retrieval-based methods often fail to accurately capture search intent, particularly with brief or ambiguous queries. Generation-based methods, utilizing large language models (LLMs), generally lack corpus-specific knowledge and entail high fine-tuning costs. To address these gaps, we propose a novel zero-shot query expansion framework utilizing LLMs for mutual verification. Specifically, we first design a query-query-document generation method, leveraging LLMs{'} zero-shot reasoning ability to produce diverse sub-queries and corresponding documents. Then, a mutual verification process synergizes generated and retrieved documents for optimal expansion. Our proposed method is fully zero-shot, and extensive experiments on three public benchmark datasets are conducted to demonstrate its effectiveness over existing methods. Our code is available online at https://github.com/Applied-Machine-Learning-Lab/MILL to ease reproduction.
[ "Jia, Pengyue", "Liu, Yiding", "Zhao, Xiangyu", "Li, Xiaopeng", "Hao, Changying", "Wang, Shuaiqiang", "Yin, Dawei" ]
MILL: Mutual Verification with Large Language Models for Zero-Shot Query Expansion
naacl-long.138
Poster
2310.19056
[ "https://github.com/applied-machine-learning-lab/mill" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.139.bib
https://aclanthology.org/2024.naacl-long.139/
@inproceedings{perlitz-etal-2024-efficient, title = "Efficient Benchmarking (of Language Models)", author = "Perlitz, Yotam and Bandel, Elron and Gera, Ariel and Arviv, Ofir and Ein-Dor, Liat and Shnarch, Eyal and Slonim, Noam and Shmueli-Scheuer, Michal and Choshen, Leshem", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.139", doi = "10.18653/v1/2024.naacl-long.139", pages = "2519--2536", abstract = "The increasing versatility of language models (LMs) has given rise to a new class of benchmarks that comprehensively assess a broad range of capabilities. Such benchmarks are associated with massive computational costs, extending to thousands of GPU hours per model. However, the efficiency aspect of these evaluation efforts had raised little discussion in the literature.In this work, we present the problem of Efficient Benchmarking, namely, intelligently reducing the computation costs of LM evaluation without compromising reliability. Using the HELM benchmark as a test case, we investigate how different benchmark design choices affect the computation-reliability trade-off. We propose to evaluate the reliability of such decisions, by using a new measure {--} Decision Impact on Reliability, DIoR for short.We find, for example, that a benchmark leader may change by merely removing a low-ranked model from the benchmark, and observe that a correct benchmark ranking can be obtained by considering only a fraction of the evaluation examples.Based on our findings, we outline a set of concrete recommendations for efficient benchmark design and utilization practices. To take a step further, we use our findings to propose an evaluation algorithm, that, when applied to the HELM benchmark, leads to dramatic cost savings with minimal loss of benchmark reliability, often reducing computation by x100 or more.", }
The increasing versatility of language models (LMs) has given rise to a new class of benchmarks that comprehensively assess a broad range of capabilities. Such benchmarks are associated with massive computational costs, extending to thousands of GPU hours per model. However, the efficiency aspect of these evaluation efforts has raised little discussion in the literature. In this work, we present the problem of Efficient Benchmarking, namely, intelligently reducing the computation costs of LM evaluation without compromising reliability. Using the HELM benchmark as a test case, we investigate how different benchmark design choices affect the computation-reliability trade-off. We propose to evaluate the reliability of such decisions by using a new measure {--} Decision Impact on Reliability, DIoR for short. We find, for example, that a benchmark leader may change by merely removing a low-ranked model from the benchmark, and observe that a correct benchmark ranking can be obtained by considering only a fraction of the evaluation examples. Based on our findings, we outline a set of concrete recommendations for efficient benchmark design and utilization practices. To take a step further, we use our findings to propose an evaluation algorithm that, when applied to the HELM benchmark, leads to dramatic cost savings with minimal loss of benchmark reliability, often reducing computation by 100x or more.
[ "Perlitz, Yotam", "Bandel, Elron", "Gera, Ariel", "Arviv, Ofir", "Ein-Dor, Liat", "Shnarch, Eyal", "Slonim, Noam", "Shmueli-Scheuer, Michal", "Choshen, Leshem" ]
Efficient Benchmarking (of Language Models)
naacl-long.139
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.140.bib
https://aclanthology.org/2024.naacl-long.140/
@inproceedings{arad-etal-2024-refact, title = "{R}e{FACT}: Updating Text-to-Image Models by Editing the Text Encoder", author = "Arad, Dana and Orgad, Hadas and Belinkov, Yonatan", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.140", doi = "10.18653/v1/2024.naacl-long.140", pages = "2537--2558", abstract = "Our world is marked by unprecedented technological, global, and socio-political transformations, posing a significant challenge to textto-image generative models. These models encode factual associations within their parameters that can quickly become outdated, diminishing their utility for end-users. To that end, we introduce ReFACT, a novel approach for editing factual associations in text-to-image models without relaying on explicit input from end-users or costly re-training. ReFACT updates the weights of a specific layer in the text encoder, modifying only a tiny portion of the model{'}s parameters and leaving the rest of the model unaffected.We empirically evaluate ReFACT on an existing benchmark, alongside a newly curated dataset.Compared to other methods, ReFACT achieves superior performance in both generalization to related concepts and preservation of unrelated concepts.Furthermore, ReFACT maintains image generation quality, making it a practical tool for updating and correcting factual information in text-to-image models.", }
Our world is marked by unprecedented technological, global, and socio-political transformations, posing a significant challenge to text-to-image generative models. These models encode factual associations within their parameters that can quickly become outdated, diminishing their utility for end-users. To that end, we introduce ReFACT, a novel approach for editing factual associations in text-to-image models without relying on explicit input from end-users or costly re-training. ReFACT updates the weights of a specific layer in the text encoder, modifying only a tiny portion of the model{'}s parameters and leaving the rest of the model unaffected. We empirically evaluate ReFACT on an existing benchmark, alongside a newly curated dataset. Compared to other methods, ReFACT achieves superior performance in both generalization to related concepts and preservation of unrelated concepts. Furthermore, ReFACT maintains image generation quality, making it a practical tool for updating and correcting factual information in text-to-image models.
[ "Arad, Dana", "Orgad, Hadas", "Belinkov, Yonatan" ]
ReFACT: Updating Text-to-Image Models by Editing the Text Encoder
naacl-long.140
Poster
2306.00738
[ "https://github.com/technion-cs-nlp/refact" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.141.bib
https://aclanthology.org/2024.naacl-long.141/
@inproceedings{akavarapu-bhattacharya-2024-likelihood, title = "A Likelihood Ratio Test of Genetic Relationship among Languages", author = "Akavarapu, V.S.D.S.Mahesh and Bhattacharya, Arnab", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.141", doi = "10.18653/v1/2024.naacl-long.141", pages = "2559--2570", abstract = "Lexical resemblances among a group of languages indicate that the languages could be genetically related, i.e., they could have descended from a common ancestral language. However, such resemblances can arise by chance and, hence, need not always imply an underlying genetic relationship. Many tests of significance based on permutation of wordlists and word similarity measures appeared in the past to determine the statistical significance of such relationships. We demonstrate that although existing tests may work well for bilateral comparisons, i.e., on pairs of languages, they are either infeasible by design or are prone to yield false positives when applied to groups of languages or language families. To this end, inspired by molecular phylogenetics, we propose a likelihood ratio test to determine if given languages are related based on the proportion of invariant character sites in the aligned wordlists applied during tree inference. Further, we evaluate some language families and show that the proposed test solves the problem of false positives. Finally, we demonstrate that the test supports the existence of macro language families such as Nostratic and Macro-Mayan.", }
Lexical resemblances among a group of languages indicate that the languages could be genetically related, i.e., they could have descended from a common ancestral language. However, such resemblances can arise by chance and, hence, need not always imply an underlying genetic relationship. Many tests of significance based on permutation of wordlists and word similarity measures appeared in the past to determine the statistical significance of such relationships. We demonstrate that although existing tests may work well for bilateral comparisons, i.e., on pairs of languages, they are either infeasible by design or are prone to yield false positives when applied to groups of languages or language families. To this end, inspired by molecular phylogenetics, we propose a likelihood ratio test to determine if given languages are related based on the proportion of invariant character sites in the aligned wordlists applied during tree inference. Further, we evaluate some language families and show that the proposed test solves the problem of false positives. Finally, we demonstrate that the test supports the existence of macro language families such as Nostratic and Macro-Mayan.
[ "Akavarapu, V.S.D.S.Mahesh", "Bhattacharya, Arnab" ]
A Likelihood Ratio Test of Genetic Relationship among Languages
naacl-long.141
Poster
2404.00284
[ "https://github.com/mahesh-ak/phyloval" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.142.bib
https://aclanthology.org/2024.naacl-long.142/
@inproceedings{zhu-etal-2024-pad, title = "{P}a{D}: Program-aided Distillation Can Teach Small Models Reasoning Better than Chain-of-thought Fine-tuning", author = "Zhu, Xuekai and Qi, Biqing and Zhang, Kaiyan and Long, Xinwei and Lin, Zhouhan and Zhou, Bowen", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.142", doi = "10.18653/v1/2024.naacl-long.142", pages = "2571--2597", abstract = "While large language models (LLMs) excel in various natural language processing tasks, their huge size and the inaccessibility of parameters present challenges for practical deployment. Previous studies try to distill task-specific ability from LLMs to smaller models, using data synthesis and chain-of-thought (CoT) fine-tuning. However, synthetic CoT data often contains faulty reasoning, which deteriorates the quality of distillation, especially in reasoning capabilities. In this work, we propose Program-aided Distillation (PaD), which introduces reasoning programs to suppress the errors in distilled data, and thus achieves better distillation quality for reasoning tasks. In PaD, we utilize the reasoning program to substitute the CoT, allowing automated error checking of synthetic data. Further, through error injecting and further training, the small distilling model could iteratively self-refine the reasoning. Moreover, we conduct a step-wise beam search by step-by-step verifying to acquire more exact reasoning chains. We evaluate PaD on arithmetic reasoning, symbolic reasoning, and general ability.Experimental results demonstrate that smaller models using PaD can not only outperform certain LLMs (e.g., LLaMA-1 13B) but also achieve strong improvement over baselines with a significantly smaller scale of parameters and data. The source code is publicly available athttps://github.com/Xuekai-Zhu/pad.", }
While large language models (LLMs) excel in various natural language processing tasks, their huge size and the inaccessibility of parameters present challenges for practical deployment. Previous studies try to distill task-specific ability from LLMs to smaller models, using data synthesis and chain-of-thought (CoT) fine-tuning. However, synthetic CoT data often contains faulty reasoning, which deteriorates the quality of distillation, especially in reasoning capabilities. In this work, we propose Program-aided Distillation (PaD), which introduces reasoning programs to suppress the errors in distilled data, and thus achieves better distillation quality for reasoning tasks. In PaD, we utilize the reasoning program to substitute the CoT, allowing automated error checking of synthetic data. Further, through error injection and further training, the small distilled model can iteratively self-refine its reasoning. Moreover, we conduct a step-wise beam search with step-by-step verification to acquire more accurate reasoning chains. We evaluate PaD on arithmetic reasoning, symbolic reasoning, and general ability. Experimental results demonstrate that smaller models using PaD can not only outperform certain LLMs (e.g., LLaMA-1 13B) but also achieve strong improvement over baselines with a significantly smaller scale of parameters and data. The source code is publicly available at https://github.com/Xuekai-Zhu/pad.
[ "Zhu, Xuekai", "Qi, Biqing", "Zhang, Kaiyan", "Long, Xinwei", "Lin, Zhouhan", "Zhou, Bowen" ]
PaD: Program-aided Distillation Can Teach Small Models Reasoning Better than Chain-of-thought Fine-tuning
naacl-long.142
Oral
2305.13888
[ "https://github.com/xuekai-zhu/pad" ]
https://huggingface.co/papers/2305.13888
2
0
0
5
1
[]
[ "xuekai/pad_train" ]
[]
https://aclanthology.org/2024.naacl-long.143.bib
https://aclanthology.org/2024.naacl-long.143/
@inproceedings{ahuja-etal-2024-megaverse, title = "{MEGAVERSE}: Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks", author = "Ahuja, Sanchit and Aggarwal, Divyanshu and Gumma, Varun and Watts, Ishaan and Sathe, Ashutosh and Ochieng, Millicent and Hada, Rishav and Jain, Prachi and Ahmed, Mohamed and Bali, Kalika and Sitaram, Sunayana", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.143", doi = "10.18653/v1/2024.naacl-long.143", pages = "2598--2637", abstract = "There has been a surge in LLM evaluation research to understand LLM capabilities and limitations. However, much of this research has been confined to English, leaving LLM building and evaluation for non-English languages relatively unexplored. Several new LLMs have been introduced recently, necessitating their evaluation on non-English languages. This study aims to perform a thorough evaluation of the non-English capabilities of SoTA LLMs (GPT-3.5-Turbo, GPT-4, PaLM2, Gemini-Pro, Mistral, Llama2, and Gemma) by comparing them on the same set of multilingual datasets. Our benchmark comprises 22 datasets covering 83 languages, including low-resource African languages. We also include two multimodal datasets in the benchmark and compare the performance of LLaVA models, GPT-4-Vision and Gemini-Pro-Vision. Our experiments show that larger models such as GPT-4, Gemini-Pro and PaLM2 outperform smaller models on various tasks, notably on low-resource languages, with GPT-4 outperforming PaLM2 and Gemini-Pro on more datasets. We also perform a study on data contamination and find that several models are likely to be contaminated with multilingual evaluation benchmarks, necessitating approaches to detect and handle contamination while assessing the multilingual performance of LLMs.", }
There has been a surge in LLM evaluation research to understand LLM capabilities and limitations. However, much of this research has been confined to English, leaving LLM building and evaluation for non-English languages relatively unexplored. Several new LLMs have been introduced recently, necessitating their evaluation on non-English languages. This study aims to perform a thorough evaluation of the non-English capabilities of SoTA LLMs (GPT-3.5-Turbo, GPT-4, PaLM2, Gemini-Pro, Mistral, Llama2, and Gemma) by comparing them on the same set of multilingual datasets. Our benchmark comprises 22 datasets covering 83 languages, including low-resource African languages. We also include two multimodal datasets in the benchmark and compare the performance of LLaVA models, GPT-4-Vision and Gemini-Pro-Vision. Our experiments show that larger models such as GPT-4, Gemini-Pro and PaLM2 outperform smaller models on various tasks, notably on low-resource languages, with GPT-4 outperforming PaLM2 and Gemini-Pro on more datasets. We also perform a study on data contamination and find that several models are likely to be contaminated with multilingual evaluation benchmarks, necessitating approaches to detect and handle contamination while assessing the multilingual performance of LLMs.
[ "Ahuja, Sanchit", "Aggarwal, Divyanshu", "Gumma, Varun", "Watts, Ishaan", "Sathe, Ashutosh", "Ochieng, Millicent", "Hada, Rishav", "Jain, Prachi", "Ahmed, Mohamed", "Bali, Kalika", "Sitaram, Sunayana" ]
MEGAVERSE: Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks
naacl-long.143
Poster
2311.07463
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.144.bib
https://aclanthology.org/2024.naacl-long.144/
@inproceedings{qiu-etal-2024-unlocking, title = "Unlocking Emergent Modularity in Large Language Models", author = "Qiu, Zihan and Huang, Zeyu and Fu, Jie", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.144", doi = "10.18653/v1/2024.naacl-long.144", pages = "2638--2660", abstract = "Modular Neural Networks (MNNs) demonstrate various advantages over monolithic models.Existing MNNs are generally $\textit{explicit}$: their modular architectures are pre-defined, with individual modules expected to implement distinct functions.Recent works reveal that there exists $\textit{implicit}$ modularity in standard pre-trained transformers, namely $\textit{Emergent Modularity}$.They indicate that such modular structures spontaneously exhibit during the early pre-training phase.Despite the benefits of modularity, most Language Models (LMs) are still treated as monolithic models in the pre-train and fine-tune paradigm, with their emergent modularity locked and underutilized.In this work, focusing on unlocking the emergent modularity in LMs, we showcase that standard LMs could be fine-tuned as their Mixture-of-Expert (MoEs) counterparts without introducing any extra parameters. Such MoEs are derived from emergent modularity and are referred to as Emergent MoEs (EMoE).Our experiments demonstrate that fine-tuning EMoE effectively improves downstream in-domain and out-of-domain generalization compared with vanilla fine-tuning.Our analysis and ablation studies further illustrate that it is robust to various configurations and can scale up to Large Language Models (i.e., Llama2-7B and Llama-30B). Code is available at https://github.com/qiuzh20/EMoE.", }
Modular Neural Networks (MNNs) demonstrate various advantages over monolithic models. Existing MNNs are generally $\textit{explicit}$: their modular architectures are pre-defined, with individual modules expected to implement distinct functions. Recent works reveal that there exists $\textit{implicit}$ modularity in standard pre-trained transformers, namely $\textit{Emergent Modularity}$. They indicate that such modular structures spontaneously emerge during the early pre-training phase. Despite the benefits of modularity, most Language Models (LMs) are still treated as monolithic models in the pre-train and fine-tune paradigm, with their emergent modularity locked and underutilized. In this work, focusing on unlocking the emergent modularity in LMs, we showcase that standard LMs can be fine-tuned as their Mixture-of-Experts (MoE) counterparts without introducing any extra parameters. Such MoEs are derived from emergent modularity and are referred to as Emergent MoEs (EMoE). Our experiments demonstrate that fine-tuning EMoE effectively improves downstream in-domain and out-of-domain generalization compared with vanilla fine-tuning. Our analysis and ablation studies further illustrate that it is robust to various configurations and can scale up to Large Language Models (i.e., Llama2-7B and Llama-30B). Code is available at https://github.com/qiuzh20/EMoE.
[ "Qiu, Zihan", "Huang, Zeyu", "Fu, Jie" ]
Unlocking Emergent Modularity in Large Language Models
naacl-long.144
Poster
2310.10908
[ "https://github.com/qiuzh20/emoe" ]
https://huggingface.co/papers/2310.10908
2
1
0
3
1
[]
[]
[]
https://aclanthology.org/2024.naacl-long.145.bib
https://aclanthology.org/2024.naacl-long.145/
@inproceedings{stahl-etal-2024-school, title = "A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality", author = "Stahl, Maja and Michel, Nadine and Kilsbach, Sebastian and Schmidtke, Julian and Rezat, Sara and Wachsmuth, Henning", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.145", doi = "10.18653/v1/2024.naacl-long.145", pages = "2661--2674", abstract = "Learning argumentative writing is challenging. Besides writing fundamentals such as syntax and grammar, learners must select and arrange argument components meaningfully to create high-quality essays. To support argumentative writing computationally, one step is to mine the argumentative structure. When combined with automatic essay scoring, interactions of the argumentative structure and quality scores can be exploited for comprehensive writing support. Although studies have shown the usefulness of using information about the argumentative structure for essay scoring, no argument mining corpus with ground-truth essay quality annotations has been published yet. Moreover, none of the existing corpora contain essays written by school students specifically. To fill this research gap, we present a German corpus of 1,320 essays from school students of two age groups. Each essay has been manually annotated for argumentative structure and quality on multiple levels of granularity. We propose baseline approaches to argument mining and essay scoring, and we analyze interactions between both tasks, thereby laying the ground for quality-oriented argumentative writing support.", }
Learning argumentative writing is challenging. Besides writing fundamentals such as syntax and grammar, learners must select and arrange argument components meaningfully to create high-quality essays. To support argumentative writing computationally, one step is to mine the argumentative structure. When combined with automatic essay scoring, interactions of the argumentative structure and quality scores can be exploited for comprehensive writing support. Although studies have shown the usefulness of using information about the argumentative structure for essay scoring, no argument mining corpus with ground-truth essay quality annotations has been published yet. Moreover, none of the existing corpora contain essays written by school students specifically. To fill this research gap, we present a German corpus of 1,320 essays from school students of two age groups. Each essay has been manually annotated for argumentative structure and quality on multiple levels of granularity. We propose baseline approaches to argument mining and essay scoring, and we analyze interactions between both tasks, thereby laying the ground for quality-oriented argumentative writing support.
[ "Stahl, Maja", "Michel, Nadine", "Kilsbach, Sebastian", "Schmidtke, Julian", "Rezat, Sara", "Wachsmuth, Henning" ]
A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality
naacl-long.145
Poster
2404.02529
[ "https://github.com/webis-de/naacl-24" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.146.bib
https://aclanthology.org/2024.naacl-long.146/
@inproceedings{erk-apidianaki-2024-adjusting, title = "Adjusting Interpretable Dimensions in Embedding Space with Human Judgments", author = "Erk, Katrin and Apidianaki, Marianna", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.146", doi = "10.18653/v1/2024.naacl-long.146", pages = "2675--2686", abstract = "Embedding spaces contain interpretable dimensions indicating gender, formality in style, or even object properties. This has been observed multiple times. Such interpretable dimensions are becoming valuable tools in different areas of study, from social science to neuroscience. The standard way to compute these dimensions uses contrasting seed words and computes difference vectors over them. This is simple but does not always work well. We combine seed-based vectors with guidance from human ratings of where words fall along a specific dimension, and evaluate on predicting both object properties like size and danger, and the stylistic properties of formality and complexity. We obtain interpretable dimensions with markedly better performance especially in cases where seed-based dimensions do not work well.", }
Embedding spaces contain interpretable dimensions indicating gender, formality in style, or even object properties. This has been observed multiple times. Such interpretable dimensions are becoming valuable tools in different areas of study, from social science to neuroscience. The standard way to compute these dimensions uses contrasting seed words and computes difference vectors over them. This is simple but does not always work well. We combine seed-based vectors with guidance from human ratings of where words fall along a specific dimension, and evaluate on predicting both object properties like size and danger, and the stylistic properties of formality and complexity. We obtain interpretable dimensions with markedly better performance especially in cases where seed-based dimensions do not work well.
[ "Erk, Katrin", "Apidianaki, Marianna" ]
Adjusting Interpretable Dimensions in Embedding Space with Human Judgments
naacl-long.146
Poster
2404.02619
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.147.bib
https://aclanthology.org/2024.naacl-long.147/
@inproceedings{zuo-etal-2024-patenteval, title = "{P}atent{E}val: Understanding Errors in Patent Generation", author = "Zuo, You and Gerdes, Kim and Clergerie, {\'E}ric and Sagot, Beno{\^\i}t", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.147", doi = "10.18653/v1/2024.naacl-long.147", pages = "2687--2710", abstract = "In this work, we introduce a comprehensive error typology specifically designed for evaluating two distinct tasks in machine-generated patent texts: claims-to-abstract generation, and the generation of the next claim given previous ones. We have also developed a benchmark, PatentEval, for systematically assessing language models in this context. Our study includes a comparative analysis, annotated by humans, of various models. These range from those specifically adapted during training for tasks within the patent domain to the latest general-purpose large language models (LLMs). Furthermore, we explored and evaluated some metrics to approximate human judgments in patent text evaluation, analyzing the extent to which these metrics align with expert assessments. These approaches provide valuable insights into the capabilities and limitations of current language models in the specialized field of patent text generation.", }
In this work, we introduce a comprehensive error typology specifically designed for evaluating two distinct tasks in machine-generated patent texts: claims-to-abstract generation, and the generation of the next claim given previous ones. We have also developed a benchmark, PatentEval, for systematically assessing language models in this context. Our study includes a comparative analysis, annotated by humans, of various models. These range from those specifically adapted during training for tasks within the patent domain to the latest general-purpose large language models (LLMs). Furthermore, we explored and evaluated some metrics to approximate human judgments in patent text evaluation, analyzing the extent to which these metrics align with expert assessments. These approaches provide valuable insights into the capabilities and limitations of current language models in the specialized field of patent text generation.
[ "Zuo, You", "Gerdes, Kim", "Clergerie, {\\'E}ric", "Sagot, Beno{\\^\\i}t" ]
PatentEval: Understanding Errors in Patent Generation
naacl-long.147
Poster
2406.06589
[ "https://github.com/zoeyou/patenteval" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.148.bib
https://aclanthology.org/2024.naacl-long.148/
@inproceedings{koneru-etal-2024-contextual, title = "Contextual Refinement of Translations: Large Language Models for Sentence and Document-Level Post-Editing", author = "Koneru, Sai and Exel, Miriam and Huck, Matthias and Niehues, Jan", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.148", doi = "10.18653/v1/2024.naacl-long.148", pages = "2711--2725", abstract = "Large language models (LLMs) have demonstrated considerable success in various natural language processing tasks, but open-source LLMs have yet to attain state-of-the-art performance in Neural Machine Translation (NMT). Nevertheless, their significant performance in tasks demanding a broad understanding and contextual processing shows their potential for translation. To exploit these abilities, we investigate using LLMs for MT and explore recent parameter-efficient fine-tuning techniques. Surprisingly, our initial experiments found that fine-tuning with Q-LoRA for translation purposes led to performance improvements in terms of BLEU but degradation in COMET compared to in-context learning. To overcome this, we propose an alternative approach: adapting LLMs as Automatic Post-Editors (APE) rather than direct translators. Building on the ability of the LLM to handle long sequences, we also propose extending our approach to document-level translation. We show that leveraging Low-Rank-Adapter fine-tuning for APE can yield significant improvements across both sentence and document-level metrics while generalizing to out-of-domain data. Most notably, we achieve a state-of-the-art accuracy rate of 88.7{\%} on the ContraPro test set, which assesses the model{'}s ability to resolve pronoun ambiguities when translating from English to German. Lastly, during manual post-editing for document-level translation, the source sentences are iteratively annotated, which can be used to refine further translations in the document. Here, we demonstrate that leveraging human corrections can significantly reduce the number of edits required for subsequent translations.", }
Large language models (LLMs) have demonstrated considerable success in various natural language processing tasks, but open-source LLMs have yet to attain state-of-the-art performance in Neural Machine Translation (NMT). Nevertheless, their significant performance in tasks demanding a broad understanding and contextual processing shows their potential for translation. To exploit these abilities, we investigate using LLMs for MT and explore recent parameter-efficient fine-tuning techniques. Surprisingly, our initial experiments found that fine-tuning with Q-LoRA for translation purposes led to performance improvements in terms of BLEU but degradation in COMET compared to in-context learning. To overcome this, we propose an alternative approach: adapting LLMs as Automatic Post-Editors (APE) rather than direct translators. Building on the ability of the LLM to handle long sequences, we also propose extending our approach to document-level translation. We show that leveraging Low-Rank-Adapter fine-tuning for APE can yield significant improvements across both sentence and document-level metrics while generalizing to out-of-domain data. Most notably, we achieve a state-of-the-art accuracy rate of 88.7{\%} on the ContraPro test set, which assesses the model{'}s ability to resolve pronoun ambiguities when translating from English to German. Lastly, during manual post-editing for document-level translation, the source sentences are iteratively annotated, which can be used to refine further translations in the document. Here, we demonstrate that leveraging human corrections can significantly reduce the number of edits required for subsequent translations.
[ "Koneru, Sai", "Exel, Miriam", "Huck, Matthias", "Niehues, Jan" ]
Contextual Refinement of Translations: Large Language Models for Sentence and Document-Level Post-Editing
naacl-long.148
Poster
2310.14855
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.149.bib
https://aclanthology.org/2024.naacl-long.149/
@inproceedings{jia-li-2024-metaphor, title = "Metaphor Detection with Context Enhancement and Curriculum Learning", author = "Jia, Kaidi and Li, Rongsheng", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.149", doi = "10.18653/v1/2024.naacl-long.149", pages = "2726--2737", abstract = "Metaphor detection is a challenging task for natural language processing (NLP) systems. Previous works failed to sufficiently utilize the internal and external semantic relationships between target words and their context. Furthermore, they have faced challenges in tackling the problem of data sparseness due to the very limited available training data. To address these two challenges, we propose a novel model called MiceCL. By leveraging the difference between the literal meaning of the target word and the meaning of the sentence as the sentence external difference, MiceCL can better handle the semantic relationships. Additionally, we propose a curriculum learning framework for automatically assessing difficulty of the sentence with a pre-trained model. By starting from easy examples and gradually progressing to more difficult ones, we can ensure that the model will not deal with complex data when its ability is weak so that to avoid wasting limited data. Experimental results demonstrate that MiceCL achieves competitive performance across multiple datasets, with a significantly improved convergence speed compared to other models.", }
Metaphor detection is a challenging task for natural language processing (NLP) systems. Previous works failed to sufficiently utilize the internal and external semantic relationships between target words and their context. Furthermore, they have faced challenges in tackling the problem of data sparseness due to the very limited available training data. To address these two challenges, we propose a novel model called MiceCL. By leveraging the difference between the literal meaning of the target word and the meaning of the sentence as the sentence-external difference, MiceCL can better handle the semantic relationships. Additionally, we propose a curriculum learning framework for automatically assessing the difficulty of a sentence with a pre-trained model. By starting from easy examples and gradually progressing to more difficult ones, we ensure that the model does not have to deal with complex data while its ability is still weak, thereby avoiding wasting the limited data. Experimental results demonstrate that MiceCL achieves competitive performance across multiple datasets, with a significantly improved convergence speed compared to other models.
[ "Jia, Kaidi", "Li, Rongsheng" ]
Metaphor Detection with Context Enhancement and Curriculum Learning
naacl-long.149
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.150.bib
https://aclanthology.org/2024.naacl-long.150/
@inproceedings{liu-etal-2024-causes, title = "What Causes the Failure of Explicit to Implicit Discourse Relation Recognition?", author = "Liu, Wei and Wan, Stephen and Strube, Michael", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.150", doi = "10.18653/v1/2024.naacl-long.150", pages = "2738--2753", abstract = "We consider an unanswered question in the discourse processing community: why do relation classifiers trained on explicit examples (with connectives removed) perform poorly in real implicit scenarios? Prior work claimed this is due to linguistic dissimilarity between explicit and implicit examples but provided no empirical evidence. In this study, we show that one cause for such failure is a label shift after connectives are eliminated. Specifically, we find that the discourse relations expressed by some explicit instances will change when connectives disappear. Unlike previous work manually analyzing a few examples, we present empirical evidence at the corpus level to prove the existence of such shift. Then, we analyze why label shift occurs by considering factors such as the syntactic role played by connectives, ambiguity of connectives, and more. Finally, we investigate two strategies to mitigate the label shift: filtering out noisy data and joint learning with connectives. Experiments on PDTB 2.0, PDTB 3.0, and the GUM dataset demonstrate that classifiers trained with our strategies outperform strong baselines.", }
We consider an unanswered question in the discourse processing community: why do relation classifiers trained on explicit examples (with connectives removed) perform poorly in real implicit scenarios? Prior work claimed this is due to linguistic dissimilarity between explicit and implicit examples but provided no empirical evidence. In this study, we show that one cause for such failure is a label shift after connectives are eliminated. Specifically, we find that the discourse relations expressed by some explicit instances will change when connectives disappear. Unlike previous work manually analyzing a few examples, we present empirical evidence at the corpus level to prove the existence of such shift. Then, we analyze why label shift occurs by considering factors such as the syntactic role played by connectives, ambiguity of connectives, and more. Finally, we investigate two strategies to mitigate the label shift: filtering out noisy data and joint learning with connectives. Experiments on PDTB 2.0, PDTB 3.0, and the GUM dataset demonstrate that classifiers trained with our strategies outperform strong baselines.
[ "Liu, Wei", "Wan, Stephen", "Strube, Michael" ]
What Causes the Failure of Explicit to Implicit Discourse Relation Recognition?
naacl-long.150
Oral
2404.00999
[ "https://github.com/liuwei1206/exp2imp" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.151.bib
https://aclanthology.org/2024.naacl-long.151/
@inproceedings{arora-etal-2024-universlu, title = "{U}niver{SLU}: Universal Spoken Language Understanding for Diverse Tasks with Natural Language Instructions", author = "Arora, Siddhant and Futami, Hayato and Jung, Jee-weon and Peng, Yifan and Sharma, Roshan and Kashiwagi, Yosuke and Tsunoo, Emiru and Livescu, Karen and Watanabe, Shinji", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.151", doi = "10.18653/v1/2024.naacl-long.151", pages = "2754--2774", abstract = "Recent studies leverage large language models with multi-tasking capabilities, using natural language prompts to guide the model{'}s behavior and surpassing performance of task-specific models. Motivated by this, we ask: can we build a single model that jointly performs various spoken language understanding (SLU) tasks? We start by adapting a pre-trained automatic speech recognition model to additional tasks using single-token task specifiers. We enhance this approach through instruction tuning, i.e., finetuning by describing the task using natural language instructions followed by the list of label options. Our approach can generalize to new task descriptions for the seen tasks during inference, thereby enhancing its user-friendliness. We demonstrate the efficacy of our single multi-task learning model {``}UniverSLU{''} for 12 speech classification and sequence generation task types spanning 17 datasets and 9 languages. On most tasks, UniverSLU achieves competitive performance and often even surpasses task-specific models. Additionally, we assess the zero-shot capabilities, finding that the model generalizes to new datasets and languages for seen task types.", }
Recent studies leverage large language models with multi-tasking capabilities, using natural language prompts to guide the model{'}s behavior and surpassing performance of task-specific models. Motivated by this, we ask: can we build a single model that jointly performs various spoken language understanding (SLU) tasks? We start by adapting a pre-trained automatic speech recognition model to additional tasks using single-token task specifiers. We enhance this approach through instruction tuning, i.e., finetuning by describing the task using natural language instructions followed by the list of label options. Our approach can generalize to new task descriptions for the seen tasks during inference, thereby enhancing its user-friendliness. We demonstrate the efficacy of our single multi-task learning model {``}UniverSLU{''} for 12 speech classification and sequence generation task types spanning 17 datasets and 9 languages. On most tasks, UniverSLU achieves competitive performance and often even surpasses task-specific models. Additionally, we assess the zero-shot capabilities, finding that the model generalizes to new datasets and languages for seen task types.
[ "Arora, Siddhant", "Futami, Hayato", "Jung, Jee-weon", "Peng, Yifan", "Sharma, Roshan", "Kashiwagi, Yosuke", "Tsunoo, Emiru", "Livescu, Karen", "Watanabe, Shinji" ]
UniverSLU: Universal Spoken Language Understanding for Diverse Tasks with Natural Language Instructions
naacl-long.151
Oral
2310.02973
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.152.bib
https://aclanthology.org/2024.naacl-long.152/
@inproceedings{mo-etal-2024-trustworthy, title = "How Trustworthy are Open-Source {LLM}s? An Assessment under Malicious Demonstrations Shows their Vulnerabilities", author = "Mo, Lingbo and Wang, Boshi and Chen, Muhao and Sun, Huan", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.152", doi = "10.18653/v1/2024.naacl-long.152", pages = "2775--2792", abstract = "The rapid progress in open-source Large Language Models (LLMs) is significantly driving AI development forward. However, there is still a limited understanding of their trustworthiness. Deploying these models at scale without sufficient trustworthiness can pose significant risks, highlighting the need to uncover these issues promptly. In this work, we conduct an adversarial assessment of open-source LLMs on trustworthiness, scrutinizing them across eight different aspects including toxicity, stereotypes, ethics, hallucination, fairness, sycophancy, privacy, and robustness against adversarial demonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU) prompting strategy by incorporating carefully crafted malicious demonstrations for trustworthiness attack. Our extensive experiments encompass recent and representative series of open-source LLMs, including Vicuna, MPT, Falcon, Mistral, and Llama 2. The empirical outcomes underscore the efficacy of our attack strategy across diverse aspects. More interestingly, our result analysis reveals that models with superior performance in general NLP tasks do not always have greater trustworthiness; in fact, larger models can be more vulnerable to attacks. Additionally, models that have undergone instruction tuning, focusing on instruction following, tend to be more susceptible, although fine-tuning LLMs for safety alignment proves effective in mitigating adversarial trustworthiness attacks.", }
The rapid progress in open-source Large Language Models (LLMs) is significantly driving AI development forward. However, there is still a limited understanding of their trustworthiness. Deploying these models at scale without sufficient trustworthiness can pose significant risks, highlighting the need to uncover these issues promptly. In this work, we conduct an adversarial assessment of open-source LLMs on trustworthiness, scrutinizing them across eight different aspects including toxicity, stereotypes, ethics, hallucination, fairness, sycophancy, privacy, and robustness against adversarial demonstrations. We propose advCoU, an extended Chain of Utterances-based (CoU) prompting strategy by incorporating carefully crafted malicious demonstrations for trustworthiness attack. Our extensive experiments encompass recent and representative series of open-source LLMs, including Vicuna, MPT, Falcon, Mistral, and Llama 2. The empirical outcomes underscore the efficacy of our attack strategy across diverse aspects. More interestingly, our result analysis reveals that models with superior performance in general NLP tasks do not always have greater trustworthiness; in fact, larger models can be more vulnerable to attacks. Additionally, models that have undergone instruction tuning, focusing on instruction following, tend to be more susceptible, although fine-tuning LLMs for safety alignment proves effective in mitigating adversarial trustworthiness attacks.
[ "Mo, Lingbo", "Wang, Boshi", "Chen, Muhao", "Sun, Huan" ]
How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities
naacl-long.152
Poster
2311.09447
[ "https://github.com/osu-nlp-group/eval-llm-trust" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.153.bib
https://aclanthology.org/2024.naacl-long.153/
@inproceedings{zhou-etal-2024-paraphrase, title = "Paraphrase and Solve: Exploring and Exploiting the Impact of Surface Form on Mathematical Reasoning in Large Language Models", author = "Zhou, Yue and Zhu, Yada and Antognini, Diego and Kim, Yoon and Zhang, Yang", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.153", doi = "10.18653/v1/2024.naacl-long.153", pages = "2793--2804", abstract = "This paper studies the relationship between the surface form of a mathematical problem and its solvability by large language models. We find that subtle alterations in the surface form can significantly impact the answer distribution and the solve rate, exposing the language model{'}s lack of robustness and sensitivity to the surface form in reasoning through complex problems. To improve mathematical reasoning performance, we propose Self-Consistency-over-Paraphrases (SCoP), which diversifies reasoning paths from specific surface forms of the problem. We evaluate our approach on four mathematics reasoning benchmarks over three large language models and show that SCoP improves mathematical reasoning performance over vanilla self-consistency, particularly for problems initially deemed unsolvable. Finally, we provide additional experiments and discussion regarding problem difficulty and surface forms, including cross-model difficulty agreement and paraphrasing transferability, and Variance of Variations (VOV) for language model evaluation.", }
This paper studies the relationship between the surface form of a mathematical problem and its solvability by large language models. We find that subtle alterations in the surface form can significantly impact the answer distribution and the solve rate, exposing the language model{'}s lack of robustness and sensitivity to the surface form in reasoning through complex problems. To improve mathematical reasoning performance, we propose Self-Consistency-over-Paraphrases (SCoP), which diversifies reasoning paths from specific surface forms of the problem. We evaluate our approach on four mathematics reasoning benchmarks over three large language models and show that SCoP improves mathematical reasoning performance over vanilla self-consistency, particularly for problems initially deemed unsolvable. Finally, we provide additional experiments and discussion regarding problem difficulty and surface forms, including cross-model difficulty agreement and paraphrasing transferability, and Variance of Variations (VOV) for language model evaluation.
[ "Zhou, Yue", "Zhu, Yada", "Antognini, Diego", "Kim, Yoon", "Zhang, Yang" ]
Paraphrase and Solve: Exploring and Exploiting the Impact of Surface Form on Mathematical Reasoning in Large Language Models
naacl-long.153
Poster
2404.11500
[ "https://github.com/yue-llm-pit/scop" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.154.bib
https://aclanthology.org/2024.naacl-long.154/
@inproceedings{jiang-etal-2024-trisum, title = "{T}ri{S}um: Learning Summarization Ability from Large Language Models with Structured Rationale", author = "Jiang, Pengcheng and Xiao, Cao and Wang, Zifeng and Bhatia, Parminder and Sun, Jimeng and Han, Jiawei", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.154", doi = "10.18653/v1/2024.naacl-long.154", pages = "2805--2819", abstract = "The advent of large language models (LLMs) has significantly advanced natural language processing tasks like text summarization. However, their large size and computational demands, coupled with privacy concerns in data transmission, limit their use in resource-constrained and privacy-centric settings. To overcome this, we introduce TriSum, a framework for distilling LLMs{'} text summarization abilities into a compact, local model. Initially, LLMs extract a set of aspect-triple rationales and summaries, which are refined using a dual-scoring method for quality. Next, a smaller local model is trained with these tasks, employing a curriculum learning strategy that evolves from simple to complex tasks. Our method enhances local model performance on various benchmarks (CNN/DailyMail, XSum, and ClinicalTrial), outperforming baselines by 4.5{\%}, 8.5{\%}, and 7.4{\%}, respectively. It also improves interpretability by providing insights into the summarization rationale.", }
The advent of large language models (LLMs) has significantly advanced natural language processing tasks like text summarization. However, their large size and computational demands, coupled with privacy concerns in data transmission, limit their use in resource-constrained and privacy-centric settings. To overcome this, we introduce TriSum, a framework for distilling LLMs{'} text summarization abilities into a compact, local model. Initially, LLMs extract a set of aspect-triple rationales and summaries, which are refined using a dual-scoring method for quality. Next, a smaller local model is trained with these tasks, employing a curriculum learning strategy that evolves from simple to complex tasks. Our method enhances local model performance on various benchmarks (CNN/DailyMail, XSum, and ClinicalTrial), outperforming baselines by 4.5{\%}, 8.5{\%}, and 7.4{\%}, respectively. It also improves interpretability by providing insights into the summarization rationale.
[ "Jiang, Pengcheng", "Xiao, Cao", "Wang, Zifeng", "Bhatia, Parminder", "Sun, Jimeng", "Han, Jiawei" ]
TriSum: Learning Summarization Ability from Large Language Models with Structured Rationale
naacl-long.154
Poster
2403.10351
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.155.bib
https://aclanthology.org/2024.naacl-long.155/
@inproceedings{jiang-etal-2024-genres, title = "{G}en{RES}: Rethinking Evaluation for Generative Relation Extraction in the Era of Large Language Models", author = "Jiang, Pengcheng and Lin, Jiacheng and Wang, Zifeng and Sun, Jimeng and Han, Jiawei", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.155", doi = "10.18653/v1/2024.naacl-long.155", pages = "2820--2837", abstract = "The field of relation extraction (RE) is experiencing a notable shift towards generative relation extraction (GRE), leveraging the capabilities of large language models (LLMs). However, we discovered that traditional relation extraction (RE) metrics like precision and recall fall short in evaluating GRE methods. This shortfall arises because these metrics rely on exact matching with human-annotated reference relations, while GRE methods often produce diverse and semantically accurate relations that differ from the references. To fill this gap, we introduce GenRES for a multi-dimensional assessment in terms of the topic similarity, uniqueness, granularity, factualness, and completeness of the GRE results. With GenRES, we empirically identified that (1) precision/recall fails to justify the performance of GRE methods; (2) human-annotated referential relations can be incomplete; (3) prompting LLMs with a fixed set of relations or entities can cause hallucinations. Next, we conducted a human evaluation of GRE methods that shows GenRES is consistent with human preferences for RE quality. Last, we made a comprehensive evaluation of fourteen leading LLMs using GenRES across document, bag, and sentence level RE datasets, respectively, to set the benchmark for future research in GRE", }
The field of relation extraction (RE) is experiencing a notable shift towards generative relation extraction (GRE), leveraging the capabilities of large language models (LLMs). However, we discovered that traditional relation extraction (RE) metrics like precision and recall fall short in evaluating GRE methods. This shortfall arises because these metrics rely on exact matching with human-annotated reference relations, while GRE methods often produce diverse and semantically accurate relations that differ from the references. To fill this gap, we introduce GenRES for a multi-dimensional assessment in terms of the topic similarity, uniqueness, granularity, factualness, and completeness of the GRE results. With GenRES, we empirically identified that (1) precision/recall fails to justify the performance of GRE methods; (2) human-annotated referential relations can be incomplete; (3) prompting LLMs with a fixed set of relations or entities can cause hallucinations. Next, we conducted a human evaluation of GRE methods that shows GenRES is consistent with human preferences for RE quality. Last, we made a comprehensive evaluation of fourteen leading LLMs using GenRES across document, bag, and sentence level RE datasets, respectively, to set the benchmark for future research in GRE.
[ "Jiang, Pengcheng", "Lin, Jiacheng", "Wang, Zifeng", "Sun, Jimeng", "Han, Jiawei" ]
GenRES: Rethinking Evaluation for Generative Relation Extraction in the Era of Large Language Models
naacl-long.155
Poster
2402.10744
[ "https://github.com/pat-jj/genres" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.156.bib
https://aclanthology.org/2024.naacl-long.156/
@inproceedings{lou-etal-2024-curated, title = "Curated Datasets and Neural Models for Machine Translation of Informal Registers between {M}ayan and {S}panish Vernaculars", author = "Lou, Andr{\'e}s and P{\'e}rez-Ortiz, Juan Antonio and S{\'a}nchez-Mart{\'\i}nez, Felipe and S{\'a}nchez-Cartagena, V{\'\i}ctor", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.156", doi = "10.18653/v1/2024.naacl-long.156", pages = "2838--2850", abstract = "The Mayan languages comprise a language family with an ancient history, millions of speakers, and immense cultural value, that, nevertheless, remains severely underrepresented in terms of resources and global exposure. In this paper we develop, curate, and publicly release a set of corpora in several Mayan languages spoken in Guatemala and Southern Mexico, which we call MayanV. The datasets are parallel with Spanish, the dominant language of the region, and are taken from official native sources focused on representing informal, day-to-day, and non-domain-specific language. As such, and according to our dialectometric analysis, they differ in register from most other available resources. Additionally, we present neural machine translation models, trained on as many resources and Mayan languages as possible, and evaluated exclusively on our datasets. We observe lexical divergences between the dialects of Spanish in our resources and the more widespread written standard of Spanish, and that resources other than the ones we present do not seem to improve translation performance, indicating that many such resources may not accurately capture common, real-life language usage. The MayanV dataset is available at https://github.com/transducens/mayanv.", }
The Mayan languages comprise a language family with an ancient history, millions of speakers, and immense cultural value, that, nevertheless, remains severely underrepresented in terms of resources and global exposure. In this paper we develop, curate, and publicly release a set of corpora in several Mayan languages spoken in Guatemala and Southern Mexico, which we call MayanV. The datasets are parallel with Spanish, the dominant language of the region, and are taken from official native sources focused on representing informal, day-to-day, and non-domain-specific language. As such, and according to our dialectometric analysis, they differ in register from most other available resources. Additionally, we present neural machine translation models, trained on as many resources and Mayan languages as possible, and evaluated exclusively on our datasets. We observe lexical divergences between the dialects of Spanish in our resources and the more widespread written standard of Spanish, and that resources other than the ones we present do not seem to improve translation performance, indicating that many such resources may not accurately capture common, real-life language usage. The MayanV dataset is available at https://github.com/transducens/mayanv.
[ "Lou, Andr{\\'e}s", "P{\\'e}rez-Ortiz, Juan Antonio", "S{\\'a}nchez-Mart{\\'\\i}nez, Felipe", "S{\\'a}nchez-Cartagena, V{\\'\\i}ctor" ]
Curated Datasets and Neural Models for Machine Translation of Informal Registers between Mayan and Spanish Vernaculars
naacl-long.156
Poster
2404.07673
[ "https://github.com/transducens/mayanv" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.157.bib
https://aclanthology.org/2024.naacl-long.157/
@inproceedings{liu-dorr-2024-effect, title = "The Effect of Data Partitioning Strategy on Model Generalizability: A Case Study of Morphological Segmentation", author = "Liu, Zoey and Dorr, Bonnie", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.157", doi = "10.18653/v1/2024.naacl-long.157", pages = "2851--2864", abstract = "Recent work to enhance data partitioning strategies for more realistic model evaluation face challenges in providing a clear optimal choice. This study addresses these challenges, focusing on morphological segmentation and synthesizing limitations related to language diversity, adoption of multiple datasets and splits, and detailed model comparisons. Our study leverages data from 19 languages, including ten indigenous or endangered languages across 10 language families with diverse morphological systems (polysynthetic, fusional, and agglutinative) and different degrees of data availability. We conduct large-scale experimentation with varying sized combinations of training and evaluation sets as well as new test data. Our results show that, when faced with new test data: (1) models trained from random splits are able to achieve higher numerical scores; (2) model rankings derived from random splits tend to generalize more consistently.", }
Recent work to enhance data partitioning strategies for more realistic model evaluation faces challenges in providing a clear optimal choice. This study addresses these challenges, focusing on morphological segmentation and synthesizing limitations related to language diversity, adoption of multiple datasets and splits, and detailed model comparisons. Our study leverages data from 19 languages, including ten indigenous or endangered languages across 10 language families with diverse morphological systems (polysynthetic, fusional, and agglutinative) and different degrees of data availability. We conduct large-scale experimentation with varying-sized combinations of training and evaluation sets as well as new test data. Our results show that, when faced with new test data: (1) models trained from random splits are able to achieve higher numerical scores; (2) model rankings derived from random splits tend to generalize more consistently.
[ "Liu, Zoey", "Dorr, Bonnie" ]
The Effect of Data Partitioning Strategy on Model Generalizability: A Case Study of Morphological Segmentation
naacl-long.157
Poster
2404.09371
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.158.bib
https://aclanthology.org/2024.naacl-long.158/
@inproceedings{bhattacharya-etal-2024-measuring, title = "Measuring Entrainment in Spontaneous Code-switched Speech", author = "Bhattacharya, Debasmita and Ding, Siying and Nguyen, Alayna and Hirschberg, Julia", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.158", doi = "10.18653/v1/2024.naacl-long.158", pages = "2865--2876", abstract = "It is well-known that speakers who entrain to one another have more successful conversations than those who do not. Previous research has shown that interlocutors entrain on linguistic features in both written and spoken $\emph{monolingual}$ domains. More recent work on $\emph{code-switched}$ communication has also shown preliminary evidence of entrainment on certain aspects of code-switching (CSW). However, such studies of entrainment in code-switched domains have been extremely few and restricted to human-machine textual interactions. Our work studies code-switched spontaneous speech between humans, finding that (1) patterns of written and spoken entrainment in monolingual settings largely generalize to code-switched settings, and (2) some patterns of entrainment on code-switching in dialogue agent-generated text generalize to spontaneous code-switched speech. Our findings give rise to important implications for the potentially {``}universal{''} nature of entrainment as a communication phenomenon, and potential applications in inclusive and interactive speech technology.", }
It is well-known that speakers who entrain to one another have more successful conversations than those who do not. Previous research has shown that interlocutors entrain on linguistic features in both written and spoken $\emph{monolingual}$ domains. More recent work on $\emph{code-switched}$ communication has also shown preliminary evidence of entrainment on certain aspects of code-switching (CSW). However, such studies of entrainment in code-switched domains have been extremely few and restricted to human-machine textual interactions. Our work studies code-switched spontaneous speech between humans, finding that (1) patterns of written and spoken entrainment in monolingual settings largely generalize to code-switched settings, and (2) some patterns of entrainment on code-switching in dialogue agent-generated text generalize to spontaneous code-switched speech. Our findings give rise to important implications for the potentially {``}universal{''} nature of entrainment as a communication phenomenon, and potential applications in inclusive and interactive speech technology.
[ "Bhattacharya, Debasmita", "Ding, Siying", "Nguyen, Alayna", "Hirschberg, Julia" ]
Measuring Entrainment in Spontaneous Code-switched Speech
naacl-long.158
Poster
2311.07703
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.159.bib
https://aclanthology.org/2024.naacl-long.159/
@inproceedings{sadeddine-etal-2024-survey, title = "A Survey of Meaning Representations {--} From Theory to Practical Utility", author = "Sadeddine, Zacchary and Opitz, Juri and Suchanek, Fabian", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.159", doi = "10.18653/v1/2024.naacl-long.159", pages = "2877--2892", abstract = "Symbolic meaning representations of natural language text have been studied since at least the 1960s. With the availability of large annotated corpora, and more powerful machine learning tools, the field has recently seen several new developments. In this survey, we study today{'}s most prominent Meaning Representation Frameworks. We shed light on their theoretical properties, as well as on their practical research environment, i.e., on datasets, parsers, applications, and future challenges.", }
Symbolic meaning representations of natural language text have been studied since at least the 1960s. With the availability of large annotated corpora, and more powerful machine learning tools, the field has recently seen several new developments. In this survey, we study today{'}s most prominent Meaning Representation Frameworks. We shed light on their theoretical properties, as well as on their practical research environment, i.e., on datasets, parsers, applications, and future challenges.
[ "Sadeddine, Zacchary", "Opitz, Juri", "Suchanek, Fabian" ]
A Survey of Meaning Representations – From Theory to Practical Utility
naacl-long.159
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.160.bib
https://aclanthology.org/2024.naacl-long.160/
@inproceedings{zhao-etal-2024-mitigating, title = "Mitigating Language-Level Performance Disparity in m{PLM}s via Teacher Language Selection and Cross-lingual Self-Distillation", author = "Zhao, Haozhe and Cai, Zefan and Si, Shuzheng and Chen, Liang and He, Yufeng and An, Kaikai and Chang, Baobao", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.160", doi = "10.18653/v1/2024.naacl-long.160", pages = "2893--2907", abstract = "Large-scale multilingual Pretrained Language Models (mPLMs) yield impressive performance on cross-language tasks, yet significant performance disparities exist across different languages within the same mPLM. Previous studies endeavored to narrow these disparities by supervise fine-tuning the mPLMs with multilingual data.However, obtaining labeled multilingual data is time-consuming, and fine-tuning mPLM with limited labeled multilingual data merely encapsulates the knowledge specific to the labeled data.Therefore, we introduce **ALSACE** to leverage the learned knowledge from the well-performing languages to guide under-performing ones within the same mPLM, eliminating the need for additional labeled multilingual data. Experiments show that ALSACE effectively mitigates language-level performance disparity across various mPLMs while showing the competitive performance on different multilingual NLU tasks, ranging from full resource to limited resource settings. The code for our approach is available at https://github.com/pkunlp-icler/ALSACE.", }
Large-scale multilingual Pretrained Language Models (mPLMs) yield impressive performance on cross-language tasks, yet significant performance disparities exist across different languages within the same mPLM. Previous studies endeavored to narrow these disparities by supervised fine-tuning of the mPLMs with multilingual data. However, obtaining labeled multilingual data is time-consuming, and fine-tuning an mPLM with limited labeled multilingual data merely encapsulates the knowledge specific to the labeled data. Therefore, we introduce **ALSACE** to leverage the knowledge learned from the well-performing languages to guide under-performing ones within the same mPLM, eliminating the need for additional labeled multilingual data. Experiments show that ALSACE effectively mitigates language-level performance disparity across various mPLMs while showing competitive performance on different multilingual NLU tasks, ranging from full-resource to limited-resource settings. The code for our approach is available at https://github.com/pkunlp-icler/ALSACE.
[ "Zhao, Haozhe", "Cai, Zefan", "Si, Shuzheng", "Chen, Liang", "He, Yufeng", "An, Kaikai", "Chang, Baobao" ]
Mitigating Language-Level Performance Disparity in mPLMs via Teacher Language Selection and Cross-lingual Self-Distillation
naacl-long.160
Poster
2404.08491
[ "https://github.com/pkunlp-icler/alsace" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.161.bib
https://aclanthology.org/2024.naacl-long.161/
@inproceedings{patel-etal-2024-evaluating, title = "Evaluating In-Context Learning of Libraries for Code Generation", author = "Patel, Arkil and Reddy, Siva and Bahdanau, Dzmitry and Dasigi, Pradeep", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.161", doi = "10.18653/v1/2024.naacl-long.161", pages = "2908--2926", abstract = "Contemporary Large Language Models (LLMs) exhibit a high degree of code generation and comprehension capability. A particularly promising area is their ability to interpret code modules from unfamiliar libraries for solving user-instructed tasks. Recent work has shown that large proprietary LLMs can learn novel library usage in-context from demonstrations. These results raise several open questions: whether demonstrations of library usage is required, whether smaller (and more open) models also possess such capabilities, etc. In this work, we take a broader approach by systematically evaluating a diverse array of LLMs across three scenarios reflecting varying levels of domain specialization to understand their abilities and limitations in generating code based on libraries defined in-context. Our results show that even smaller open-source LLMs like Llama-2 and StarCoder demonstrate an adept understanding of novel code libraries based on specification presented in-context. Our findings further reveal that LLMs exhibit a surprisingly high proficiency in learning novel library modules even when provided with just natural language descriptions or raw code implementations of the functions, which are often cheaper to obtain than demonstrations. Overall, our results pave the way for harnessing LLMs in more adaptable and dynamic coding environments.", }
Contemporary Large Language Models (LLMs) exhibit a high degree of code generation and comprehension capability. A particularly promising area is their ability to interpret code modules from unfamiliar libraries for solving user-instructed tasks. Recent work has shown that large proprietary LLMs can learn novel library usage in-context from demonstrations. These results raise several open questions: whether demonstrations of library usage are required, whether smaller (and more open) models also possess such capabilities, etc. In this work, we take a broader approach by systematically evaluating a diverse array of LLMs across three scenarios reflecting varying levels of domain specialization to understand their abilities and limitations in generating code based on libraries defined in-context. Our results show that even smaller open-source LLMs like Llama-2 and StarCoder demonstrate an adept understanding of novel code libraries based on specifications presented in-context. Our findings further reveal that LLMs exhibit a surprisingly high proficiency in learning novel library modules even when provided with just natural language descriptions or raw code implementations of the functions, which are often cheaper to obtain than demonstrations. Overall, our results pave the way for harnessing LLMs in more adaptable and dynamic coding environments.
[ "Patel, Arkil", "Reddy, Siva", "Bahdanau, Dzmitry", "Dasigi, Pradeep" ]
Evaluating In-Context Learning of Libraries for Code Generation
naacl-long.161
Poster
2311.09635
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.162.bib
https://aclanthology.org/2024.naacl-long.162/
@inproceedings{qu-etal-2024-visually, title = "Visually-Aware Context Modeling for News Image Captioning", author = "Qu, Tingyu and Tuytelaars, Tinne and Moens, Marie-Francine", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.162", doi = "10.18653/v1/2024.naacl-long.162", pages = "2927--2943", abstract = "News Image Captioning aims to create captions from news articles and images, emphasizing the connection between textual context and visual elements. Recognizing the significance of human faces in news images and the face-name co-occurrence pattern in existing datasets, we propose a face-naming module for learning better name embeddings. Apart from names, which can be directly linked to an image area (faces), news image captions mostly contain context information that can only be found in the article. We design a retrieval strategy using CLIP to retrieve sentences that are semantically close to the image, mimicking human thought process of linking articles to images. Furthermore, to tackle the problem of the imbalanced proportion of article context and image context in captions, we introduce a simple yet effective method Contrasting with Language Model backbone (CoLaM) to the training pipeline. We conduct extensive experiments to demonstrate the efficacy of our framework. We out-perform the previous state-of-the-art (without external data) by 7.97/5.80 CIDEr scores on GoodNews/NYTimes800k. Our code is available at https://github.com/tingyu215/VACNIC.", }
News Image Captioning aims to create captions from news articles and images, emphasizing the connection between textual context and visual elements. Recognizing the significance of human faces in news images and the face-name co-occurrence pattern in existing datasets, we propose a face-naming module for learning better name embeddings. Apart from names, which can be directly linked to an image area (faces), news image captions mostly contain context information that can only be found in the article. We design a retrieval strategy using CLIP to retrieve sentences that are semantically close to the image, mimicking the human thought process of linking articles to images. Furthermore, to tackle the problem of the imbalanced proportion of article context and image context in captions, we introduce a simple yet effective method, Contrasting with Language Model backbone (CoLaM), to the training pipeline. We conduct extensive experiments to demonstrate the efficacy of our framework. We outperform the previous state-of-the-art (without external data) by 7.97/5.80 CIDEr scores on GoodNews/NYTimes800k. Our code is available at https://github.com/tingyu215/VACNIC.
[ "Qu, Tingyu", "Tuytelaars, Tinne", "Moens, Marie-Francine" ]
Visually-Aware Context Modeling for News Image Captioning
naacl-long.162
Poster
2308.08325
[ "https://github.com/tingyu215/vacnic" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.163.bib
https://aclanthology.org/2024.naacl-long.163/
@inproceedings{jacob-etal-2024-regularized, title = "Regularized Conventions: Equilibrium Computation as a Model of Pragmatic Reasoning", author = "Jacob, Athul and Farina, Gabriele and Andreas, Jacob", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.163", doi = "10.18653/v1/2024.naacl-long.163", pages = "2944--2955", abstract = "We present a game-theoretic model of pragmatics that we call ReCo (for Regularized Conventions). This model formulates pragmatic communication as a game in which players are rewarded for communicating successfully and penalized for deviating from a shared, {``}default{''} semantics. As a result, players assign utterances context-dependent meanings that jointly optimize communicative success and naturalness with respect to speakers{'} and listeners{'} background knowledge of language. By using established game-theoretic tools to compute equilibrium strategies for this game, we obtain principled pragmatic language generation procedures with formal guarantees of communicative success. Across several datasets capturing real and idealized human judgments about pragmatic implicature, ReCo matches, or slightly improves upon, predictions made by Iterated Best Response and Rational Speech Acts models of language understanding.", }
We present a game-theoretic model of pragmatics that we call ReCo (for Regularized Conventions). This model formulates pragmatic communication as a game in which players are rewarded for communicating successfully and penalized for deviating from a shared, {``}default{''} semantics. As a result, players assign utterances context-dependent meanings that jointly optimize communicative success and naturalness with respect to speakers{'} and listeners{'} background knowledge of language. By using established game-theoretic tools to compute equilibrium strategies for this game, we obtain principled pragmatic language generation procedures with formal guarantees of communicative success. Across several datasets capturing real and idealized human judgments about pragmatic implicature, ReCo matches, or slightly improves upon, predictions made by Iterated Best Response and Rational Speech Acts models of language understanding.
[ "Jacob, Athul", "Farina, Gabriele", "Andreas, Jacob" ]
Regularized Conventions: Equilibrium Computation as a Model of Pragmatic Reasoning
naacl-long.163
Poster
2311.09712
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.164.bib
https://aclanthology.org/2024.naacl-long.164/
@inproceedings{pham-etal-2024-topicgpt, title = "{T}opic{GPT}: A Prompt-based Topic Modeling Framework", author = "Pham, Chau and Hoyle, Alexander and Sun, Simeng and Resnik, Philip and Iyyer, Mohit", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.164", doi = "10.18653/v1/2024.naacl-long.164", pages = "2956--2984", abstract = "Topic modeling is a well-established technique for exploring text corpora. Conventional topic models (e.g., LDA) represent topics as bags of words that often require {``}reading the tea leaves{''} to interpret; additionally, they offer users minimal control over the formatting and specificity of resulting topics. To tackle these issues, we introduce TopicGPT, a prompt-based framework that uses large language models (LLMs) to uncover latent topics in a text collection. TopicGPT produces topics that align better with human categorizations compared to competing methods: it achieves a harmonic mean purity of 0.74 against human-annotated Wikipedia topics compared to 0.64 for the strongest baseline. Its topics are also more interpretable, dispensing with ambiguous bags of words in favor of topics with natural language labels and associated free-form descriptions. Moreover, the framework is highly adaptable, allowing users to specify constraints and modify topics without the need for model retraining. By streamlining access to high-quality and interpretable topics, TopicGPT represents a compelling, human-centered approach to topic modeling.", }
Topic modeling is a well-established technique for exploring text corpora. Conventional topic models (e.g., LDA) represent topics as bags of words that often require {``}reading the tea leaves{''} to interpret; additionally, they offer users minimal control over the formatting and specificity of resulting topics. To tackle these issues, we introduce TopicGPT, a prompt-based framework that uses large language models (LLMs) to uncover latent topics in a text collection. TopicGPT produces topics that align better with human categorizations compared to competing methods: it achieves a harmonic mean purity of 0.74 against human-annotated Wikipedia topics compared to 0.64 for the strongest baseline. Its topics are also more interpretable, dispensing with ambiguous bags of words in favor of topics with natural language labels and associated free-form descriptions. Moreover, the framework is highly adaptable, allowing users to specify constraints and modify topics without the need for model retraining. By streamlining access to high-quality and interpretable topics, TopicGPT represents a compelling, human-centered approach to topic modeling.
[ "Pham, Chau", "Hoyle, Alex", "er", "Sun, Simeng", "Resnik, Philip", "Iyyer, Mohit" ]
TopicGPT: A Prompt-based Topic Modeling Framework
naacl-long.164
Poster
2311.01449
[ "https://github.com/chtmp223/topicgpt" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.165.bib
https://aclanthology.org/2024.naacl-long.165/
@inproceedings{li-etal-2024-chatgpt, title = "{C}hat{GPT} as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger", author = "Li, Jiazhao and Yang, Yijin and Wu, Zhuofeng and Vydiswaran, V.G.Vinod and Xiao, Chaowei", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.165", doi = "10.18653/v1/2024.naacl-long.165", pages = "2985--3004", abstract = "Textual backdoor attacks, characterized by subtle manipulations of input triggers and training dataset labels, pose significant threats to security-sensitive applications. The rise of advanced generative models, such as GPT-4, with their capacity for human-like rewriting, makes these attacks increasingly challenging to detect. In this study, we conduct an in-depth examination of black-box generative models as tools for backdoor attacks, thereby emphasizing the need for effective defense strategies. We propose BGMAttack, a novel framework that harnesses advanced generative models to execute stealthier backdoor attacks on text classifiers. Unlike prior approaches constrained by subpar generation quality, BGMAttack renders backdoor triggers more elusive to human cognition and advanced machine detection. A rigorous evaluation of attack effectiveness over four sentiment classification tasks, complemented by four human cognition stealthiness tests, reveals BGMAttack{'}s superior performance, achieving a state-of-the-art attack success rate of 97.35{\%} on average while maintaining superior stealth compared to conventional methods. The dataset and code are available: https://github.com/JiazhaoLi/BGMAttack.", }
Textual backdoor attacks, characterized by subtle manipulations of input triggers and training dataset labels, pose significant threats to security-sensitive applications. The rise of advanced generative models, such as GPT-4, with their capacity for human-like rewriting, makes these attacks increasingly challenging to detect. In this study, we conduct an in-depth examination of black-box generative models as tools for backdoor attacks, thereby emphasizing the need for effective defense strategies. We propose BGMAttack, a novel framework that harnesses advanced generative models to execute stealthier backdoor attacks on text classifiers. Unlike prior approaches constrained by subpar generation quality, BGMAttack renders backdoor triggers more elusive to human cognition and advanced machine detection. A rigorous evaluation of attack effectiveness over four sentiment classification tasks, complemented by four human cognition stealthiness tests, reveals BGMAttack{'}s superior performance, achieving a state-of-the-art attack success rate of 97.35{\%} on average while maintaining superior stealth compared to conventional methods. The dataset and code are available: https://github.com/JiazhaoLi/BGMAttack.
[ "Li, Jiazhao", "Yang, Yijin", "Wu, Zhuofeng", "Vydiswaran, V.G.Vinod", "Xiao, Chaowei" ]
ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger
naacl-long.165
Poster
2304.14475
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.166.bib
https://aclanthology.org/2024.naacl-long.166/
@inproceedings{zhou-etal-2024-social, title = "Social Meme-ing: Measuring Linguistic Variation in Memes", author = "Zhou, Naitian and Jurgens, David and Bamman, David", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.166", doi = "10.18653/v1/2024.naacl-long.166", pages = "3005--3024", abstract = "Much work in the space of NLP has used computational methods to explore sociolinguistic variation in text. In this paper, we argue that memes, as multimodal forms of language comprised of visual templates and text, also exhibit meaningful social variation. We construct a computational pipeline to cluster individual instances of memes into templates and semantic variables, taking advantage of their multimodal structure in doing so. We apply this method to a large collection of meme images from Reddit and make available the resulting SemanticMemes dataset of 3.8M images clustered by their semantic function. We use these clusters to analyze linguistic variation in memes, discovering not only that socially meaningful variation in meme usage exists between subreddits, but that patterns of meme innovation and acculturation within these communities align with previous findings on written language.", }
Much work in the space of NLP has used computational methods to explore sociolinguistic variation in text. In this paper, we argue that memes, as multimodal forms of language comprised of visual templates and text, also exhibit meaningful social variation. We construct a computational pipeline to cluster individual instances of memes into templates and semantic variables, taking advantage of their multimodal structure in doing so. We apply this method to a large collection of meme images from Reddit and make available the resulting SemanticMemes dataset of 3.8M images clustered by their semantic function. We use these clusters to analyze linguistic variation in memes, discovering not only that socially meaningful variation in meme usage exists between subreddits, but that patterns of meme innovation and acculturation within these communities align with previous findings on written language.
[ "Zhou, Naitian", "Jurgens, David", "Bamman, David" ]
Social Meme-ing: Measuring Linguistic Variation in Memes
naacl-long.166
Poster
2311.09130
[ "https://github.com/naitian/semantic-memes" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.167.bib
https://aclanthology.org/2024.naacl-long.167/
@inproceedings{malaviya-etal-2024-expertqa, title = "{E}xpert{QA}: Expert-Curated Questions and Attributed Answers", author = "Malaviya, Chaitanya and Lee, Subin and Chen, Sihao and Sieber, Elizabeth and Yatskar, Mark and Roth, Dan", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.167", doi = "10.18653/v1/2024.naacl-long.167", pages = "3025--3045", abstract = "As language models are adopted by a more sophisticated and diverse set of users, the importance of guaranteeing that they provide factually correct information supported by verifiable sources is critical across fields of study. This is especially the case for high-stakes fields, such as medicine and law, where the risk of propagating false information is high and can lead to undesirable societal consequences. Previous work studying attribution and factuality has not focused on analyzing these characteristics of language model outputs in domain-specific scenarios. In this work, we conduct human evaluation of responses from a few representative systems along various axes of attribution and factuality, by bringing domain experts in the loop. Specifically, we collect expert-curated questions from 484 participants across 32 fields of study, and then ask the same experts to evaluate generated responses to their own questions. In addition, we ask experts to improve upon responses from language models. The output of our analysis is ExpertQA, a high-quality long-form QA dataset with 2177 questions spanning 32 fields, along with verified answers and attributions for claims in the answers.", }
As language models are adopted by a more sophisticated and diverse set of users, the importance of guaranteeing that they provide factually correct information supported by verifiable sources is critical across fields of study. This is especially the case for high-stakes fields, such as medicine and law, where the risk of propagating false information is high and can lead to undesirable societal consequences. Previous work studying attribution and factuality has not focused on analyzing these characteristics of language model outputs in domain-specific scenarios. In this work, we conduct human evaluation of responses from a few representative systems along various axes of attribution and factuality, by bringing domain experts in the loop. Specifically, we collect expert-curated questions from 484 participants across 32 fields of study, and then ask the same experts to evaluate generated responses to their own questions. In addition, we ask experts to improve upon responses from language models. The output of our analysis is ExpertQA, a high-quality long-form QA dataset with 2177 questions spanning 32 fields, along with verified answers and attributions for claims in the answers.
[ "Malaviya, Chaitanya", "Lee, Subin", "Chen, Sihao", "Sieber, Elizabeth", "Yatskar, Mark", "Roth, Dan" ]
ExpertQA: Expert-Curated Questions and Attributed Answers
naacl-long.167
Oral
2309.07852
[ "https://github.com/chaitanyamalaviya/expertqa" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.168.bib
https://aclanthology.org/2024.naacl-long.168/
@inproceedings{malaviya-etal-2024-said, title = "What if you said that differently?: How Explanation Formats Affect Human Feedback Efficacy and User Perception", author = "Malaviya, Chaitanya and Lee, Subin and Roth, Dan and Yatskar, Mark", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.168", doi = "10.18653/v1/2024.naacl-long.168", pages = "3046--3065", abstract = "Eliciting feedback from end users of NLP models can be beneficial for improving models. However, how should we present model responses to users so they are most amenable to be corrected from user feedback? Further, what properties do users value to understand and trust responses? We answer these questions by analyzing the effect of rationales (or explanations) generated by QA models to support their answers. We specifically consider decomposed QA models that first extract an intermediate rationale based on a context and a question and then use solely this rationale to answer the question. A rationale outlines the approach followed by the model to answer the question. Our work considers various formats of these rationales that vary according to well-defined properties of interest. We sample rationales from language models using few-shot prompting for two datasets, and then perform two user studies. First, we present users with incorrect answers and corresponding rationales in various formats and ask them to provide natural language feedback to revise the rationale. We then measure the effectiveness of this feedback in patching these rationales through in-context learning. The second study evaluates how well different rationale formats enable users to understand and trust model answers, when they are correct. We find that rationale formats significantly affect how easy it is (1) for users to give feedback for rationales, and (2) for models to subsequently execute this feedback. In addition, formats with attributions to the context and in-depth reasoning significantly enhance user-reported understanding and trust of model outputs.", }
Eliciting feedback from end users of NLP models can be beneficial for improving models. However, how should we present model responses to users so they are most amenable to be corrected from user feedback? Further, what properties do users value to understand and trust responses? We answer these questions by analyzing the effect of rationales (or explanations) generated by QA models to support their answers. We specifically consider decomposed QA models that first extract an intermediate rationale based on a context and a question and then use solely this rationale to answer the question. A rationale outlines the approach followed by the model to answer the question. Our work considers various formats of these rationales that vary according to well-defined properties of interest. We sample rationales from language models using few-shot prompting for two datasets, and then perform two user studies. First, we present users with incorrect answers and corresponding rationales in various formats and ask them to provide natural language feedback to revise the rationale. We then measure the effectiveness of this feedback in patching these rationales through in-context learning. The second study evaluates how well different rationale formats enable users to understand and trust model answers, when they are correct. We find that rationale formats significantly affect how easy it is (1) for users to give feedback for rationales, and (2) for models to subsequently execute this feedback. In addition, formats with attributions to the context and in-depth reasoning significantly enhance user-reported understanding and trust of model outputs.
[ "Malaviya, Chaitanya", "Lee, Subin", "Roth, Dan", "Yatskar, Mark" ]
What if you said that differently?: How Explanation Formats Affect Human Feedback Efficacy and User Perception
naacl-long.168
Poster
2311.09558
[ "https://github.com/chaitanyamalaviya/pachinko" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.169.bib
https://aclanthology.org/2024.naacl-long.169/
@inproceedings{shi-etal-2024-life, title = "When Life Gives You Lemons, Make Cherryade: Converting Feedback from Bad Responses into Good Labels", author = "Shi, Weiyan and Dinan, Emily and Shuster, Kurt and Weston, Jason and Xu, Jing", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.169", doi = "10.18653/v1/2024.naacl-long.169", pages = "3066--3082", abstract = "Deployed dialogue agents have the potential to integrate human feedback to continuously improve themselves. However, humans may not always provide explicit signals when the chatbot makes mistakes during interactions. In this work, we propose Juicer, a framework to make use of both binary and free-form textual human feedback. It works by: (i) extending sparse binary feedback by training a satisfaction classifier to label the unlabeled data; and (ii) training a reply corrector to map the bad replies to good ones. We find that augmenting training with model-corrected replies improves the final dialogue model, and we can further improve performance by using both positive and negative replies through the recently proposed Director model.", }
Deployed dialogue agents have the potential to integrate human feedback to continuously improve themselves. However, humans may not always provide explicit signals when the chatbot makes mistakes during interactions. In this work, we propose Juicer, a framework to make use of both binary and free-form textual human feedback. It works by: (i) extending sparse binary feedback by training a satisfaction classifier to label the unlabeled data; and (ii) training a reply corrector to map the bad replies to good ones. We find that augmenting training with model-corrected replies improves the final dialogue model, and we can further improve performance by using both positive and negative replies through the recently proposed Director model.
[ "Shi, Weiyan", "Dinan, Emily", "Shuster, Kurt", "Weston, Jason", "Xu, Jing" ]
When Life Gives You Lemons, Make Cherryade: Converting Feedback from Bad Responses into Good Labels
naacl-long.169
Poster
2210.15893
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.170.bib
https://aclanthology.org/2024.naacl-long.170/
@inproceedings{robinson-etal-2024-kreyol, title = "Krey{\`o}l-{MT}: Building {MT} for {L}atin {A}merican, {C}aribbean and Colonial {A}frican Creole Languages", author = {Robinson, Nathaniel and Dabre, Raj and Shurtz, Ammon and Dent, Rasul and Onesi, Onenamiyi and Monroc, Claire and Grobol, Lo{\"\i}c and Muhammad, Hasan and Garg, Ashi and Etori, Naome and Tiyyala, Vijay Murari and Samuel, Olanrewaju and Stutzman, Matthew and Odoom, Bismarck and Khudanpur, Sanjeev and Richardson, Stephen and Murray, Kenton}, editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.170", doi = "10.18653/v1/2024.naacl-long.170", pages = "3083--3110", abstract = "A majority of language technologies are tailored for a small number of high-resource languages, while relatively many low-resource languages are neglected. One such group, Creole languages, have long been marginalized in academic study, though their speakers could benefit from machine translation (MT). These languages are predominantly used in much of Latin America, Africa and the Caribbean. We present the largest cumulative dataset to date for Creole language MT, including 14.5M unique Creole sentences with parallel translations{---}11.6M of which we release publicly, and the largest bitexts gathered to date for 41 languages{---}the first ever for 21. In addition, we provide MT models supporting all 41 Creole languages in 172 translation directions. Given our diverse dataset, we produce a model for Creole language MT exposed to more genre diversity then ever before, which outperforms a genre-specific Creole MT model on its own benchmark for 23 of 34 translation directions.", }
A majority of language technologies are tailored for a small number of high-resource languages, while relatively many low-resource languages are neglected. One such group, Creole languages, has long been marginalized in academic study, though their speakers could benefit from machine translation (MT). These languages are predominantly used in much of Latin America, Africa and the Caribbean. We present the largest cumulative dataset to date for Creole language MT, including 14.5M unique Creole sentences with parallel translations (11.6M of which we release publicly), and the largest bitexts gathered to date for 41 languages (the first ever for 21). In addition, we provide MT models supporting all 41 Creole languages in 172 translation directions. Given our diverse dataset, we produce a model for Creole language MT exposed to more genre diversity than ever before, which outperforms a genre-specific Creole MT model on its own benchmark for 23 of 34 translation directions.
[ "Robinson, Nathaniel", "Dabre, Raj", "Shurtz, Ammon", "Dent, Rasul", "Onesi, Onenamiyi", "Monroc, Claire", "Grobol, Lo{\\\"\\i}c", "Muhammad, Hasan", "Garg, Ashi", "Etori, Naome", "Tiyyala, Vijay Murari", "Samuel, Olanrewaju", "Stutzman, Matthew", "Odoom, Bismarck", "Khudanpur, Sanjeev", "Richardson, Stephen", "Murray, Kenton" ]
Kreyòl-MT: Building MT for Latin American, Caribbean and Colonial African Creole Languages
naacl-long.170
Oral
2405.05376
[ "https://github.com/jhu-clsp/kreyol-mt" ]
https://huggingface.co/papers/2405.05376
0
0
0
17
1
[ "jhu-clsp/kreyol-mt", "jhu-clsp/kreyol-mt-scratch", "jhu-clsp/kreyol-mt-scratch-pubtrain", "jhu-clsp/kreyol-mt-pubtrain" ]
[ "jhu-clsp/kreyol-mt" ]
[]
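The record above lists the released Kreyòl-MT checkpoints (e.g. jhu-clsp/kreyol-mt). A minimal usage sketch with the transformers library might look as follows; the example sentence and the absence of explicit language tags are assumptions for illustration, and the model card should be consulted for the exact source/target-language interface.

```python
# Hypothetical sketch of translating with one of the checkpoints listed above.
# The exact language-tag convention is defined on the model card; none is set
# here, so treat this only as a starting point, not the documented usage.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "jhu-clsp/kreyol-mt"  # one of the models listed in the record above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Mwen renmen li liv."  # illustrative Haitian Creole input (assumption)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```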
https://aclanthology.org/2024.naacl-long.171.bib
https://aclanthology.org/2024.naacl-long.171/
@inproceedings{xu-etal-2024-instructions, title = "Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models", author = "Xu, Jiashu and Ma, Mingyu and Wang, Fei and Xiao, Chaowei and Chen, Muhao", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.171", doi = "10.18653/v1/2024.naacl-long.171", pages = "3111--3126", abstract = "We investigate security concerns of the emergent instruction tuning paradigm, that models are trained on crowdsourced datasets with task instructions to achieve superior performance. Our studies demonstrate that an attacker can inject backdoors by issuing very few malicious instructions ({\textasciitilde}1000 tokens) and control model behavior through data poisoning, without even the need to modify data instances or labels themselves. Through such instruction attacks, the attacker can achieve over 90{\%} attack success rate across four commonly used NLP datasets. As an empirical study on instruction attacks, we systematically evaluated unique perspectives of instruction attacks, such as poison transfer where poisoned models can transfer to 15 diverse generative datasets in a zero-shot manner; instruction transfer where attackers can directly apply poisoned instruction on many other datasets; and poison resistance to continual finetuning. Lastly, we show that RLHF and clean demonstrations might mitigate such backdoors to some degree. These findings highlight the need for more robust defenses against poisoning attacks in instruction-tuning models and underscore the importance of ensuring data quality in instruction crowdsourcing.", }
We investigate security concerns of the emergent instruction-tuning paradigm, in which models are trained on crowdsourced datasets with task instructions to achieve superior performance. Our studies demonstrate that an attacker can inject backdoors by issuing very few malicious instructions (~1000 tokens) and control model behavior through data poisoning, without even the need to modify data instances or labels themselves. Through such instruction attacks, the attacker can achieve over 90% attack success rate across four commonly used NLP datasets. As an empirical study on instruction attacks, we systematically evaluated unique perspectives of instruction attacks, such as poison transfer, where poisoned models can transfer to 15 diverse generative datasets in a zero-shot manner; instruction transfer, where attackers can directly apply a poisoned instruction to many other datasets; and poison resistance to continual finetuning. Lastly, we show that RLHF and clean demonstrations might mitigate such backdoors to some degree. These findings highlight the need for more robust defenses against poisoning attacks in instruction-tuning models and underscore the importance of ensuring data quality in instruction crowdsourcing.
[ "Xu, Jiashu", "Ma, Mingyu", "Wang, Fei", "Xiao, Chaowei", "Chen, Muhao" ]
Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models
naacl-long.171
Poster
2305.14710
[ "" ]
https://huggingface.co/papers/2305.14710
2
2
0
5
1
[]
[]
[]
https://aclanthology.org/2024.naacl-long.172.bib
https://aclanthology.org/2024.naacl-long.172/
@inproceedings{yang-jurgens-2024-modeling, title = "Modeling Empathetic Alignment in Conversation", author = "Yang, Jiamin and Jurgens, David", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.172", doi = "10.18653/v1/2024.naacl-long.172", pages = "3127--3148", abstract = "Empathy requires perspective-taking: empathetic responses require a person to reason about what another has experienced and communicate that understanding in language. However, most NLP approaches to empathy do not explicitly model this alignment process. Here, we introduce a new approach to recognizing alignment in empathetic speech, grounded in Appraisal Theory. We introduce a new dataset of over 9.2K span-level annotations of different types of appraisals of a person{'}s experience and over 3K empathetic alignments between a speaker{'}s and observer{'}s speech. Through computational experiments, we show that these appraisals and alignments can be accurately recognized. In experiments in over 9.2M Reddit conversations, we find that appraisals capture meaningful groupings of behavior but that most responses have minimal alignment. However, we find that mental health professionals engage with substantially more empathetic alignment.", }
Empathy requires perspective-taking: empathetic responses require a person to reason about what another has experienced and communicate that understanding in language. However, most NLP approaches to empathy do not explicitly model this alignment process. Here, we introduce a new approach to recognizing alignment in empathetic speech, grounded in Appraisal Theory. We introduce a new dataset of over 9.2K span-level annotations of different types of appraisals of a person's experience and over 3K empathetic alignments between a speaker's and observer's speech. Through computational experiments, we show that these appraisals and alignments can be accurately recognized. In experiments in over 9.2M Reddit conversations, we find that appraisals capture meaningful groupings of behavior but that most responses have minimal alignment. However, we find that mental health professionals engage with substantially more empathetic alignment.
[ "Yang, Jiamin", "Jurgens, David" ]
Modeling Empathetic Alignment in Conversation
naacl-long.172
Oral
2405.00948
[ "https://github.com/jessicayjm/span_alignment_annotation_tool" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.173.bib
https://aclanthology.org/2024.naacl-long.173/
@inproceedings{goswami-etal-2024-native, title = "Native Language Identification in Texts: A Survey", author = "Goswami, Dhiman and Thilagan, Sharanya and North, Kai and Malmasi, Shervin and Zampieri, Marcos", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.173", doi = "10.18653/v1/2024.naacl-long.173", pages = "3149--3160", abstract = "We present the first comprehensive survey of Native Language Identification (NLI) applied to texts. NLI is the task of automatically identifying an author{'}s native language (L1) based on their second language (L2) production. NLI is an important task with practical applications in second language teaching and NLP. The task has been widely studied for both text and speech, particularly for L2 English due to the availability of suitable corpora. Speech-based NLI relies heavily on accent modeled by pronunciation patterns and prosodic cues while text-based NLI relies primarily on modeling spelling errors and grammatical patterns that reveal properties of an individuals{'} L1 influencing L2 production. We survey over one hundred papers on the topic including the papers associated with the NLI and INLI shared tasks. We describe several text representations and computational techniques used in text-based NLI. Finally, we present a comprehensive account of publicly available datasets used for the task thus far.", }
We present the first comprehensive survey of Native Language Identification (NLI) applied to texts. NLI is the task of automatically identifying an author's native language (L1) based on their second language (L2) production. NLI is an important task with practical applications in second language teaching and NLP. The task has been widely studied for both text and speech, particularly for L2 English due to the availability of suitable corpora. Speech-based NLI relies heavily on accent modeled by pronunciation patterns and prosodic cues, while text-based NLI relies primarily on modeling spelling errors and grammatical patterns that reveal properties of an individual's L1 influencing L2 production. We survey over one hundred papers on the topic, including the papers associated with the NLI and INLI shared tasks. We describe several text representations and computational techniques used in text-based NLI. Finally, we present a comprehensive account of publicly available datasets used for the task thus far.
[ "Goswami, Dhiman", "Thilagan, Sharanya", "North, Kai", "Malmasi, Shervin", "Zampieri, Marcos" ]
Native Language Identification in Texts: A Survey
naacl-long.173
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.174.bib
https://aclanthology.org/2024.naacl-long.174/
@inproceedings{yang-etal-2024-loretta, title = "{L}o{RETTA}: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models", author = "Yang, Yifan and Zhou, Jiajun and Wong, Ngai and Zhang, Zheng", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.174", doi = "10.18653/v1/2024.naacl-long.174", pages = "3161--3176", abstract = "Various parameter-efficient fine-tuning (PEFT) techniques have been proposed to enable computationally efficient fine-tuning while maintaining model performance. However, existing PEFT methods are still limited by the growing number of trainable parameters with the rapid deployment of Large Language Models (LLMs). To address this challenge, we present LoRETTA, an ultra-parameter-efficient framework that significantly reduces trainable parameters through tensor-train decomposition. Specifically, we propose two methods, named LoRETTA{\_}adp and LoRETTA{\_}rep. The former employs tensorized adapters, offering a high-performance yet lightweight approach for the fine-tuning of LLMs. The latter emphasizes fine-tuning via weight reparameterization with a set of small tensor factors. LoRETTA achieves comparable or better performance than most widely used PEFT methods with up to $100\times$ fewer parameters on the LLaMA-2-7B models. Furthermore, empirical results demonstrate that the proposed methods exhibit remarkable anti-overfitting capability, effectively improve training efficiency, and enjoy better multi-task learning performance. Plug-and-play loretta library built upon the Huggingface framework and PEFT library are provided.", }
Various parameter-efficient fine-tuning (PEFT) techniques have been proposed to enable computationally efficient fine-tuning while maintaining model performance. However, existing PEFT methods are still limited by the growing number of trainable parameters with the rapid deployment of Large Language Models (LLMs). To address this challenge, we present LoRETTA, an ultra-parameter-efficient framework that significantly reduces trainable parameters through tensor-train decomposition. Specifically, we propose two methods, named LoRETTA_adp and LoRETTA_rep. The former employs tensorized adapters, offering a high-performance yet lightweight approach for the fine-tuning of LLMs. The latter emphasizes fine-tuning via weight reparameterization with a set of small tensor factors. LoRETTA achieves comparable or better performance than most widely used PEFT methods with up to 100× fewer parameters on the LLaMA-2-7B models. Furthermore, empirical results demonstrate that the proposed methods exhibit remarkable anti-overfitting capability, effectively improve training efficiency, and enjoy better multi-task learning performance. A plug-and-play loretta library built upon the Hugging Face framework and PEFT library is provided.
[ "Yang, Yifan", "Zhou, Jiajun", "Wong, Ngai", "Zhang, Zheng" ]
LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models
naacl-long.174
Oral
2402.11417
[ "https://github.com/yifanycc/loretta" ]
-1
-1
-1
-1
0
[]
[]
[]
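To make the tensor-train idea in the LoRETTA abstract above concrete, here is a minimal sketch of a two-core tensorized adapter added on top of a frozen linear layer. It illustrates the general technique (storing the trainable weight update as small tensor cores instead of a full matrix), not the paper's implementation; all shapes, names, and initializations are assumptions.

```python
import torch
import torch.nn as nn

class TTLinearAdapter(nn.Module):
    """Simplified 2-core tensor-train (matrix-product-operator) adapter.

    The frozen base weight is left untouched; the trainable update Delta-W is
    stored as two small cores, so the parameter count is r*(m1*n1 + m2*n2)
    instead of m*n.  Illustrative sketch only, not the LoRETTA code.
    """

    def __init__(self, base_linear: nn.Linear, m_factors=(64, 12), n_factors=(64, 12), rank=4):
        super().__init__()
        m1, m2 = m_factors
        n1, n2 = n_factors
        assert m1 * m2 == base_linear.out_features
        assert n1 * n2 == base_linear.in_features
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad_(False)                       # freeze the pretrained weight
        self.core1 = nn.Parameter(torch.randn(m1, n1, rank) * 0.02)
        self.core2 = nn.Parameter(torch.zeros(rank, m2, n2))  # zero-init: no change at start
        self.shapes = (m1, m2, n1, n2)

    def delta_weight(self) -> torch.Tensor:
        m1, m2, n1, n2 = self.shapes
        # contract the two cores over the tensor-train rank, then fold back to (out, in)
        dw = torch.einsum("abr,rcd->acbd", self.core1, self.core2)  # (m1, m2, n1, n2)
        return dw.reshape(m1 * m2, n1 * n2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.delta_weight().t()

# usage (shapes are illustrative):
#   adapted = TTLinearAdapter(nn.Linear(768, 768))
#   y = adapted(torch.randn(2, 768))
```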
https://aclanthology.org/2024.naacl-long.175.bib
https://aclanthology.org/2024.naacl-long.175/
@inproceedings{mitra-etal-2024-one, title = "Which One? Leveraging Context Between Objects and Multiple Views for Language Grounding", author = "Mitra, Chancharik and Anwar, Abrar and Corona, Rodolfo and Klein, Dan and Darrell, Trevor and Thomason, Jesse", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.175", doi = "10.18653/v1/2024.naacl-long.175", pages = "3177--3189", abstract = "When connecting objects and their language referents in an embodied 3D environment, it is important to note that: (1) an object can be better characterized by leveraging comparative information between itself and other objects, and (2) an object{'}s appearance can vary with camera position. As such, we present the Multi-view Approach to Grounding in Context (MAGiC) model, which selects an object referent based on language that distinguishes between two similar objects. By pragmatically reasoning over both objects and across multiple views of those objects, MAGiC improves over the state-of-the-art model on the SNARE object reference task with a relative error reduction of 12.9{\%} (representing an absolute improvement of 2.7{\%}). Ablation studies show that reasoning jointly over object referent candidates and multiple views of each object both contribute to improved accuracy. Code: https://github.com/rcorona/magic{\_}snare/", }
When connecting objects and their language referents in an embodied 3D environment, it is important to note that: (1) an object can be better characterized by leveraging comparative information between itself and other objects, and (2) an object's appearance can vary with camera position. As such, we present the Multi-view Approach to Grounding in Context (MAGiC) model, which selects an object referent based on language that distinguishes between two similar objects. By pragmatically reasoning over both objects and across multiple views of those objects, MAGiC improves over the state-of-the-art model on the SNARE object reference task with a relative error reduction of 12.9% (representing an absolute improvement of 2.7%). Ablation studies show that reasoning jointly over object referent candidates and multiple views of each object both contribute to improved accuracy. Code: https://github.com/rcorona/magic_snare/
[ "Mitra, Chancharik", "Anwar, Abrar", "Corona, Rodolfo", "Klein, Dan", "Darrell, Trevor", "Thomason, Jesse" ]
Which One? Leveraging Context Between Objects and Multiple Views for Language Grounding
naacl-long.175
Poster
2311.06694
[ "https://github.com/snaredataset/snare" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.176.bib
https://aclanthology.org/2024.naacl-long.176/
@inproceedings{chang-etal-2024-localization, title = "Do Localization Methods Actually Localize Memorized Data in {LLM}s? A Tale of Two Benchmarks", author = "Chang, Ting-Yun and Thomason, Jesse and Jia, Robin", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.176", doi = "10.18653/v1/2024.naacl-long.176", pages = "3190--3211", abstract = "The concept of localization in LLMs is often mentioned in prior work; however, methods for localization have never been systematically and directly evaluated. We propose two complementary benchmarks that evaluate the ability of localization methods to pinpoint LLM components responsible for memorized data. In our INJ benchmark, we actively inject a piece of new information into a small subset of LLM weights, enabling us to directly evaluate whether localization methods can identify these {``}ground truth{''} weights. In our DEL benchmark, we evaluate localization by measuring how much dropping out identified neurons deletes a memorized pretrained sequence. Despite their different perspectives, our two benchmarks yield consistent rankings of five localization methods. Methods adapted from network pruning perform well on both benchmarks, and all evaluated methods show promising localization ability. On the other hand, even successful methods identify neurons that are not specific to a single memorized sequence.", }
The concept of localization in LLMs is often mentioned in prior work; however, methods for localization have never been systematically and directly evaluated. We propose two complementary benchmarks that evaluate the ability of localization methods to pinpoint LLM components responsible for memorized data. In our INJ benchmark, we actively inject a piece of new information into a small subset of LLM weights, enabling us to directly evaluate whether localization methods can identify these "ground truth" weights. In our DEL benchmark, we evaluate localization by measuring how much dropping out identified neurons deletes a memorized pretrained sequence. Despite their different perspectives, our two benchmarks yield consistent rankings of five localization methods. Methods adapted from network pruning perform well on both benchmarks, and all evaluated methods show promising localization ability. On the other hand, even successful methods identify neurons that are not specific to a single memorized sequence.
[ "Chang, Ting-Yun", "Thomason, Jesse", "Jia, Robin" ]
Do Localization Methods Actually Localize Memorized Data in LLMs? A Tale of Two Benchmarks
naacl-long.176
Oral
2311.09060
[ "https://github.com/terarachang/memdata" ]
-1
-1
-1
-1
0
[]
[]
[]
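As a rough illustration of the DEL-style check described in the abstract above (drop out the neurons a localization method points to and see how much of a memorized continuation survives), the sketch below zeroes selected hidden units via forward hooks. The module handles, the greedy-match metric, and the scoring are assumptions for illustration, not the benchmark's exact protocol.

```python
import torch

@torch.no_grad()
def continuation_accuracy(model, tokenizer, prompt, continuation):
    """Fraction of continuation tokens the model reproduces greedily."""
    ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    n_prompt = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    logits = model(ids).logits
    preds = logits[0, n_prompt - 1 : -1].argmax(-1)   # predictions for continuation positions
    target = ids[0, n_prompt:]
    return (preds == target).float().mean().item()

def zero_neurons(mlp_module, neuron_idx):
    """Forward hook zeroing selected hidden units of one MLP layer (tensor output assumed)."""
    def hook(_module, _inp, out):
        out[..., neuron_idx] = 0.0
        return out
    return mlp_module.register_forward_hook(hook)

def del_score(model, tokenizer, prompt, continuation, located):
    """located: list of (mlp_module, index_tensor) pairs produced by a localization method."""
    before = continuation_accuracy(model, tokenizer, prompt, continuation)
    handles = [zero_neurons(mlp, idx) for mlp, idx in located]
    after = continuation_accuracy(model, tokenizer, prompt, continuation)
    for h in handles:
        h.remove()
    return before - after   # larger drop suggests better localization of the memorized sequence
```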
https://aclanthology.org/2024.naacl-long.177.bib
https://aclanthology.org/2024.naacl-long.177/
@inproceedings{zhang-etal-2024-promptfix, title = "{P}rompt{F}ix: Few-shot Backdoor Removal via Adversarial Prompt Tuning", author = "Zhang, Tianrong and Xi, Zhaohan and Wang, Ting and Mitra, Prasenjit and Chen, Jinghui", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.177", doi = "10.18653/v1/2024.naacl-long.177", pages = "3212--3225", abstract = "Pre-trained language models (PLMs) have attracted enormous attention over the past few years with their unparalleled performances. Meanwhile, the soaring cost to train PLMs as well as their amazing generalizability have jointly contributed to few-shot fine-tuning and prompting as the most popular training paradigms for natural language processing (NLP) models. Nevertheless, existing studies have shown that these NLP models can be backdoored such that model behavior is manipulated when trigger tokens are presented.In this paper, we propose PromptFix, a novel backdoor mitigation strategy for NLP models via adversarial prompt-tuning in few-shot settings.Unlike existing NLP backdoor removal methods, which rely on accurate trigger inversion and subsequent model fine-tuning, PromptFix keeps the model parameters intact and only utilizes two extra sets of soft tokens which approximate the trigger and counteract it respectively. The use of soft tokens and adversarial optimization eliminates the need to enumerate possible backdoor configurations and enables an adaptive balance between trigger finding and preservation of performance.Experiments with various backdoor attacks validate the effectiveness of the proposed method and the performances when domain shift is present further shows PromptFix{'}s applicability to models pretrained on unknown data source which is the common case in prompt tuning scenarios.", }
Pre-trained language models (PLMs) have attracted enormous attention over the past few years with their unparalleled performances. Meanwhile, the soaring cost to train PLMs as well as their amazing generalizability have jointly contributed to few-shot fine-tuning and prompting as the most popular training paradigms for natural language processing (NLP) models. Nevertheless, existing studies have shown that these NLP models can be backdoored such that model behavior is manipulated when trigger tokens are presented. In this paper, we propose PromptFix, a novel backdoor mitigation strategy for NLP models via adversarial prompt-tuning in few-shot settings. Unlike existing NLP backdoor removal methods, which rely on accurate trigger inversion and subsequent model fine-tuning, PromptFix keeps the model parameters intact and only utilizes two extra sets of soft tokens, which approximate the trigger and counteract it, respectively. The use of soft tokens and adversarial optimization eliminates the need to enumerate possible backdoor configurations and enables an adaptive balance between trigger finding and preservation of performance. Experiments with various backdoor attacks validate the effectiveness of the proposed method, and its performance when domain shift is present further shows PromptFix's applicability to models pretrained on an unknown data source, which is the common case in prompt-tuning scenarios.
[ "Zhang, Tianrong", "Xi, Zhaohan", "Wang, Ting", "Mitra, Prasenjit", "Chen, Jinghui" ]
PromptFix: Few-shot Backdoor Removal via Adversarial Prompt Tuning
naacl-long.177
Oral
2406.04478
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.178.bib
https://aclanthology.org/2024.naacl-long.178/
@inproceedings{zhao-aletras-2024-comparing, title = "Comparing Explanation Faithfulness between Multilingual and Monolingual Fine-tuned Language Models", author = "Zhao, Zhixue and Aletras, Nikolaos", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.178", doi = "10.18653/v1/2024.naacl-long.178", pages = "3226--3244", abstract = "In many real natural language processing application scenarios, practitioners not only aim to maximize predictive performance but also seek faithful explanations for the model predictions. Rationales and importance distribution given by feature attribution methods (FAs) provide insights into how different parts of the input contribute to a prediction. Previous studies have explored how different factors affect faithfulness, mainly in the context of monolingual English models. On the other hand, the differences in FA faithfulness between multilingual and monolingual models have yet to be explored. Our extensive experiments, covering five languages and five popular FAs, show that FA faithfulness varies between multilingual and monolingual models. We find that the larger the multilingual model, the less faithful the FAs are compared to its counterpart monolingual models. Our further analysis shows that the faithfulness disparity is potentially driven by the differences between model tokenizers. Our code is available: https://github.com/casszhao/multilingual-faith.", }
In many real natural language processing application scenarios, practitioners not only aim to maximize predictive performance but also seek faithful explanations for the model predictions. Rationales and importance distribution given by feature attribution methods (FAs) provide insights into how different parts of the input contribute to a prediction. Previous studies have explored how different factors affect faithfulness, mainly in the context of monolingual English models. On the other hand, the differences in FA faithfulness between multilingual and monolingual models have yet to be explored. Our extensive experiments, covering five languages and five popular FAs, show that FA faithfulness varies between multilingual and monolingual models. We find that the larger the multilingual model, the less faithful the FAs are compared to its counterpart monolingual models. Our further analysis shows that the faithfulness disparity is potentially driven by the differences between model tokenizers. Our code is available: https://github.com/casszhao/multilingual-faith.
[ "Zhao, Zhixue", "Aletras, Nikolaos" ]
Comparing Explanation Faithfulness between Multilingual and Monolingual Fine-tuned Language Models
naacl-long.178
Oral
2403.12809
[ "https://github.com/casszhao/multilingual-faith" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.179.bib
https://aclanthology.org/2024.naacl-long.179/
@inproceedings{longpre-etal-2024-pretrainers, title = "A Pretrainer{'}s Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, {\&} Toxicity", author = "Longpre, Shayne and Yauney, Gregory and Reif, Emily and Lee, Katherine and Roberts, Adam and Zoph, Barret and Zhou, Denny and Wei, Jason and Robinson, Kevin and Mimno, David and Ippolito, Daphne", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.179", doi = "10.18653/v1/2024.naacl-long.179", pages = "3245--3276", abstract = "Pretraining data design is critically under-documented and often guided by empirically unsupported intuitions. We pretrain models on data curated (1) at different collection times, (2) with varying toxicity and quality filters, and (3) with different domain compositions. First, we find that temporal shift between evaluation data and pretraining data leads to performance degradation, which is not overcome by finetuning. Second, we measure the effect of quality and toxicity filters, showing a trade-off between performance on standard benchmarks and risk of toxic generations. We also find that the effects of different types of filtering are not predictable from text domain characteristics. Third, we empirically validate that heterogeneous data sources, like books and web, are beneficial and warrant greater prioritization. To date, these experiments constitute the single largest publicly documented empirical study of the effects of pretraining data. Spanning 28 unique 1.5 billion parameter models pretrained from scratch, these findings validate, quantify, and expose many undocumented intuitions about text pretraining, which ultimately support more informed data-centric decisions in model development.", }
Pretraining data design is critically under-documented and often guided by empirically unsupported intuitions. We pretrain models on data curated (1) at different collection times, (2) with varying toxicity and quality filters, and (3) with different domain compositions. First, we find that temporal shift between evaluation data and pretraining data leads to performance degradation, which is not overcome by finetuning. Second, we measure the effect of quality and toxicity filters, showing a trade-off between performance on standard benchmarks and risk of toxic generations. We also find that the effects of different types of filtering are not predictable from text domain characteristics. Third, we empirically validate that heterogeneous data sources, like books and web, are beneficial and warrant greater prioritization. To date, these experiments constitute the single largest publicly documented empirical study of the effects of pretraining data. Spanning 28 unique 1.5 billion parameter models pretrained from scratch, these findings validate, quantify, and expose many undocumented intuitions about text pretraining, which ultimately support more informed data-centric decisions in model development.
[ "Longpre, Shayne", "Yauney, Gregory", "Reif, Emily", "Lee, Katherine", "Roberts, Adam", "Zoph, Barret", "Zhou, Denny", "Wei, Jason", "Robinson, Kevin", "Mimno, David", "Ippolito, Daphne" ]
A Pretrainer's Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity
naacl-long.179
Poster
2305.13169
[ "" ]
https://huggingface.co/papers/2305.13169
1
3
0
11
1
[]
[ "togethercomputer/RedPajama-Data-V2", "ShivamPR21/RedPajama-Data-V2" ]
[]
https://aclanthology.org/2024.naacl-long.180.bib
https://aclanthology.org/2024.naacl-long.180/
@inproceedings{xu-etal-2024-instructional, title = "Instructional Fingerprinting of Large Language Models", author = "Xu, Jiashu and Wang, Fei and Ma, Mingyu and Koh, Pang Wei and Xiao, Chaowei and Chen, Muhao", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.180", doi = "10.18653/v1/2024.naacl-long.180", pages = "3277--3306", abstract = "The exorbitant cost of training Large language models (LLMs) from scratch makes it essential to fingerprint the models to protect intellectual property via ownership authentication and to ensure downstream users and developers comply with their license terms (eg restricting commercial use). In this study, we present a pilot study on LLM fingerprinting as a form of very lightweight instruction tuning. Model publisher specifies a confidential private key and implants it as an instruction backdoor that causes the LLM to generate specific text when the key is present. Results on 11 popularly-used LLMs showed that this approach is lightweight and does not affect the normal behavior of the model. It also prevents publisher overclaim, maintains robustness against fingerprint guessing and parameter-efficient training, and supports multi-stage fingerprinting akin to MIT License.", }
The exorbitant cost of training Large language models (LLMs) from scratch makes it essential to fingerprint the models to protect intellectual property via ownership authentication and to ensure downstream users and developers comply with their license terms (e.g., restricting commercial use). In this study, we present a pilot study on LLM fingerprinting as a form of very lightweight instruction tuning. The model publisher specifies a confidential private key and implants it as an instruction backdoor that causes the LLM to generate specific text when the key is present. Results on 11 popularly used LLMs showed that this approach is lightweight and does not affect the normal behavior of the model. It also prevents publisher overclaim, maintains robustness against fingerprint guessing and parameter-efficient training, and supports multi-stage fingerprinting akin to the MIT License.
[ "Xu, Jiashu", "Wang, Fei", "Ma, Mingyu", "Koh, Pang Wei", "Xiao, Chaowei", "Chen, Muhao" ]
Instructional Fingerprinting of Large Language Models
naacl-long.180
Oral
2401.12255
[ "https://github.com/cnut1648/Model-Fingerprint" ]
https://huggingface.co/papers/2401.12255
2
1
0
6
1
[]
[]
[]
https://aclanthology.org/2024.naacl-long.181.bib
https://aclanthology.org/2024.naacl-long.181/
@inproceedings{salkhordeh-ziabari-etal-2024-reinforced, title = "Reinforced Multiple Instance Selection for Speaker Attribute Prediction", author = "Salkhordeh Ziabari, Alireza and Omrani, Ali and Hejabi, Parsa and Golazizian, Preni and Kennedy, Brendan and Piray, Payam and Dehghani, Morteza", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.181", doi = "10.18653/v1/2024.naacl-long.181", pages = "3307--3321", abstract = "Language usage is related to speaker age, gender, moral concerns, political ideology, and other attributes. Current state-of-the-art methods for predicting these attributes take a speaker{'}s utterances as input and provide a prediction per speaker attribute. Most of these approaches struggle to handle a large number of utterances per speaker. This difficulty is primarily due to the computational constraints of the models. Additionally, only a subset of speaker utterances may be relevant to specific attributes. In this paper, we formulate speaker attribute prediction as a Multiple Instance Learning (MIL) problem and propose RL-MIL, a novel approach based on Reinforcement Learning (RL) that effectively addresses both of these challenges. Our experiments demonstrate that our RL-based methodology consistently outperforms previous approaches across a range of related tasks: predicting speakers{'} psychographics and demographics from social media posts, and political ideologies from transcribed speeches. We create synthetic datasets and investigate the behavior of RL-MIL systematically. Our results show the success of RL-MIL in improving speaker attribute prediction by learning to select relevant speaker utterances.", }
Language usage is related to speaker age, gender, moral concerns, political ideology, and other attributes. Current state-of-the-art methods for predicting these attributes take a speaker's utterances as input and provide a prediction per speaker attribute. Most of these approaches struggle to handle a large number of utterances per speaker. This difficulty is primarily due to the computational constraints of the models. Additionally, only a subset of speaker utterances may be relevant to specific attributes. In this paper, we formulate speaker attribute prediction as a Multiple Instance Learning (MIL) problem and propose RL-MIL, a novel approach based on Reinforcement Learning (RL) that effectively addresses both of these challenges. Our experiments demonstrate that our RL-based methodology consistently outperforms previous approaches across a range of related tasks: predicting speakers' psychographics and demographics from social media posts, and political ideologies from transcribed speeches. We create synthetic datasets and investigate the behavior of RL-MIL systematically. Our results show the success of RL-MIL in improving speaker attribute prediction by learning to select relevant speaker utterances.
[ "Salkhordeh Ziabari, Alireza", "Omrani, Ali", "Hejabi, Parsa", "Golazizian, Preni", "Kennedy, Brendan", "Piray, Payam", "Dehghani, Morteza" ]
Reinforced Multiple Instance Selection for Speaker Attribute Prediction
naacl-long.181
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.182.bib
https://aclanthology.org/2024.naacl-long.182/
@inproceedings{tuli-etal-2024-dynamo, title = "{D}yna{M}o: Accelerating Language Model Inference with Dynamic Multi-Token Sampling", author = "Tuli, Shikhar and Lin, Chi-Heng and Hsu, Yen-Chang and Jha, Niraj and Shen, Yilin and Jin, Hongxia", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.182", doi = "10.18653/v1/2024.naacl-long.182", pages = "3322--3345", abstract = "Traditional language models operate autoregressively, i.e., they predict one token at a time. Rapid explosion in model sizes has resulted in high inference times. In this work, we propose DynaMo, a suite of multi-token prediction language models that reduce net inference times. Our models *dynamically* predict multiple tokens based on their confidence in the predicted joint probability distribution. We propose a lightweighttechnique to train these models, leveraging the weights of traditional autoregressive counterparts. Moreover, we propose novel ways to enhance the estimated joint probability to improve text generation quality, namely co-occurrence weighted masking and adaptive thresholding. We also propose systematic qualitative and quantitative methods to rigorously test the quality of generated text for non-autoregressive generation. One of the models in our suite, DynaMo-7.3B-T3, achieves same-quality generated text as the baseline (Pythia-6.9B) while achieving 2.57$\times$ speed-up with only 5.87{\%} and 2.67{\%} parameter and training time overheads, respectively.", }
Traditional language models operate autoregressively, i.e., they predict one token at a time. Rapid explosion in model sizes has resulted in high inference times. In this work, we propose DynaMo, a suite of multi-token prediction language models that reduce net inference times. Our models *dynamically* predict multiple tokens based on their confidence in the predicted joint probability distribution. We propose a lightweight technique to train these models, leveraging the weights of traditional autoregressive counterparts. Moreover, we propose novel ways to enhance the estimated joint probability to improve text generation quality, namely co-occurrence weighted masking and adaptive thresholding. We also propose systematic qualitative and quantitative methods to rigorously test the quality of generated text for non-autoregressive generation. One of the models in our suite, DynaMo-7.3B-T3, achieves same-quality generated text as the baseline (Pythia-6.9B) while achieving 2.57× speed-up with only 5.87% and 2.67% parameter and training time overheads, respectively.
[ "Tuli, Shikhar", "Lin, Chi-Heng", "Hsu, Yen-Chang", "Jha, Niraj", "Shen, Yilin", "Jin, Hongxia" ]
DynaMo: Accelerating Language Model Inference with Dynamic Multi-Token Sampling
naacl-long.182
Poster
2405.00888
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
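The DynaMo abstract above describes emitting several tokens per step when the model is confident in its joint prediction. The sketch below shows only the confidence-gating loop, assuming a hypothetical multi_token_probs(context) helper that returns per-position next-token distributions; the actual models use extra prediction heads plus co-occurrence weighted masking, which are not reproduced here.

```python
import torch

def dynamic_decode_step(multi_token_probs, context, k=3, threshold=0.3):
    """Emit up to k tokens in one step, stopping once joint confidence drops.

    multi_token_probs(context) is assumed to return a (k, vocab_size) tensor of
    probability vectors for the next k positions (an illustrative interface).
    """
    probs = multi_token_probs(context)
    accepted = []
    joint_p = 1.0
    for i in range(k):
        p, tok = probs[i].max(dim=-1)
        joint_p *= p.item()              # confidence in emitting all tokens accepted so far
        if i > 0 and joint_p < threshold:
            break                        # fall back to fewer tokens when unsure
        accepted.append(tok.item())
    return accepted                      # at least one token is always emitted
```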
https://aclanthology.org/2024.naacl-long.183.bib
https://aclanthology.org/2024.naacl-long.183/
@inproceedings{liu-etal-2024-shot, title = "Few-shot Knowledge Graph Relational Reasoning via Subgraph Adaptation", author = "Liu, Haochen and Wang, Song and Chen, Chen and Li, Jundong", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.183", doi = "10.18653/v1/2024.naacl-long.183", pages = "3346--3356", abstract = "Few-shot Knowledge Graph (KG) Relational Reasoning aims to predict unseen triplets (i.e., query triplets) for rare relations in KGs, given only several triplets of these relations as references (i.e., support triplets). This task has gained significant traction due to the widespread use of knowledge graphs in various natural language processing applications. Previous approaches have utilized meta-training methods and manually constructed meta-relation sets to tackle this task. Recent efforts have focused on edge-mask-based methods, which exploit the structure of the contextualized graphs of target triplets (i.e., a subgraph containing relevant triplets in the KG). However, existing edge-mask-based methods have limitations in extracting insufficient information from KG and are highly influenced by spurious information in KG. To overcome these challenges, we propose SAFER (Subgraph Adaptation for Few-shot Relational Reasoning), a novel approach that effectively adapts the information in contextualized graphs to various subgraphs generated from support and query triplets to perform the prediction. Specifically, SAFER enables the extraction of more comprehensive information from support triplets while minimizing the impact of spurious information when predicting query triplets. Experimental results on three prevalent datasets demonstrate the superiority of our proposed framework SAFER.", }
Few-shot Knowledge Graph (KG) Relational Reasoning aims to predict unseen triplets (i.e., query triplets) for rare relations in KGs, given only several triplets of these relations as references (i.e., support triplets). This task has gained significant traction due to the widespread use of knowledge graphs in various natural language processing applications. Previous approaches have utilized meta-training methods and manually constructed meta-relation sets to tackle this task. Recent efforts have focused on edge-mask-based methods, which exploit the structure of the contextualized graphs of target triplets (i.e., a subgraph containing relevant triplets in the KG). However, existing edge-mask-based methods extract insufficient information from the KG and are highly influenced by spurious information in the KG. To overcome these challenges, we propose SAFER (Subgraph Adaptation for Few-shot Relational Reasoning), a novel approach that effectively adapts the information in contextualized graphs to various subgraphs generated from support and query triplets to perform the prediction. Specifically, SAFER enables the extraction of more comprehensive information from support triplets while minimizing the impact of spurious information when predicting query triplets. Experimental results on three prevalent datasets demonstrate the superiority of our proposed framework SAFER.
[ "Liu, Haochen", "Wang, Song", "Chen, Chen", "Li, Jundong" ]
Few-shot Knowledge Graph Relational Reasoning via Subgraph Adaptation
naacl-long.183
Poster
2406.15507
[ "https://github.com/HaochenLiu2000/SAFER" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.184.bib
https://aclanthology.org/2024.naacl-long.184/
@inproceedings{ling-etal-2024-uncertainty, title = "Uncertainty Quantification for In-Context Learning of Large Language Models", author = "Ling, Chen and Zhao, Xujiang and Zhang, Xuchao and Cheng, Wei and Liu, Yanchi and Sun, Yiyou and Oishi, Mika and Osaki, Takao and Matsuda, Katsushi and Ji, Jie and Bai, Guangji and Zhao, Liang and Chen, Haifeng", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.184", doi = "10.18653/v1/2024.naacl-long.184", pages = "3357--3370", abstract = "In-context learning has emerged as a groundbreaking ability of Large Language Models (LLMs) and revolutionized various fields by providing a few task-relevant demonstrations in the prompt. However, trustworthy issues with LLM{'}s response, such as hallucination, have also been actively discussed. Existing works have been devoted to quantifying the uncertainty in LLM{'}s response, but they often overlook the complex nature of LLMs and the uniqueness of in-context learning. In this work, we delve into the predictive uncertainty of LLMs associated with in-context learning, highlighting that such uncertainties may stem from both the provided demonstrations (aleatoric uncertainty) and ambiguities tied to the model{'}s configurations (epistemic uncertainty). We propose a novel formulation and corresponding estimation method to quantify both types of uncertainties. The proposed method offers an unsupervised way to understand the prediction of in-context learning in a plug-and-play fashion. Extensive experiments are conducted to demonstrate the effectiveness of the decomposition. The code and data are available at: https://github.com/lingchen0331/UQ{\_}ICL.", }
In-context learning has emerged as a groundbreaking ability of Large Language Models (LLMs) and revolutionized various fields by providing a few task-relevant demonstrations in the prompt. However, trustworthiness issues with LLM responses, such as hallucination, have also been actively discussed. Existing works have been devoted to quantifying the uncertainty in LLM responses, but they often overlook the complex nature of LLMs and the uniqueness of in-context learning. In this work, we delve into the predictive uncertainty of LLMs associated with in-context learning, highlighting that such uncertainties may stem from both the provided demonstrations (aleatoric uncertainty) and ambiguities tied to the model{'}s configurations (epistemic uncertainty). We propose a novel formulation and corresponding estimation method to quantify both types of uncertainties. The proposed method offers an unsupervised way to understand the prediction of in-context learning in a plug-and-play fashion. Extensive experiments are conducted to demonstrate the effectiveness of the decomposition. The code and data are available at: https://github.com/lingchen0331/UQ{\_}ICL.
[ "Ling, Chen", "Zhao, Xujiang", "Zhang, Xuchao", "Cheng, Wei", "Liu, Yanchi", "Sun, Yiyou", "Oishi, Mika", "Osaki, Takao", "Matsuda, Katsushi", "Ji, Jie", "Bai, Guangji", "Zhao, Liang", "Chen, Haifeng" ]
Uncertainty Quantification for In-Context Learning of Large Language Models
naacl-long.184
Poster
2402.10189
[ "https://github.com/lingchen0331/uq_icl" ]
-1
-1
-1
-1
0
[]
[]
[]
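The UQ_ICL entry above splits the predictive uncertainty of in-context learning into an aleatoric part (tied to the demonstrations) and an epistemic part (tied to the model's configuration). As a rough illustration of that idea only, the sketch below applies the generic entropy-based decomposition commonly used for such splits; it is an editor's hedged example, not the paper's exact formulation, and the toy probabilities are made up.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of (a batch of) categorical distributions."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(probs):
    """probs: (n_samples, n_classes) array; each row is the predictive
    distribution obtained under one sampled setting (e.g. one demonstration
    set or one decoding configuration).

    Standard mutual-information decomposition:
      total     = H(E[p])   # entropy of the averaged prediction
      aleatoric = E[H(p)]   # average entropy of the individual predictions
      epistemic = total - aleatoric
    """
    mean_p = probs.mean(axis=0)
    total = float(entropy(mean_p))
    aleatoric = float(entropy(probs).mean())
    return total, aleatoric, total - aleatoric

# Toy usage: five sampled runs over a three-way label space.
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.8, 0.1, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
])
print(decompose_uncertainty(probs))
```

Each row of `probs` would come from one sampled demonstration set or model configuration; the gap between total and aleatoric entropy is then read as the epistemic part.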
https://aclanthology.org/2024.naacl-long.185.bib
https://aclanthology.org/2024.naacl-long.185/
@inproceedings{wang-etal-2024-helpsteer, title = "{H}elp{S}teer: Multi-attribute Helpfulness Dataset for {S}teer{LM}", author = "Wang, Zhilin and Dong, Yi and Zeng, Jiaqi and Adams, Virginia and Sreedhar, Makesh Narsimhan and Egert, Daniel and Delalleau, Olivier and Scowcroft, Jane and Kant, Neel and Swope, Aidan and Kuchaiev, Oleksii", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.185", doi = "10.18653/v1/2024.naacl-long.185", pages = "3371--3384", abstract = "Existing open-source helpfulness preference datasets do not specify what makes some responses more helpful and others less so. Models trained on these datasets can incidentally learn to model dataset artifacts (e.g. preferring longer but unhelpful responses only due to their length). To alleviate this problem, we collect HelpSteer, a multi-attribute helpfulness dataset annotated for the various aspects that make responses helpful. Specifically, our 37k-sample dataset has annotations for correctness, coherence, complexity, and verbosity in addition to overall helpfulness of responses. Training Llama 2 70B using the HelpSteer dataset with SteerLM technique produces a model that scores 7.54 on MT Bench, which is currently the highest score for open models that do not require training data from more powerful models (e.g. GPT-4). We release this dataset with CC-BY-4.0 license at https://huggingface.co/datasets/nvidia/HelpSteer", }
Existing open-source helpfulness preference datasets do not specify what makes some responses more helpful and others less so. Models trained on these datasets can incidentally learn to model dataset artifacts (e.g. preferring longer but unhelpful responses only due to their length). To alleviate this problem, we collect HelpSteer, a multi-attribute helpfulness dataset annotated for the various aspects that make responses helpful. Specifically, our 37k-sample dataset has annotations for correctness, coherence, complexity, and verbosity in addition to overall helpfulness of responses. Training Llama 2 70B using the HelpSteer dataset with the SteerLM technique produces a model that scores 7.54 on MT Bench, which is currently the highest score for open models that do not require training data from more powerful models (e.g. GPT-4). We release this dataset under a CC-BY-4.0 license at https://huggingface.co/datasets/nvidia/HelpSteer
[ "Wang, Zhilin", "Dong, Yi", "Zeng, Jiaqi", "Adams, Virginia", "Sreedhar, Makesh Narsimhan", "Egert, Daniel", "Delalleau, Olivier", "Scowcroft, Jane", "Kant, Neel", "Swope, Aidan", "Kuchaiev, Oleksii" ]
HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM
naacl-long.185
Poster
2311.09528
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
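Since the HelpSteer entry above releases its data on the Hugging Face Hub, a quick way to inspect the attribute annotations is the `datasets` library. The dataset ID comes from the URL in the abstract; the exact column names are assumed from the attributes it lists, so the snippet guards against them being named differently.

```python
from datasets import load_dataset

# Load the released dataset directly from the Hub (ID taken from the abstract).
ds = load_dataset("nvidia/HelpSteer", split="train")
print(ds.column_names)

# Mean score per annotated attribute; the column names below are assumed from
# the attributes listed in the abstract, so we check before using them.
for attr in ["helpfulness", "correctness", "coherence", "complexity", "verbosity"]:
    if attr in ds.column_names:
        values = ds[attr]
        print(attr, sum(values) / len(values))
```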
https://aclanthology.org/2024.naacl-long.186.bib
https://aclanthology.org/2024.naacl-long.186/
@inproceedings{zhu-etal-2024-preference, title = "A Preference-driven Paradigm for Enhanced Translation with Large Language Models", author = "Zhu, Dawei and Trenous, Sony and Shen, Xiaoyu and Klakow, Dietrich and Byrne, Bill and Hasler, Eva", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.186", doi = "10.18653/v1/2024.naacl-long.186", pages = "3385--3403", abstract = "Recent research has shown that large language models (LLMs) can achieve remarkable translation performance through supervised fine-tuning (SFT) using only a small amount of parallel data. However, SFT simply instructs the model to imitate the reference translations at the token level, making it vulnerable to the noise present in the references. Hence, the assistance from SFT often reaches a plateau once the LLMs have achieved a certain level of translation capability, and further increasing the size of parallel data does not provide additional benefits. To overcome this plateau associated with imitation-based SFT, we propose a preference-based approach built upon the Plackett-Luce model. The objective is to steer LLMs towards a more nuanced understanding of translation preferences from a holistic view, while also being more resilient in the absence of gold translations. We further build a dataset named MAPLE to verify the effectiveness of our approach, which includes multiple translations of varying quality for each source sentence. Extensive experiments demonstrate the superiority of our approach in {``}breaking the plateau{''} across diverse LLMs and test settings. Our in-depth analysis underscores the pivotal role of diverse translations and accurate preference scores in the success of our approach.", }
Recent research has shown that large language models (LLMs) can achieve remarkable translation performance through supervised fine-tuning (SFT) using only a small amount of parallel data. However, SFT simply instructs the model to imitate the reference translations at the token level, making it vulnerable to the noise present in the references. Hence, the assistance from SFT often reaches a plateau once the LLMs have achieved a certain level of translation capability, and further increasing the size of parallel data does not provide additional benefits. To overcome this plateau associated with imitation-based SFT, we propose a preference-based approach built upon the Plackett-Luce model. The objective is to steer LLMs towards a more nuanced understanding of translation preferences from a holistic view, while also being more resilient in the absence of gold translations. We further build a dataset named MAPLE to verify the effectiveness of our approach, which includes multiple translations of varying quality for each source sentence. Extensive experiments demonstrate the superiority of our approach in {``}breaking the plateau{''} across diverse LLMs and test settings. Our in-depth analysis underscores the pivotal role of diverse translations and accurate preference scores in the success of our approach.
[ "Zhu, Dawei", "Trenous, Sony", "Shen, Xiaoyu", "Klakow, Dietrich", "Byrne, Bill", "Hasler, Eva" ]
A Preference-driven Paradigm for Enhanced Translation with Large Language Models
naacl-long.186
Oral
2404.11288
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
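The preference-driven translation entry above builds its objective on the Plackett-Luce model over multiple translations of varying quality. The sketch below shows a generic Plackett-Luce negative log-likelihood over candidate scores, assuming the candidates are already sorted by a gold preference ordering; it illustrates the kind of objective involved, not the authors' exact training recipe.

```python
import torch

def plackett_luce_nll(scores: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of an ordering under the Plackett-Luce model.
    `scores` holds model scores for the candidates, already sorted from most
    to least preferred:  P(order) = prod_i exp(s_i) / sum_{j >= i} exp(s_j)."""
    nll = scores.new_zeros(())
    for i in range(scores.shape[0]):
        nll = nll - (scores[i] - torch.logsumexp(scores[i:], dim=0))
    return nll

# Toy usage: four candidate translations of one source sentence, with scores
# listed in the gold preference order (best first).
scores = torch.tensor([2.1, 1.3, 0.4, -0.5], requires_grad=True)
loss = plackett_luce_nll(scores)
loss.backward()
print(float(loss), scores.grad)
```

Minimizing this loss pushes the model to score preferred translations above less preferred ones as a whole ordering, rather than imitating a single reference token by token.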
https://aclanthology.org/2024.naacl-long.187.bib
https://aclanthology.org/2024.naacl-long.187/
@inproceedings{zhang-etal-2024-fair, title = "Fair Abstractive Summarization of Diverse Perspectives", author = "Zhang, Yusen and Zhang, Nan and Liu, Yixin and Fabbri, Alexander and Liu, Junru and Kamoi, Ryo and Lu, Xiaoxin and Xiong, Caiming and Zhao, Jieyu and Radev, Dragomir and McKeown, Kathleen and Zhang, Rui", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.187", doi = "10.18653/v1/2024.naacl-long.187", pages = "3404--3426", abstract = "People from different social and demographic groups express diverse perspectives and conflicting opinions on a broad set of topics such as product reviews, healthcare, law, and politics. A fair summary should provide a comprehensive coverage of diverse perspectives without underrepresenting certain groups. However, current work in summarization metrics and Large Language Models (LLMs) evaluation has not explored fair abstractive summarization. In this paper, we systematically investigate fair abstractive summarization for user-generated data. We first formally define fairness in abstractive summarization as not underrepresenting perspectives of any groups of people, and we propose four reference-free automatic metrics by measuring the differences between target and source perspectives. We evaluate nine LLMs, including three GPT models, four LLaMA models, PaLM 2, and Claude, on six datasets collected from social media, online reviews, and recorded transcripts. Experiments show that both the model-generated and the human-written reference summaries suffer from low fairness. We conduct a comprehensive analysis of the common factors influencing fairness and propose three simple but effective methods to alleviate unfair summarization. Our dataset and code are available at https://github.com/psunlpgroup/FairSumm.", }
People from different social and demographic groups express diverse perspectives and conflicting opinions on a broad set of topics such as product reviews, healthcare, law, and politics. A fair summary should provide a comprehensive coverage of diverse perspectives without underrepresenting certain groups. However, current work in summarization metrics and Large Language Models (LLMs) evaluation has not explored fair abstractive summarization. In this paper, we systematically investigate fair abstractive summarization for user-generated data. We first formally define fairness in abstractive summarization as not underrepresenting perspectives of any groups of people, and we propose four reference-free automatic metrics by measuring the differences between target and source perspectives. We evaluate nine LLMs, including three GPT models, four LLaMA models, PaLM 2, and Claude, on six datasets collected from social media, online reviews, and recorded transcripts. Experiments show that both the model-generated and the human-written reference summaries suffer from low fairness. We conduct a comprehensive analysis of the common factors influencing fairness and propose three simple but effective methods to alleviate unfair summarization. Our dataset and code are available at https://github.com/psunlpgroup/FairSumm.
[ "Zhang, Yusen", "Zhang, Nan", "Liu, Yixin", "Fabbri, Alexander", "Liu, Junru", "Kamoi, Ryo", "Lu, Xiaoxin", "Xiong, Caiming", "Zhao, Jieyu", "Radev, Dragomir", "McKeown, Kathleen", "Zhang, Rui" ]
Fair Abstractive Summarization of Diverse Perspectives
naacl-long.187
Poster
2311.07884
[ "https://github.com/psunlpgroup/fairsumm" ]
-1
-1
-1
-1
0
[]
[]
[]
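The fair-summarization entry above proposes reference-free metrics that compare target and source perspectives. As a toy illustration of that general idea (not the paper's four metrics), the sketch below measures the Jensen-Shannon distance between the perspective-group distribution of the source sentences and that of the summary; the keyword-based group assignment is a stand-in for a real perspective or demographic-group classifier.

```python
from collections import Counter
from scipy.spatial.distance import jensenshannon

def perspective_gap(source_sents, summary_sents, group_of):
    """Distance between the perspective-group distribution of the source
    documents and that of the summary; smaller means better coverage."""
    groups = sorted({group_of(s) for s in source_sents + summary_sents})

    def distribution(sents):
        counts = Counter(group_of(s) for s in sents)
        total = sum(counts.values()) or 1
        return [counts[g] / total for g in groups]

    return jensenshannon(distribution(source_sents), distribution(summary_sents))

# Toy usage with a trivial keyword-based "perspective" assignment.
group_of = lambda s: "positive" if "love" in s.lower() else "negative"
source = ["I love this phone", "Battery life is awful",
          "I love the screen", "Support was terrible"]
summary = ["Reviewers love the phone"]
print(perspective_gap(source, summary, group_of))
```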
https://aclanthology.org/2024.naacl-long.188.bib
https://aclanthology.org/2024.naacl-long.188/
@inproceedings{tiong-etal-2024-measuring, title = "What Are We Measuring When We Evaluate Large Vision-Language Models? An Analysis of Latent Factors and Biases", author = "Tiong, Anthony and Zhao, Junqi and Li, Boyang and Li, Junnan and Hoi, Steven and Xiong, Caiming", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.188", doi = "10.18653/v1/2024.naacl-long.188", pages = "3427--3454", abstract = "Vision-language (VL) models, pretrained on colossal image-text datasets, have attained broad VL competence that is difficult to evaluate. A common belief is that a small number of VL skills underlie the variety of VL tests. In this paper, we perform a large-scale transfer learning experiment aimed at discovering latent VL skills from data. We reveal interesting characteristics that have important implications for test suite design. First, generation tasks suffer from a length bias, suggesting benchmarks should balance tasks with varying output lengths. Second, we demonstrate that factor analysis successfully identifies reasonable yet surprising VL skill factors, suggesting benchmarks could leverage similar analyses for task selection.Finally, we present a new dataset, OLIVE$^1$, which simulates user instructions in the wild and presents challenges dissimilar to all datasets we tested. Our findings contribute to the design of balanced and broad-coverage vision-language evaluation methods. $^1$https://github.com/jq-zh/olive-dataset", }
Vision-language (VL) models, pretrained on colossal image-text datasets, have attained broad VL competence that is difficult to evaluate. A common belief is that a small number of VL skills underlie the variety of VL tests. In this paper, we perform a large-scale transfer learning experiment aimed at discovering latent VL skills from data. We reveal interesting characteristics that have important implications for test suite design. First, generation tasks suffer from a length bias, suggesting benchmarks should balance tasks with varying output lengths. Second, we demonstrate that factor analysis successfully identifies reasonable yet surprising VL skill factors, suggesting benchmarks could leverage similar analyses for task selection. Finally, we present a new dataset, OLIVE$^1$, which simulates user instructions in the wild and presents challenges dissimilar to all datasets we tested. Our findings contribute to the design of balanced and broad-coverage vision-language evaluation methods. $^1$https://github.com/jq-zh/olive-dataset
[ "Tiong, Anthony", "Zhao, Junqi", "Li, Boyang", "Li, Junnan", "Hoi, Steven", "Xiong, Caiming" ]
What Are We Measuring When We Evaluate Large Vision-Language Models? An Analysis of Latent Factors and Biases
naacl-long.188
Poster
2404.02415
[ "https://github.com/jq-zh/olive-dataset" ]
-1
-1
-1
-1
0
[]
[]
[]
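The latent-factor entry above applies factor analysis to recover a small set of vision-language skills underlying many tasks. The toy sketch below runs scikit-learn's FactorAnalysis on a synthetic (models x tasks) score matrix with a known two-factor structure, just to show the shape of the analysis; the paper's actual setup is a large-scale transfer-learning experiment, and all numbers here are synthetic.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic (models x tasks) benchmark matrix with a known two-factor structure.
rng = np.random.default_rng(0)
n_models, n_tasks, n_skills = 12, 8, 2
skills = rng.normal(size=(n_models, n_skills))      # latent per-model abilities
loadings = rng.normal(size=(n_skills, n_tasks))     # how each task loads on each skill
scores = skills @ loadings + 0.1 * rng.normal(size=(n_models, n_tasks))

fa = FactorAnalysis(n_components=n_skills, random_state=0)
fa.fit(scores)
# Rows of components_ are the recovered "skill" factors; large-magnitude entries
# indicate which tasks a given factor explains.
print(fa.components_.shape)   # (2, 8)
```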
https://aclanthology.org/2024.naacl-long.189.bib
https://aclanthology.org/2024.naacl-long.189/
@inproceedings{lourie-etal-2024-show, title = "Show Your Work with Confidence: Confidence Bands for Tuning Curves", author = "Lourie, Nicholas and Cho, Kyunghyun and He, He", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.189", doi = "10.18653/v1/2024.naacl-long.189", pages = "3455--3472", abstract = "The choice of hyperparameters greatly impacts performance in natural language processing. Often, it is hard to tell if a method is better than another or just better tuned. *Tuning curves* fix this ambiguity by accounting for tuning effort. Specifically, they plot validation performance as a function of the number of hyperparameter choices tried so far. While several estimators exist for these curves, it is common to use point estimates, which we show fail silently and give contradictory results when given too little data.Beyond point estimates, *confidence bands* are necessary to rigorously establish the relationship between different approaches. We present the first method to construct valid confidence bands for tuning curves. The bands are exact, simultaneous, and distribution-free, thus they provide a robust basis for comparing methods.Empirical analysis shows that while bootstrap confidence bands, which serve as a baseline, fail to approximate their target confidence, ours achieve it exactly. We validate our design with ablations, analyze the effect of sample size, and provide guidance on comparing models with our method. To promote confident comparisons in future work, we release opda: an easy-to-use library that you can install with pip. https://github.com/nicholaslourie/opda", }
The choice of hyperparameters greatly impacts performance in natural language processing. Often, it is hard to tell if a method is better than another or just better tuned. *Tuning curves* fix this ambiguity by accounting for tuning effort. Specifically, they plot validation performance as a function of the number of hyperparameter choices tried so far. While several estimators exist for these curves, it is common to use point estimates, which we show fail silently and give contradictory results when given too little data. Beyond point estimates, *confidence bands* are necessary to rigorously establish the relationship between different approaches. We present the first method to construct valid confidence bands for tuning curves. The bands are exact, simultaneous, and distribution-free, and thus they provide a robust basis for comparing methods. Empirical analysis shows that while bootstrap confidence bands, which serve as a baseline, fail to approximate their target confidence, ours achieve it exactly. We validate our design with ablations, analyze the effect of sample size, and provide guidance on comparing models with our method. To promote confident comparisons in future work, we release opda: an easy-to-use library that you can install with pip. https://github.com/nicholaslourie/opda
[ "Lourie, Nicholas", "Cho, Kyunghyun", "He, He" ]
Show Your Work with Confidence: Confidence Bands for Tuning Curves
naacl-long.189
Poster
2311.09480
[ "https://github.com/nicholaslourie/opda" ]
-1
-1
-1
-1
0
[]
[]
[]
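The tuning-curve entry above is about confidence bands; the released `opda` library implements them, and its API is not reproduced here. The sketch below only shows the underlying point estimate that a tuning curve plots (expected best validation score after k random hyperparameter draws), computed with a simple resampling estimator over observed scores; the paper's point is that such point estimates alone can mislead, which is exactly why the bands matter.

```python
import numpy as np

def tuning_curve(scores, max_k=None, n_resamples=2000, seed=0):
    """Point estimate of a tuning curve: expected best validation score after
    k random hyperparameter draws, for k = 1..max_k, estimated by resampling
    with replacement from the observed scores."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    max_k = max_k or len(scores)
    curve = []
    for k in range(1, max_k + 1):
        draws = rng.choice(scores, size=(n_resamples, k), replace=True)
        curve.append(draws.max(axis=1).mean())
    return np.array(curve)

# Validation scores from 8 random hyperparameter configurations (made up).
scores = [0.71, 0.74, 0.69, 0.80, 0.77, 0.73, 0.79, 0.75]
print(tuning_curve(scores, max_k=5))
```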
https://aclanthology.org/2024.naacl-long.190.bib
https://aclanthology.org/2024.naacl-long.190/
@inproceedings{prabhakaran-etal-2024-grasp, title = "{GRASP}: A Disagreement Analysis Framework to Assess Group Associations in Perspectives", author = "Prabhakaran, Vinodkumar and Homan, Christopher and Aroyo, Lora and Mostafazadeh Davani, Aida and Parrish, Alicia and Taylor, Alex and Diaz, Mark and Wang, Ding and Serapio-Garc{\'\i}a, Gregory", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.190", doi = "10.18653/v1/2024.naacl-long.190", pages = "3473--3492", abstract = "Human annotation plays a core role in machine learning {---} annotations for supervised models, safety guardrails for generative models, and human feedback for reinforcement learning, to cite a few avenues. However, the fact that many of these human annotations are inherently subjective is often overlooked. Recent work has demonstrated that ignoring rater subjectivity (typically resulting in rater disagreement) is problematic within specific tasks and for specific subgroups. Generalizable methods to harness rater disagreement and thus understand the socio-cultural leanings of subjective tasks remain elusive. In this paper, we propose GRASP, a comprehensive disagreement analysis framework to measure group association in perspectives among different rater subgroups, and demonstrate its utility in assessing the extent of systematic disagreements in two datasets: (1) safety annotations of human-chatbot conversations, and (2) offensiveness annotations of social media posts, both annotated by diverse rater pools across different socio-demographic axes. Our framework (based on disagreement metrics) reveals specific rater groups that have significantly different perspectives than others on certain tasks, and helps identify demographic axes that are crucial to consider in specific task contexts.", }
Human annotation plays a core role in machine learning {---} annotations for supervised models, safety guardrails for generative models, and human feedback for reinforcement learning, to cite a few avenues. However, the fact that many of these human annotations are inherently subjective is often overlooked. Recent work has demonstrated that ignoring rater subjectivity (typically resulting in rater disagreement) is problematic within specific tasks and for specific subgroups. Generalizable methods to harness rater disagreement and thus understand the socio-cultural leanings of subjective tasks remain elusive. In this paper, we propose GRASP, a comprehensive disagreement analysis framework to measure group association in perspectives among different rater subgroups, and demonstrate its utility in assessing the extent of systematic disagreements in two datasets: (1) safety annotations of human-chatbot conversations, and (2) offensiveness annotations of social media posts, both annotated by diverse rater pools across different socio-demographic axes. Our framework (based on disagreement metrics) reveals specific rater groups that have significantly different perspectives than others on certain tasks, and helps identify demographic axes that are crucial to consider in specific task contexts.
[ "Prabhakaran, Vinodkumar", "Homan, Christopher", "Aroyo, Lora", "Mostafazadeh Davani, Aida", "Parrish, Alicia", "Taylor, Alex", "Diaz, Mark", "Wang, Ding", "Serapio-Garc{\\'\\i}a, Gregory" ]
GRASP: A Disagreement Analysis Framework to Assess Group Associations in Perspectives
naacl-long.190
Poster
2311.05074
[ "" ]
https://huggingface.co/papers/2311.05074
2
0
0
7
1
[]
[]
[]
https://aclanthology.org/2024.naacl-long.191.bib
https://aclanthology.org/2024.naacl-long.191/
@inproceedings{sun-etal-2024-event, title = "Event Causality Is Key to Computational Story Understanding", author = "Sun, Yidan and Chao, Qin and Li, Boyang", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.191", doi = "10.18653/v1/2024.naacl-long.191", pages = "3493--3511", abstract = "Cognitive science and symbolic AI research suggest that event causality provides vital information for story understanding. However, machine learning systems for story understanding rarely employ event causality, partially due to the lack of methods that reliably identify open-world causal event relations. Leveraging recent progress in large language models, we present the first method for event causality identification that leads to material improvements in computational story understanding. Our technique sets a new state of the art on the COPES dataset (Wang et al., 2023c) for causal event relation identification. Further, in the downstream story quality evaluation task, the identified causal relations lead to 3.6-16.6{\%} relative improvement on correlation with human ratings. In the multimodal story video-text alignment task, we attain 4.1-10.9{\%} increase on Clip Accuracy and 4.2-13.5{\%} increase on Sentence IoU. The findings indicate substantial untapped potential for event causality in computational story understanding. The codebase is at https://github.com/insundaycathy/Event-Causality-Extraction.", }
Cognitive science and symbolic AI research suggest that event causality provides vital information for story understanding. However, machine learning systems for story understanding rarely employ event causality, partially due to the lack of methods that reliably identify open-world causal event relations. Leveraging recent progress in large language models, we present the first method for event causality identification that leads to material improvements in computational story understanding. Our technique sets a new state of the art on the COPES dataset (Wang et al., 2023c) for causal event relation identification. Further, in the downstream story quality evaluation task, the identified causal relations lead to 3.6-16.6{\%} relative improvement on correlation with human ratings. In the multimodal story video-text alignment task, we attain 4.1-10.9{\%} increase on Clip Accuracy and 4.2-13.5{\%} increase on Sentence IoU. The findings indicate substantial untapped potential for event causality in computational story understanding. The codebase is at https://github.com/insundaycathy/Event-Causality-Extraction.
[ "Sun, Yidan", "Chao, Qin", "Li, Boyang" ]
Event Causality Is Key to Computational Story Understanding
naacl-long.191
Oral
2311.09648
[ "https://github.com/insundaycathy/event-causality-extraction" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.192.bib
https://aclanthology.org/2024.naacl-long.192/
@inproceedings{ishibashi-etal-2024-subspace, title = "Subspace Representations for Soft Set Operations and Sentence Similarities", author = "Ishibashi, Yoichi and Yokoi, Sho and Sudoh, Katsuhito and Nakamura, Satoshi", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.192", doi = "10.18653/v1/2024.naacl-long.192", pages = "3512--3524", abstract = "In the field of natural language processing (NLP), continuous vector representations are crucial for capturing the semantic meanings of individual words. Yet, when it comes to the representations of sets of words, the conventional vector-based approaches often struggle with expressiveness and lack the essential set operations such as union, intersection, and complement. Inspired by quantum logic, we realize the representation of word sets and corresponding set operations within pre-trained word embedding spaces. By grounding our approach in the linear subspaces, we enable efficient computation of various set operations and facilitate the soft computation of membership functions within continuous spaces. Moreover, we allow for the computation of the F-score directly within word vectors, thereby establishing a direct link to the assessment of sentence similarity. In experiments with widely-used pre-trained embeddings and benchmarks, we show that our subspace-based set operations consistently outperform vector-based ones in both sentence similarity and set retrieval tasks.", }
In the field of natural language processing (NLP), continuous vector representations are crucial for capturing the semantic meanings of individual words. Yet, when it comes to the representations of sets of words, the conventional vector-based approaches often struggle with expressiveness and lack the essential set operations such as union, intersection, and complement. Inspired by quantum logic, we realize the representation of word sets and corresponding set operations within pre-trained word embedding spaces. By grounding our approach in the linear subspaces, we enable efficient computation of various set operations and facilitate the soft computation of membership functions within continuous spaces. Moreover, we allow for the computation of the F-score directly within word vectors, thereby establishing a direct link to the assessment of sentence similarity. In experiments with widely-used pre-trained embeddings and benchmarks, we show that our subspace-based set operations consistently outperform vector-based ones in both sentence similarity and set retrieval tasks.
[ "Ishibashi, Yoichi", "Yokoi, Sho", "Sudoh, Katsuhito", "Nakamura, Satoshi" ]
Subspace Representations for Soft Set Operations and Sentence Similarities
naacl-long.192
Poster
2210.13034
[ "https://github.com/yoichi1484/subspace" ]
-1
-1
-1
-1
0
[]
[]
[]
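The subspace entry above represents a word set as a linear subspace of the embedding space and computes soft membership within it. The sketch below follows that general idea (an orthonormal basis from SVD, membership as the norm of a query's projection onto the subspace); the exact membership function and the soft set operations in the paper may differ, and the random vectors stand in for pre-trained embeddings.

```python
import numpy as np

def subspace_basis(word_vectors, rank=None):
    """Orthonormal basis of the span of the given word vectors (rows)."""
    # SVD of the (n_words x dim) matrix; the right singular vectors span the row space.
    _, s, vt = np.linalg.svd(np.asarray(word_vectors), full_matrices=False)
    r = rank or int(np.sum(s > 1e-10))
    return vt[:r]                      # shape (r, dim)

def soft_membership(query, basis):
    """Norm of the projection of the unit-normalized query onto the subspace,
    which lies in [0, 1]: 1 means the query is inside the set's subspace."""
    q = query / np.linalg.norm(query)
    proj = basis @ q                   # coordinates in the orthonormal basis
    return float(np.linalg.norm(proj))

rng = np.random.default_rng(0)
animal_vecs = rng.normal(size=(5, 50))            # stand-ins for word embeddings
query = animal_vecs[0] + 0.1 * rng.normal(size=50)
basis = subspace_basis(animal_vecs)
print(soft_membership(query, basis))
```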
https://aclanthology.org/2024.naacl-long.193.bib
https://aclanthology.org/2024.naacl-long.193/
@inproceedings{zhuang-etal-2024-heart, title = "My Heart Skipped a Beat! Recognizing Expressions of Embodied Emotion in Natural Language", author = "Zhuang, Yuan and Jiang, Tianyu and Riloff, Ellen", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.193", doi = "10.18653/v1/2024.naacl-long.193", pages = "3525--3537", abstract = "Humans frequently experience emotions. When emotions arise, they affect not only our mental state but can also change our physical state. For example, we often open our eyes wide when we are surprised, or clap our hands when we feel excited. Physical manifestations of emotions are referred to as embodied emotion in the psychology literature. From an NLP perspective, recognizing descriptions of physical movements or physiological responses associated with emotions is a type of implicit emotion recognition. Our work introduces a new task of recognizing expressions of embodied emotion in natural language. We create a dataset of sentences that contains 7,300 body part mentions with human annotations for embodied emotion. We develop a classification model for this task and present two methods to acquire weakly labeled instances of embodied emotion by extracting emotional manner expressions and by prompting a language model. Our experiments show that the weakly labeled data can train an effective classification model without gold data, and can also improve performance when combined with gold data. Our dataset is publicly available at https://github.com/yyzhuang1991/Embodied-Emotions.", }
Humans frequently experience emotions. When emotions arise, they affect not only our mental state but can also change our physical state. For example, we often open our eyes wide when we are surprised, or clap our hands when we feel excited. Physical manifestations of emotions are referred to as embodied emotion in the psychology literature. From an NLP perspective, recognizing descriptions of physical movements or physiological responses associated with emotions is a type of implicit emotion recognition. Our work introduces a new task of recognizing expressions of embodied emotion in natural language. We create a dataset of sentences that contains 7,300 body part mentions with human annotations for embodied emotion. We develop a classification model for this task and present two methods to acquire weakly labeled instances of embodied emotion by extracting emotional manner expressions and by prompting a language model. Our experiments show that the weakly labeled data can train an effective classification model without gold data, and can also improve performance when combined with gold data. Our dataset is publicly available at https://github.com/yyzhuang1991/Embodied-Emotions.
[ "Zhuang, Yuan", "Jiang, Tianyu", "Riloff, Ellen" ]
My Heart Skipped a Beat! Recognizing Expressions of Embodied Emotion in Natural Language
naacl-long.193
Oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.194.bib
https://aclanthology.org/2024.naacl-long.194/
@inproceedings{cai-etal-2024-low, title = "Low-Cost Generation and Evaluation of Dictionary Example Sentences", author = "Cai, Bill and Clarence, Ng and Liang, Daniel and Hotama, Shelvia", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.194", doi = "10.18653/v1/2024.naacl-long.194", pages = "3538--3549", abstract = "Dictionary example sentences play an important role in illustrating word definitions and usage, but manually creating quality sentences is challenging. Prior works have demonstrated that language models can be trained to generate example sentences. However, they relied on costly customized models and word sense datasets for generation and evaluation of their work. Rapid advancements in foundational models present the opportunity to create low-cost, zero-shot methods for the generation and evaluation of dictionary example sentences. We introduce a new automatic evaluation metric called OxfordEval that measures the win-rate of generated sentences against existing Oxford Dictionary sentences. OxfordEval shows high alignment with human judgments, enabling large-scale automated quality evaluation. We experiment with various LLMs and configurations to generate dictionary sentences across word classes. We complement this with a novel approach of using masked language models to identify and select sentences that best exemplify word meaning. The eventual model, FM-MLM, achieves over 85.1{\%} win rate against Oxford baseline sentences according to OxfordEval, compared to 39.8{\%} win rate for prior model-generated sentences.", }
Dictionary example sentences play an important role in illustrating word definitions and usage, but manually creating quality sentences is challenging. Prior works have demonstrated that language models can be trained to generate example sentences. However, they relied on costly customized models and word sense datasets for generation and evaluation of their work. Rapid advancements in foundational models present the opportunity to create low-cost, zero-shot methods for the generation and evaluation of dictionary example sentences. We introduce a new automatic evaluation metric called OxfordEval that measures the win-rate of generated sentences against existing Oxford Dictionary sentences. OxfordEval shows high alignment with human judgments, enabling large-scale automated quality evaluation. We experiment with various LLMs and configurations to generate dictionary sentences across word classes. We complement this with a novel approach of using masked language models to identify and select sentences that best exemplify word meaning. The eventual model, FM-MLM, achieves over 85.1{\%} win rate against Oxford baseline sentences according to OxfordEval, compared to 39.8{\%} win rate for prior model-generated sentences.
[ "Cai, Bill", "Clarence, Ng", "Liang, Daniel", "Hotama, Shelvia" ]
Low-Cost Generation and Evaluation of Dictionary Example Sentences
naacl-long.194
Poster
2404.06224
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.195.bib
https://aclanthology.org/2024.naacl-long.195/
@inproceedings{qiao-etal-2024-making, title = "Making Language Models Better Tool Learners with Execution Feedback", author = "Qiao, Shuofei and Gui, Honghao and Lv, Chengfei and Jia, Qianghuai and Chen, Huajun and Zhang, Ningyu", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.195", doi = "10.18653/v1/2024.naacl-long.195", pages = "3550--3568", abstract = "Tools serve as pivotal interfaces that enable humans to understand and reshape the environment. With the advent of foundation models, AI systems can utilize tools to expand their capabilities and interact with the real world. Existing tool learning methodologies, encompassing supervised fine-tuning and prompt engineering approaches, often induce large language models to utilize tools indiscriminately, as complex tasks often exceed their own competencies. However, introducing tools for simple tasks, which the models themselves can readily resolve, can inadvertently propagate errors rather than enhance performance. This leads to the research question: can we teach language models when and how to use tools? To meet this need, we propose Tool leaRning wIth exeCution fEedback (TRICE), a two-stage end-to-end framework that enables the model to continually learn through feedback derived from tool execution, thereby learning when and how to use tools effectively. Experimental results, backed by further analysis, show that TRICE can make the large language model selectively use tools by improving the accuracy of tool usage while enhancing insufficient tool learning and mitigating excessive reliance on tools.", }
Tools serve as pivotal interfaces that enable humans to understand and reshape the environment. With the advent of foundation models, AI systems can utilize tools to expand their capabilities and interact with the real world. Existing tool learning methodologies, encompassing supervised fine-tuning and prompt engineering approaches, often induce large language models to utilize tools indiscriminately, as complex tasks often exceed their own competencies. However, introducing tools for simple tasks, which the models themselves can readily resolve, can inadvertently propagate errors rather than enhance performance. This leads to the research question: can we teach language models when and how to use tools? To meet this need, we propose Tool leaRning wIth exeCution fEedback (TRICE), a two-stage end-to-end framework that enables the model to continually learn through feedback derived from tool execution, thereby learning when and how to use tools effectively. Experimental results, backed by further analysis, show that TRICE can make the large language model selectively use tools by improving the accuracy of tool usage while enhancing insufficient tool learning and mitigating excessive reliance on tools.
[ "Qiao, Shuofei", "Gui, Honghao", "Lv, Chengfei", "Jia, Qianghuai", "Chen, Huajun", "Zhang, Ningyu" ]
Making Language Models Better Tool Learners with Execution Feedback
naacl-long.195
Poster
2305.13068
[ "https://github.com/zjunlp/trice" ]
https://huggingface.co/papers/2305.13068
2
0
0
4
1
[]
[]
[]
https://aclanthology.org/2024.naacl-long.196.bib
https://aclanthology.org/2024.naacl-long.196/
@inproceedings{chen-etal-2024-complex, title = "Complex Claim Verification with Evidence Retrieved in the Wild", author = "Chen, Jifan and Kim, Grace and Sriram, Aniruddh and Durrett, Greg and Choi, Eunsol", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.196", doi = "10.18653/v1/2024.naacl-long.196", pages = "3569--3587", abstract = "Retrieving evidence to support or refute claims is a core part of automatic fact-checking. Prior work makes simplifying assumptions in retrieval that depart from real-world use cases: either no access to evidence, access to evidence curated by a human fact-checker, or access to evidence published after a claim was made. In this work, we present the first realistic pipeline to check real-world claims by retrieving raw evidence from the web. We restrict our retriever to only search documents available prior to the claim{'}s making, modeling the realistic scenario of emerging claims. Our pipeline includes five components: claim decomposition, raw document retrieval, fine-grained evidence retrieval, claim-focused summarization, and veracity judgment. We conduct experiments on complex political claims in the ClaimDecomp dataset and show that the aggregated evidence produced by our pipeline improves veracity judgments. Human evaluation finds the evidence summary produced by our system is reliable (it does not hallucinate information) and relevant to answering key questions about a claim, suggesting that it can assist fact-checkers even when it does not reflect a complete evidence set.", }
Retrieving evidence to support or refute claims is a core part of automatic fact-checking. Prior work makes simplifying assumptions in retrieval that depart from real-world use cases: either no access to evidence, access to evidence curated by a human fact-checker, or access to evidence published after a claim was made. In this work, we present the first realistic pipeline to check real-world claims by retrieving raw evidence from the web. We restrict our retriever to only search documents available prior to the claim{'}s making, modeling the realistic scenario of emerging claims. Our pipeline includes five components: claim decomposition, raw document retrieval, fine-grained evidence retrieval, claim-focused summarization, and veracity judgment. We conduct experiments on complex political claims in the ClaimDecomp dataset and show that the aggregated evidence produced by our pipeline improves veracity judgments. Human evaluation finds the evidence summary produced by our system is reliable (it does not hallucinate information) and relevant to answering key questions about a claim, suggesting that it can assist fact-checkers even when it does not reflect a complete evidence set.
[ "Chen, Jifan", "Kim, Grace", "Sriram, Aniruddh", "Durrett, Greg", "Choi, Eunsol" ]
Complex Claim Verification with Evidence Retrieved in the Wild
naacl-long.196
Oral
2305.11859
[ "https://github.com/jifan-chen/fact-checking-via-raw-evidence" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.197.bib
https://aclanthology.org/2024.naacl-long.197/
@inproceedings{wu-etal-2024-multimodal, title = "Multimodal Multi-loss Fusion Network for Sentiment Analysis", author = "Wu, Zehui and Gong, Ziwei and Koo, Jaywon and Hirschberg, Julia", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.197", doi = "10.18653/v1/2024.naacl-long.197", pages = "3588--3602", abstract = "This paper investigates the optimal selection and fusion of feature encoders across multiple modalities and combines these in one neural network to improve sentiment detection. We compare different fusion methods and examine the impact of multi-loss training within the multi-modality fusion network, identifying surprisingly important findings relating to subnet performance. We have also found that integrating context significantly enhances model performance. Our best model achieves state-of-the-art performance for three datasets (CMU-MOSI, CMU-MOSEI and CH-SIMS). These results suggest a roadmap toward an optimized feature selection and fusion approach for enhancing sentiment detection in neural networks.", }
This paper investigates the optimal selection and fusion of feature encoders across multiple modalities and combines these in one neural network to improve sentiment detection. We compare different fusion methods and examine the impact of multi-loss training within the multi-modality fusion network, identifying surprisingly important findings relating to subnet performance. We have also found that integrating context significantly enhances model performance. Our best model achieves state-of-the-art performance for three datasets (CMU-MOSI, CMU-MOSEI and CH-SIMS). These results suggest a roadmap toward an optimized feature selection and fusion approach for enhancing sentiment detection in neural networks.
[ "Wu, Zehui", "Gong, Ziwei", "Koo, Jaywon", "Hirschberg, Julia" ]
Multimodal Multi-loss Fusion Network for Sentiment Analysis
naacl-long.197
Oral
2308.00264
[ "https://github.com/zehuiwu/MMML" ]
-1
-1
-1
-1
0
[]
[]
[]
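The multi-loss fusion entry above trains per-modality subnets together with a fused prediction. The sketch below is a deliberately tiny PyTorch version of that pattern, with one loss per modality head plus a loss on the fused head, summed with illustrative weights; the encoders, feature dimensions, and loss weights are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiLossFusion(nn.Module):
    """Tiny two-modality fusion network with one prediction head per modality
    plus a head on the concatenated (fused) representation."""
    def __init__(self, text_dim, audio_dim, hidden=64, n_classes=3):
        super().__init__()
        self.text_enc = nn.Linear(text_dim, hidden)
        self.audio_enc = nn.Linear(audio_dim, hidden)
        self.text_head = nn.Linear(hidden, n_classes)
        self.audio_head = nn.Linear(hidden, n_classes)
        self.fused_head = nn.Linear(2 * hidden, n_classes)

    def forward(self, text, audio):
        t = torch.relu(self.text_enc(text))
        a = torch.relu(self.audio_enc(audio))
        fused = torch.cat([t, a], dim=-1)
        return self.text_head(t), self.audio_head(a), self.fused_head(fused)

def multi_loss(outputs, labels, weights=(0.3, 0.3, 1.0)):
    """Weighted sum of the per-modality losses and the fused-prediction loss."""
    ce = nn.CrossEntropyLoss()
    return sum(w * ce(logits, labels) for w, logits in zip(weights, outputs))

# Toy batch with random features standing in for pre-extracted modality encodings.
model = MultiLossFusion(text_dim=10, audio_dim=8)
text, audio = torch.randn(4, 10), torch.randn(4, 8)
labels = torch.tensor([0, 1, 2, 1])
loss = multi_loss(model(text, audio), labels)
loss.backward()
print(float(loss))
```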
https://aclanthology.org/2024.naacl-long.198.bib
https://aclanthology.org/2024.naacl-long.198/
@inproceedings{liu-etal-2024-confronting, title = "Confronting {LLM}s with Traditional {ML}: Rethinking the Fairness of Large Language Models in Tabular Classifications", author = "Liu, Yanchen and Gautam, Srishti and Ma, Jiaqi and Lakkaraju, Himabindu", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.198", doi = "10.18653/v1/2024.naacl-long.198", pages = "3603--3620", abstract = "Recent literature has suggested the potential of using large language models (LLMs) to make classifications for tabular tasks. However, LLMs have been shown to exhibit harmful social biases that reflect the stereotypes and inequalities present in society. To this end, as well as the widespread use of tabular data in many high-stake applications, it is important to explore the following questions: what sources of information do LLMs draw upon when making classifications for tabular tasks; whether and to what extent are LLM classifications for tabular data influenced by social biases and stereotypes; and what are the consequential implications for fairness?Through a series of experiments, we delve into these questions and show that LLMs tend to inherit social biases from their training data which significantly impact their fairness in tabular classification tasks. Furthermore, our investigations show that in the context of bias mitigation, though in-context learning and finetuning have a moderate effect, the fairness metric gap between different subgroups is still larger than that in traditional machine learning models, such as Random Forest and shallow Neural Networks. This observation emphasizes that the social biases are inherent within the LLMs themselves and inherited from their pretraining corpus, not only from the downstream task datasets. Besides, we demonstrate that label-flipping of in-context examples can significantly reduce biases, further highlighting the presence of inherent bias within LLMs.", }
Recent literature has suggested the potential of using large language models (LLMs) to make classifications for tabular tasks. However, LLMs have been shown to exhibit harmful social biases that reflect the stereotypes and inequalities present in society. Given these concerns, as well as the widespread use of tabular data in many high-stakes applications, it is important to explore the following questions: what sources of information do LLMs draw upon when making classifications for tabular tasks; whether and to what extent are LLM classifications for tabular data influenced by social biases and stereotypes; and what are the consequential implications for fairness? Through a series of experiments, we delve into these questions and show that LLMs tend to inherit social biases from their training data, which significantly impact their fairness in tabular classification tasks. Furthermore, our investigations show that in the context of bias mitigation, though in-context learning and finetuning have a moderate effect, the fairness metric gap between different subgroups is still larger than that in traditional machine learning models, such as Random Forest and shallow Neural Networks. This observation emphasizes that the social biases are inherent within the LLMs themselves and inherited from their pretraining corpus, not only from the downstream task datasets. In addition, we demonstrate that label-flipping of in-context examples can significantly reduce biases, further highlighting the presence of inherent bias within LLMs.
[ "Liu, Yanchen", "Gautam, Srishti", "Ma, Jiaqi", "Lakkaraju, Himabindu" ]
Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications
naacl-long.198
Oral
2310.14607
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
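The tabular-fairness entry above repeatedly refers to a "fairness metric gap between different subgroups". As a concrete but generic example of such a gap (not necessarily the specific metrics used in the paper), the sketch below computes the demographic parity difference between two subgroups from binary predictions; the predictions and group labels are made up.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rate between two subgroups
    (group encoded as 0/1). Larger values indicate a larger fairness gap."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy predictions for eight rows of a tabular dataset, split into two subgroups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.75 vs 0.25 -> gap of 0.5
```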
https://aclanthology.org/2024.naacl-long.199.bib
https://aclanthology.org/2024.naacl-long.199/
@inproceedings{sengupta-etal-2024-analyzing, title = "Analyzing the Use of Metaphors in News Editorials for Political Framing", author = "Sengupta, Meghdut and El Baff, Roxanne and Alshomary, Milad and Wachsmuth, Henning", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.199", doi = "10.18653/v1/2024.naacl-long.199", pages = "3621--3631", abstract = "Metaphorical language is a pivotal element inthe realm of political framing. Existing workfrom linguistics and the social sciences providescompelling evidence regarding the distinctivenessof conceptual framing for politicalideology perspectives. However, the nature andutilization of metaphors and the effect on audiencesof different political ideologies withinpolitical discourses are hardly explored. Toenable research in this direction, in this workwe create a dataset, originally based on newseditorials and labeled with their persuasive effectson liberals and conservatives and extend itwith annotations pertaining to metaphorical usageof language. To that end, first, we identifyall single metaphors and composite metaphors.Secondly, we provide annotations of the sourceand target domains for each metaphor. As aresult, our corpus consists of 300 news editorialsannotated with spans of texts containingmetaphors and the corresponding domains ofwhich these metaphors draw from. Our analysisshows that liberal readers are affected bymetaphors, whereas conservatives are resistantto them. Both ideologies are affected differentlybased on the metaphor source and targetcategory. For example, liberals are affected bymetaphors in the Darkness {\&} Light (e.g., death)source domains, where as the source domain ofNature affects conservatives more significantly.", }
Metaphorical language is a pivotal element in the realm of political framing. Existing work from linguistics and the social sciences provides compelling evidence regarding the distinctiveness of conceptual framing for political ideology perspectives. However, the nature and utilization of metaphors and the effect on audiences of different political ideologies within political discourses are hardly explored. To enable research in this direction, in this work we create a dataset, originally based on news editorials and labeled with their persuasive effects on liberals and conservatives, and extend it with annotations pertaining to metaphorical usage of language. To that end, first, we identify all single metaphors and composite metaphors. Secondly, we provide annotations of the source and target domains for each metaphor. As a result, our corpus consists of 300 news editorials annotated with spans of text containing metaphors and the corresponding domains these metaphors draw from. Our analysis shows that liberal readers are affected by metaphors, whereas conservatives are resistant to them. Both ideologies are affected differently based on the metaphor source and target category. For example, liberals are affected by metaphors in the Darkness {\&} Light (e.g., death) source domains, whereas the source domain of Nature affects conservatives more significantly.
[ "Sengupta, Meghdut", "El Baff, Roxanne", "Alshomary, Milad", "Wachsmuth, Henning" ]
Analyzing the Use of Metaphors in News Editorials for Political Framing
naacl-long.199
Oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-long.200.bib
https://aclanthology.org/2024.naacl-long.200/
@inproceedings{le-etal-2024-sharpseq, title = "{S}harp{S}eq: Empowering Continual Event Detection through Sharpness-Aware Sequential-task Learning", author = "Le, Thanh-Thien and Dao, Viet and Nguyen, Linh and Nguyen, Thi-Nhung and Ngo, Linh and Nguyen, Thien", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.200", doi = "10.18653/v1/2024.naacl-long.200", pages = "3632--3644", abstract = "Continual event detection is a cornerstone in uncovering valuable patterns in many dynamic practical applications, where novel events emerge daily. Existing state-of-the-art approaches with replay buffers still suffer from catastrophic forgetting, partially due to overly simplistic objective aggregation. This oversight disregards complex trade-offs and leads to sub-optimal gradient updates, resulting in performance deterioration across objectives. While there are successful, widely cited multi-objective optimization frameworks for multi-task learning, they lack mechanisms to address data imbalance and evaluate whether a Pareto-optimal solution can effectively mitigate catastrophic forgetting, rendering them unsuitable for direct application to continual learning. To address these challenges, we propose **SharpSeq**, a novel continual learning paradigm leveraging sharpness-aware minimization combined with a generative model to balance training data distribution. Through extensive experiments on multiple real-world datasets, we demonstrate the superior performance of SharpSeq in continual event detection, proving the importance of our approach in mitigating catastrophic forgetting in continual event detection.", }
Continual event detection is a cornerstone in uncovering valuable patterns in many dynamic practical applications, where novel events emerge daily. Existing state-of-the-art approaches with replay buffers still suffer from catastrophic forgetting, partially due to overly simplistic objective aggregation. This oversight disregards complex trade-offs and leads to sub-optimal gradient updates, resulting in performance deterioration across objectives. While there are successful, widely cited multi-objective optimization frameworks for multi-task learning, they lack mechanisms to address data imbalance and evaluate whether a Pareto-optimal solution can effectively mitigate catastrophic forgetting, rendering them unsuitable for direct application to continual learning. To address these challenges, we propose **SharpSeq**, a novel continual learning paradigm leveraging sharpness-aware minimization combined with a generative model to balance training data distribution. Through extensive experiments on multiple real-world datasets, we demonstrate the superior performance of SharpSeq in continual event detection, proving the importance of our approach in mitigating catastrophic forgetting in continual event detection.
[ "Le, Thanh-Thien", "Dao, Viet", "Nguyen, Linh", "Nguyen, Thi-Nhung", "Ngo, Linh", "Nguyen, Thien" ]
SharpSeq: Empowering Continual Event Detection through Sharpness-Aware Sequential-task Learning
naacl-long.200
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
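The SharpSeq entry above leverages sharpness-aware minimization (SAM). The sketch below implements one generic SAM update in PyTorch (perturb the weights toward the local worst case, take the gradient there, then step at the original weights); this is vanilla SAM only, not SharpSeq's full continual-learning method with generative data balancing, and the model, data, and `rho` value are illustrative.

```python
import torch

def sam_step(model, loss_fn, batch, base_opt, rho=0.05):
    """One sharpness-aware minimization step wrapped around `base_opt`.
    `loss_fn(model, batch)` must return a scalar loss."""
    # 1) gradient of the loss at the current weights w
    base_opt.zero_grad()
    loss = loss_fn(model, batch)
    loss.backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
    # 2) ascend to the nearby worst-case weights w + eps, eps = rho * g / ||g||
    eps_list = []
    with torch.no_grad():
        for p in params:
            eps = rho * p.grad / (grad_norm + 1e-12)
            p.add_(eps)
            eps_list.append((p, eps))
    # 3) gradient of the loss at the perturbed weights
    base_opt.zero_grad()
    loss_fn(model, batch).backward()
    # 4) restore w and update it with the sharpness-aware gradient
    with torch.no_grad():
        for p, eps in eps_list:
            p.sub_(eps)
    base_opt.step()
    return float(loss)

# Toy usage on a linear classifier.
model = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
loss_fn = lambda m, b: torch.nn.functional.cross_entropy(m(b[0]), b[1])
print(sam_step(model, loss_fn, (x, y), opt))
```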