Abstract
Reverse thinking plays a crucial role in human reasoning. Humans can reason not only from a problem to a solution but also in reverse, i.e., starting from the solution and reasoning towards the problem. This often enhances overall reasoning performance, as it enables consistency checks between forward and backward thinking. To enable Large Language Models (LLMs) to perform reverse thinking, we introduce Reverse-Enhanced Thinking (RevThink), a framework composed of data augmentation and learning objectives. In RevThink, we augment the dataset by collecting structured forward-backward reasoning from a teacher model, consisting of: (1) the original question, (2) forward reasoning, (3) a backward question, and (4) backward reasoning. We then employ three objectives to train a smaller student model in a multi-task learning fashion: (a) generate forward reasoning from a question, (b) generate a backward question from a question, and (c) generate backward reasoning from the backward question. Experiments across 12 datasets covering commonsense, math, and logical reasoning show an average 13.53% improvement over the student model's zero-shot performance and a 6.84% improvement over the strongest knowledge distillation baselines. Moreover, our method demonstrates sample efficiency: using only 10% of the correct forward reasoning from the training data, it outperforms a standard fine-tuning method trained on 10x more forward reasoning. RevThink also exhibits strong generalization to out-of-distribution held-out datasets.
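To make the four-part augmentation concrete, here is a minimal, hypothetical sketch of the augmented record collected from the teacher; the dataclass, prompt wording, and the teacher's `generate` API are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch (not from the paper's code) of a RevThink-style augmentation
# record. The prompts and the `teacher.generate(prompt) -> str` API are
# assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class AugmentedExample:
    question: str            # (1) original question
    forward_reasoning: str   # (2) teacher's reasoning from question to answer
    backward_question: str   # (3) question posed in the reverse direction
    backward_reasoning: str  # (4) teacher's reasoning for the backward question

def augment(question: str, teacher) -> AugmentedExample:
    """Collect structured forward-backward reasoning from a teacher model."""
    fwd = teacher.generate(
        f"Question: {question}\nReason step by step and give the answer."
    )
    bwd_q = teacher.generate(
        f"Question: {question}\nForward reasoning: {fwd}\n"
        "Write a backward question that starts from the answer and asks for "
        "an element of the original problem."
    )
    bwd_r = teacher.generate(
        f"Question: {bwd_q}\nReason step by step and give the answer."
    )
    return AugmentedExample(question, fwd, bwd_q, bwd_r)
```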
Community
Reverse thinking plays a crucial role in human reasoning. Humans can reason from a problem to a solution and also in reverse to enhance their overall reasoning. We show that LLMs can also benefit from reverse thinking 👉 +13.53% over their zero-shot performance.
In this work, we propose RevThink, a framework with data augmentation and multi-task learning objectives. Unlike standard distillation methods that only fine-tune on correct Q→A pairs, we augment the data using a teacher model to generate backward questions and backward reasoning. The student model thus learns in both the Q→A and A→Q directions, outperforming all data augmentation and distillation baselines (a rough sketch of the training objectives follows below).
📑 Paper: https://arxiv.org/abs/2411.19865
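As a rough, code-level sketch of the three multi-task objectives: one augmented example yields three (input, target) training instances for the student. Field names and instruction strings here are illustrative assumptions, not the exact prompts used in the paper.

```python
# Sketch of the three student objectives; each pair would be fine-tuned with
# a standard language-modeling loss. Field names and instruction wording are
# assumptions for illustration.
def to_training_instances(ex: dict) -> list[dict]:
    """ex holds the four augmented fields: question, forward_reasoning,
    backward_question, backward_reasoning."""
    return [
        # (a) question -> forward reasoning
        {"input": ex["question"],
         "target": ex["forward_reasoning"]},
        # (b) question -> backward question
        {"input": "Write the backward question for: " + ex["question"],
         "target": ex["backward_question"]},
        # (c) backward question -> backward reasoning
        {"input": ex["backward_question"],
         "target": ex["backward_reasoning"]},
    ]
```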
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision (2024)
- Think Beyond Size: Adaptive Prompting for More Effective Reasoning (2024)
- Language Models are Hidden Reasoners: Unlocking Latent Reasoning Capabilities via Self-Rewarding (2024)
- Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models (2024)
- What Do Learning Dynamics Reveal About Generalization in LLM Reasoning? (2024)
- Vision-Language Models Can Self-Improve Reasoning via Reflection (2024)
- Gap-Filling Prompting Enhances Code-Assisted Mathematical Reasoning (2024)
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend