LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding
Abstract
We present LayerSkip, an end-to-end solution to speed up inference of large language models (LLMs). First, during training we apply layer dropout, with low dropout rates for earlier layers and higher dropout rates for later layers, together with an early exit loss where all transformer layers share the same exit. Second, during inference, we show that this training recipe increases the accuracy of early exit at earlier layers, without adding any auxiliary layers or modules to the model. Third, we present a novel self-speculative decoding solution where we exit at early layers and verify and correct with the remaining layers of the model. Our proposed self-speculative decoding approach has a smaller memory footprint than other speculative decoding approaches and benefits from shared compute and activations between the draft and verification stages. We run experiments on different Llama model sizes with different types of training: pretraining from scratch, continual pretraining, finetuning on a specific data domain, and finetuning on a specific task. We implement our inference solution and show speedups of up to 2.16x on summarization of CNN/DM documents, 1.82x on coding, and 2.0x on the TOPv2 semantic parsing task. We open source our code and checkpoints at https://github.com/facebookresearch/LayerSkip.
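Below is a minimal sketch of the two training-time ingredients the abstract describes: layer dropout whose rate grows with depth, and an early exit loss in which every layer's hidden state is supervised through the same shared LM head. This is an illustrative toy PyTorch model (causal masking and the paper's exact dropout/loss schedules are omitted); names such as `SimpleTransformerLM` and `max_layer_dropout` are assumptions for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleTransformerLM(nn.Module):
    """Toy decoder illustrating LayerSkip-style training (sketch, not the official code)."""

    def __init__(self, vocab_size=32000, d_model=512, n_layers=8, n_heads=8,
                 max_layer_dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        self.norm = nn.LayerNorm(d_model)
        # One shared exit (LM head) used by every layer's early-exit prediction.
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)
        # Layer dropout rate increases with depth: early layers are rarely skipped,
        # later layers are skipped more often.
        self.layer_dropout = [max_layer_dropout * l / (n_layers - 1)
                              for l in range(n_layers)]

    def forward(self, tokens, targets=None):
        h = self.embed(tokens)
        early_exit_losses = []
        for l, layer in enumerate(self.layers):
            # Layer dropout: stochastically skip this layer during training,
            # letting the hidden state pass through unchanged.
            if self.training and torch.rand(()) < self.layer_dropout[l]:
                pass
            else:
                h = layer(h)  # note: causal attention mask omitted for brevity
            # Early exit loss: supervise this layer's hidden state via the shared head.
            if self.training and targets is not None:
                logits_l = self.lm_head(self.norm(h))
                early_exit_losses.append(
                    F.cross_entropy(logits_l.flatten(0, 1), targets.flatten())
                )
        logits = self.lm_head(self.norm(h))
        loss = None
        if targets is not None:
            loss = F.cross_entropy(logits.flatten(0, 1), targets.flatten())
            if early_exit_losses:
                # Uniform weighting of per-layer exit losses; the paper uses a
                # weighting/curriculum schedule, so this is a simplifying assumption.
                loss = loss + torch.stack(early_exit_losses).mean()
        return logits, loss


# Usage with dummy data (targets would normally be the inputs shifted by one token):
model = SimpleTransformerLM()
model.train()
tokens = torch.randint(0, 32000, (2, 16))
logits, loss = model(tokens, targets=tokens)
```

At inference time, the same shared head lets the model exit after any early layer to draft tokens, with the remaining layers used only to verify and correct those drafts (the self-speculative decoding step), so no separate draft model or auxiliary exit modules are needed.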
Community
Wow this is very good
Author here. Thanks for posting. I have created a thread on X to explain the paper: https://twitter.com/m_elhoushi/status/1783800052986655203
Happy to answer any questions
Plain English rewrite of the paper here; would love your feedback as an author! https://www.aimodels.fyi/papers/arxiv/layer-skip-enabling-early-exit-inference-self
How so? Because of its adaptive-computation nature?
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration (2024)
- Direct Alignment of Draft Model for Speculative Decoding with Chat-Fine-Tuned LLMs (2024)
- Lossless Acceleration of Large Language Model via Adaptive N-gram Parallel Decoding (2024)
- Accelerating Inference in Large Language Models with a Unified Layer Skipping Strategy (2024)
- Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference (2024)
Supercharging AI: How LayerSkip Enhances Language Model Speed and Efficiency
Links:
- Subscribe: https://www.youtube.com/@Arxflix
- Twitter: https://x.com/arxflix
- LMNT (Partner): https://lmnt.com/
Models citing this paper: 7
Datasets citing this paper: 0
Spaces citing this paper: 0