LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
Abstract
We present LongLoRA, an efficient fine-tuning approach that extends the context sizes of pre-trained large language models (LLMs) with limited computation cost. Typically, training LLMs with long context sizes is computationally expensive, requiring extensive training hours and GPU resources. For example, training with a context length of 8192 incurs 16x the computational cost in self-attention layers compared to a context length of 2048. In this paper, we speed up the context extension of LLMs in two aspects. On the one hand, although dense global attention is needed during inference, fine-tuning the model can be done effectively and efficiently with sparse local attention. The proposed shift short attention effectively enables context extension, leading to non-trivial computation savings with performance similar to fine-tuning with vanilla attention. In particular, it can be implemented with only two lines of code in training, while being optional in inference. On the other hand, we revisit the parameter-efficient fine-tuning regime for context expansion. Notably, we find that LoRA for context extension works well under the premise of trainable embedding and normalization layers. LongLoRA demonstrates strong empirical results on various tasks with LLaMA2 models from 7B/13B to 70B. LongLoRA extends LLaMA2 7B from 4k context to 100k, or LLaMA2 70B to 32k, on a single 8x A100 machine. LongLoRA extends models' context while retaining their original architectures, and is compatible with most existing techniques, like FlashAttention-2. In addition, to make LongLoRA practical, we collect a dataset, LongQA, for supervised fine-tuning. It contains more than 3k long-context question-answer pairs.
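As a rough illustration of the "two lines of code" mentioned above, here is a minimal PyTorch-style sketch of the attention rearrangement: tokens are attended within local groups, and half of the heads operate on a sequence shifted by half a group so information still flows across group boundaries. The function name and tensor layout below are illustrative assumptions, not the paper's released code.

```python
import torch

def shift_short_attention_reshape(qkv: torch.Tensor, group_size: int) -> torch.Tensor:
    """Rearrange packed Q/K/V so attention is computed within local groups,
    with half of the heads shifted by half a group (shift short attention).

    qkv: tensor of shape (batch, seq_len, 3, num_heads, head_dim)
    """
    B, S, three, H, D = qkv.shape
    assert three == 3 and S % group_size == 0

    qkv = qkv.clone()
    # Shift the second half of the heads by half a group along the sequence,
    # so their local groups straddle the boundaries of the un-shifted groups.
    qkv[:, :, :, H // 2:] = qkv[:, :, :, H // 2:].roll(-(group_size // 2), dims=1)

    # Fold each group into the batch dimension; standard attention applied to
    # this tensor then only mixes tokens within the same group.
    return qkv.reshape(B * (S // group_size), group_size, 3, H, D)
```

At inference time this rearrangement can simply be skipped, falling back to dense attention over the full sequence, which matches the abstract's note that the sparse pattern is optional in inference.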
Community
Here are my highlights from the paper:
Big one of course: LongLoRA efficiently fine-tunes large AI models on longer texts
Key points:
- Approximates standard attention via "shift short attention" during training
- Tuning only a subset of weights (LoRA) plus some embeddings & norms
- Fine-tuned 7B parameter model on 100k tokens with 1 machine
- 10x lower training cost than full fine-tuning for large contexts
- Close to full fine-tuning performance, e.g. perplexity only ~3% higher than full fine-tuning
The core insight is that an approximation of full attention enables efficient training while retaining standard attention for final inference. Combined with selective weight tuning, this really reduces compute needs.
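On the selective weight tuning point, here is a minimal sketch of what "LoRA plus embeddings and norms" can look like in plain PyTorch. It assumes LoRA adapters are already attached and that parameter names follow LLaMA-style conventions ("lora_", "embed", "norm") — those name substrings are my assumptions, not the paper's exact code.

```python
import torch.nn as nn

def set_longlora_trainable(model: nn.Module) -> None:
    """Freeze base weights; keep LoRA adapters, embeddings, and norms trainable.

    Assumes LoRA adapter parameters contain "lora_" in their names and the
    backbone uses LLaMA-style names such as "embed_tokens" and "input_layernorm".
    """
    for name, param in model.named_parameters():
        if "lora_" in name:
            param.requires_grad = True   # low-rank adapter weights
        elif "embed" in name or "norm" in name:
            param.requires_grad = True   # token embeddings and (RMS)norm layers
        else:
            param.requires_grad = False  # frozen pre-trained weights
```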
I think this demonstrates the potential to train more capable AI without unreasonable resources. Efficient training techniques = more powerful LLMs for the same resources.
Full summary: https://notes.aimodels.fyi/longlora-a-new-efficient-fine-tuning-of-long-context-llms/
Code?
Context window is why I almost exclusively use Claude 2 over GPT-4 now, despite GPT-4 being better at reasoning. I assume context window will be an anachronism at some point in the near future, but for now, this is great progress.
How well does an extended context window deal with the position of the expanded context? I.e., how much, if any, is "lost in the middle"? https://arxiv.org/abs/2307.03172
From Table 6 in this paper, it looks like this model may not exhibit this issue, at least for the test they ran (which is not exactly the same as in the "lost in the middle" paper). However, LongChat-13B, which is evaluated in both papers, does not show the increase in accuracy toward the end of the context window in Table 6, unlike the paper you linked, so maybe there are some differences between the tests.
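For anyone who wants to probe this on their own model, a quick way is a passkey-style test: place a target fact at different relative depths in long filler text and check per-depth retrieval accuracy. The sketch below is only an illustrative probe with a hypothetical helper, not the evaluation protocol used in either paper.

```python
import random

def build_passkey_prompt(context_len_words: int, depth: float, passkey: int) -> str:
    """Insert the passkey sentence at a relative depth (0.0 = start, 1.0 = end)
    inside filler text roughly context_len_words words long."""
    filler = "The grass is green. The sky is blue. The sun is bright. "
    n_copies = context_len_words // len(filler.split()) + 1
    words = (filler * n_copies).split()[:context_len_words]
    words.insert(int(depth * len(words)), f"The pass key is {passkey}. Remember it.")
    return " ".join(words) + "\nWhat is the pass key? The pass key is"

# Probe a few depths; feed each prompt to the long-context model and check
# whether the generated continuation contains the passkey.
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    prompt = build_passkey_prompt(8000, depth, random.randint(10000, 99999))
```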
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training (2023)
- LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning (2023)
- Efficient Streaming Language Models with Attention Sinks (2023)
- DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning (2023)
- Scaled Prompt-Tuning for Few-Shot Natural Language Generation (2023)
That took 6 weeks for the context window to become an anachronism.
Extend Your Context with LongLoRA: Next-Level Large Language Models
Links:
- Subscribe: https://www.youtube.com/@Arxflix
- Twitter: https://x.com/arxflix
- LMNT (Partner): https://lmnt.com/