LoRS: Efficient Low-Rank Adaptation for Sparse Large Language Model
Abstract
Existing low-rank adaptation (LoRA) methods face challenges on sparse large language models (LLMs) because they cannot maintain the sparsity of the pruned weights. Recent works address this by augmenting LoRA with additional masking mechanisms, but these approaches incur extra memory and computation overhead, which undermines the efficiency that makes LoRA attractive in the first place. In response to this limitation, we introduce LoRS, a method designed to achieve both memory and computation efficiency when fine-tuning sparse LLMs. To mitigate the substantial memory and computation demands of preserving sparsity, LoRS incorporates weight recomputation and computational-graph rearrangement. We further improve its effectiveness through better adapter initialization. Together, these techniques notably reduce memory and computation consumption during the fine-tuning phase while achieving performance that surpasses existing LoRA approaches.
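To make the idea of sparsity-preserving low-rank adaptation concrete, the sketch below shows one way such a layer can be written in PyTorch: the low-rank update B·A is masked with the pruned weight's nonzero pattern before being merged, so the adapted weight stays sparse. This is an illustrative reconstruction under assumed names (the class and parameters are hypothetical), not the LoRS implementation; in particular, LoRS additionally recomputes the merged weight during backpropagation rather than caching it, which this sketch does not show.

```python
import torch
import torch.nn as nn

class SparsityPreservingLoRALinear(nn.Module):
    """Minimal sketch of a LoRA adapter that keeps a pruned weight sparse.

    Hypothetical illustration: `weight` is the frozen, already-pruned weight,
    `mask` records its nonzero pattern, and the low-rank update B @ A is
    masked before being merged so the fine-tuned weight remains sparse.
    """

    def __init__(self, weight: torch.Tensor, rank: int = 8):
        super().__init__()
        out_features, in_features = weight.shape
        self.register_buffer("weight", weight)                      # frozen sparse weight
        self.register_buffer("mask", (weight != 0).to(weight.dtype))  # sparsity pattern
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Merge the masked low-rank update on the fly instead of storing
        # a separate dense merged weight.
        delta = self.mask * (self.lora_B @ self.lora_A)
        return x @ (self.weight + delta).T
```

Only `lora_A` and `lora_B` receive gradients here; the masked merge guarantees that every weight pruned to zero stays zero after fine-tuning.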
Community
This is an automated message from the Librarian Bot. The following papers, retrieved via the Semantic Scholar API, were found to be similar to this paper:
- SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation (2025)
- CLoQ: Enhancing Fine-Tuning of Quantized LLMs via Calibrated LoRA Initialization (2025)
- Adaptive Parameter-Efficient Federated Fine-Tuning on Heterogeneous Devices (2024)
- Refining Salience-Aware Sparse Fine-Tuning Strategies for Language Models (2024)
- Federated Sketching LoRA: On-Device Collaborative Fine-Tuning of Large Language Models (2025)
- Gradient Weight-normalized Low-rank Projection for Efficient LLM Training (2024)
- One Head Eight Arms: Block Matrix based Low Rank Adaptation for CLIP-based Few-Shot Learning (2025)