LLM Modules: Knowledge Transfer from a Large to a Small Model using Enhanced Cross-Attention
Abstract
In this work, we propose an LLM Modules architecture that transfers knowledge from a large pre-trained model to a smaller model through an Enhanced Cross-Attention mechanism. In the proposed scheme, the Qwen2-1.5B model is frozen, and its representations are passed through specially designed attention layers to the GPT-Neo-125M model, which is trained on limited computational resources. Experimental results on the Bespoke-Stratos-17k dataset show that after 15 epochs of training, the combined model generates responses comparable in quality to those obtained by distillation. We discuss the advantages of the modular approach, provide example input queries with a comparative analysis, and outline prospects for further extension of the method.
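As a rough illustration of the scheme described above, the PyTorch sketch below wires a frozen Qwen2-1.5B donor to a trainable GPT-Neo-125M student through a single cross-attention adapter. The CrossAttentionAdapter class, the gated-residual fusion, and the single injection point before the LM head are simplifying assumptions made here for clarity; they stand in for, and are not identical to, the paper's Enhanced Cross-Attention layers.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer


class CrossAttentionAdapter(nn.Module):
    """Lets the small (student) model attend to hidden states of the frozen donor."""

    def __init__(self, donor_dim: int, student_dim: int, num_heads: int = 8):
        super().__init__()
        self.proj = nn.Linear(donor_dim, student_dim)             # align hidden sizes
        self.attn = nn.MultiheadAttention(student_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(student_dim)
        self.gate = nn.Parameter(torch.zeros(1))                  # starts as a no-op

    def forward(self, student_hidden, donor_hidden):
        donor = self.proj(donor_hidden)
        attended, _ = self.attn(query=student_hidden, key=donor, value=donor)
        # Gated residual: the student is unchanged at initialization and
        # gradually learns how much donor information to mix in.
        return self.norm(student_hidden + torch.tanh(self.gate) * attended)


# Frozen donor and trainable student, as named in the abstract.
donor = AutoModel.from_pretrained("Qwen/Qwen2-1.5B")
student = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
for p in donor.parameters():
    p.requires_grad_(False)

adapter = CrossAttentionAdapter(donor.config.hidden_size, student.config.hidden_size)

# One illustrative forward pass (each model keeps its own tokenizer).
tok_donor = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B")
tok_student = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
prompt = "Solve step by step: what is 17 * 23?"

with torch.no_grad():
    donor_hidden = donor(**tok_donor(prompt, return_tensors="pt")).last_hidden_state

student_hidden = student.transformer(**tok_student(prompt, return_tensors="pt")).last_hidden_state
fused = adapter(student_hidden, donor_hidden)
logits = student.lm_head(fused)  # train with the usual causal-LM loss on these logits
```

Because the donor is frozen, only the adapter (and, if desired, the student) receives gradients, which is what keeps the training budget small in this kind of modular setup.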
Community
How to teach a model to reason without retraining it for less than $10
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- HTR-JAND: Handwritten Text Recognition with Joint Attention Network and Knowledge Distillation (2024)
- Efficient Knowledge Feeding to Language Models: A Novel Integrated Encoder-Decoder Architecture (2025)
- VisTabNet: Adapting Vision Transformers for Tabular Data (2024)
- Multi-aspect Knowledge Distillation with Large Language Model (2025)
- Skip Tuning: Pre-trained Vision-Language Models are Effective and Efficient Adapters Themselves (2024)
- Occam's model: Selecting simpler representations for better transferability estimation (2025)
- Contrastive Representation Distillation via Multi-Scale Feature Decoupling (2025)