---
license: llama2
---

**Paper**: [https://arxiv.org/pdf/2310.06694.pdf](https://arxiv.org/pdf/2310.06694.pdf)  
**Code**: [https://github.com/princeton-nlp/LLM-Shearing](https://github.com/princeton-nlp/LLM-Shearing)  
**Models**: [Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B), [Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B)  
**Pruned Models without Continued Pre-training**: [Sheared-LLaMA-1.3B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-Pruned), [Sheared-LLaMA-2.7B-Pruned](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-Pruned)  
**Instruction-tuned Models**: [Sheared-LLaMA-1.3B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B-ShareGPT), [Sheared-LLaMA-2.7B-ShareGPT](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT)  

**License**: This model is derived from Llama 2 and must therefore comply with the Llama 2 license.

Sheared-LLaMA-2.7B-Pruned is the model pruned from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) **without continued pre-training**.
We used roughly 0.4B tokens for the pruning experiment. This model is a useful starting point for studying
- effective data mixtures for continued pre-training,
- comparisons with other pruning techniques, and
- extensive evaluations of how pruning affects the knowledge and reasoning capabilities of LLMs.
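
If you want to experiment with this checkpoint, the minimal sketch below shows one way to load it with the Hugging Face `transformers` library, assuming the standard `AutoModelForCausalLM`/`AutoTokenizer` interfaces apply to this repository; the prompt and generation settings are illustrative only.

```python
# Minimal sketch (illustrative): load the pruned checkpoint and generate a short completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "princeton-nlp/Sheared-LLaMA-2.7B-Pruned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Note: this checkpoint has not undergone continued pre-training,
# so generations are mainly useful for analysis rather than downstream use.
inputs = tokenizer("Princeton University is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```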