---
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-generation
quantized_by: grimjim
---
EXL2 quants of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct/tree/main) by branch:
- 4_0 : [4.0 bits per weight](https://huggingface.co/grimjim/meta-llama-Llama-3.2-1B-Instruct-exl2/tree/4_0)    
- 5_0 : [5.0 bits per weight](https://huggingface.co/grimjim/meta-llama-Llama-3.2-1B-Instruct-exl2/tree/5_0)    
- 6_0 : [6.0 bits per weight](https://huggingface.co/grimjim/meta-llama-Llama-3.2-1B-Instruct-exl2/tree/6_0)    
- 8_0 : [8.0 bits per weight](https://huggingface.co/grimjim/meta-llama-Llama-3.2-1B-Instruct-exl2/tree/8_0)    

Make your own EXL2 quants with
[measurement.json](https://huggingface.co/grimjim/meta-llama-Llama-3.2-1B-Instruct-exl2/blob/main/measurement.json).
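
A sketch of how the measurement file can be reused, assuming exllamav2's `convert.py` script and a local copy of the base model (paths here are placeholders). Passing `-m measurement.json` skips the measurement pass, so only the quantization pass runs:

```shell
# Sketch: quantize the base model to 5.0 bpw, reusing the published
# measurement.json instead of re-measuring (paths are placeholders).
python convert.py \
    -i ./Llama-3.2-1B-Instruct \          # unquantized base model directory
    -o ./work \                           # temporary working directory
    -m ./measurement.json \               # downloaded from this repo
    -cf ./Llama-3.2-1B-Instruct-5.0bpw \  # output directory for the quant
    -b 5.0                                # target bits per weight
```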

Quantized with [exllamav2](https://github.com/turboderp/exllamav2) v0.2.4.
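
For inference, a minimal loading sketch using exllamav2's Python API, assuming one of the branches above has been downloaded to a local directory (the path and generation parameters are illustrative, not part of this repo):

```python
# Sketch: load an EXL2 quant with exllamav2 (assumes the 5_0 branch has
# been downloaded to ./Llama-3.2-1B-Instruct-exl2-5_0).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("./Llama-3.2-1B-Instruct-exl2-5_0")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate cache as layers load
model.load_autosplit(cache)                # split across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello, how are you?", max_new_tokens=64))
```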