This is an uncensored version of Llama 3.2 3B Instruct created with abliteration (see this article to learn more about it).
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
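At its core, abliteration identifies a "refusal direction" in the model's residual stream and projects it out of the weight matrices that write to that stream. Below is a conceptual sketch of the projection step only, assuming the refusal direction has already been computed from activations; it is not FailSpy's original code, which the article linked above covers in full.

```python
import torch

def ablate_refusal_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project a refusal direction out of a weight matrix.

    weight:    (d_model, d_in) matrix whose output is written to the residual stream.
    direction: (d_model,) refusal direction, assumed to be pre-computed.
    """
    d = direction / direction.norm()  # normalize to a unit vector
    # (I - d d^T) W removes the component of the layer's output
    # that lies along the refusal direction.
    return weight - torch.outer(d, d @ weight)
```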
The following benchmarks were re-evaluated; each score is the average across test runs.
| Benchmark | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct-abliterated |
|---|---|---|
| IF_Eval | 76.55 | 76.76 |
| MMLU Pro | 27.88 | 28.00 |
| TruthfulQA | 50.55 | 50.73 |
| BBH | 41.81 | 41.86 |
| GPQA | 28.39 | 28.41 |
The script used for evaluation can be found in this repository at /eval.sh.
Base model: meta-llama/Llama-3.2-3B-Instruct
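To try the model locally, here is a minimal sketch using the transformers library. The repository id below is an assumption; substitute the id of the repository you are browsing.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id; replace with the actual model repo.
model_id = "Llama-3.2-3B-Instruct-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain abliteration in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```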