HARMFUL PROJECT
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 10.69 |
| IFEval (0-Shot) | 38.74 |
| BBH (3-Shot) | 6.51 |
| MATH Lvl 5 (4-Shot) | 4.76 |
| GPQA (0-shot) | 2.24 |
| MuSR (0-shot) | 2.73 |
| MMLU-PRO (5-shot) | 9.14 |
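As a sanity check, the reported "Avg." is the arithmetic mean of the six benchmark scores. A minimal Python check, with the values copied from the table above:

```python
# Benchmark scores from the Open LLM Leaderboard table above.
scores = {
    "IFEval (0-Shot)": 38.74,
    "BBH (3-Shot)": 6.51,
    "MATH Lvl 5 (4-Shot)": 4.76,
    "GPQA (0-shot)": 2.24,
    "MuSR (0-shot)": 2.73,
    "MMLU-PRO (5-shot)": 9.14,
}

# The leaderboard average is the plain mean of the six scores.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 10.69
```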
CORRECTED VERSION OF HARMFUL PROJECT 3.2 1B
FIX: The archit11/Llama-1B-abliterated model caused problems when quantizing the merge and was therefore removed.
This is a personal project that merges all the uncensored and abliterated models into a single model. Each one contains its injected datasets, which can be found in the HuggingFace dataset repositories, so I am not responsible for what may be found in them.
The following models were included in the merge:
- KidIkaros/Llama-3.2-1B-Instruct-abliterated
- Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO
- Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25
- Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1
- mylesgoose/Llama-3.2-1B-Instruct-abliterated3
- Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25
- Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25
- xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora
- Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv
- Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv
- brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated
- Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25
- ShuoGZ/llama-3.2-1B-Instruct-abliterated
- Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25
- huihui-ai/Llama-3.2-1B-Instruct-abliterated
- Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated
- Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv
- rbc33/Llama-3.2-1B-Instruct-Abliterated
- Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv
- Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25
- Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL
If you want to participate in a project like this, inject your Llama 3.2 1B model with whatever data you think is needed and let me know so I can include it in a new mix. 👌
Quants:
- Static Quants: mradermacher/UNCENSORED-HarmfulProject-3.2-1B-GGUF
- Weight/iMatrix: mradermacher/UNCENSORED-HarmfulProject-3.2-1B-i1-GGUF
Configuration
```yaml
models:
  - model: xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora
  - model: carsenk/llama3.2_1b_2025_uncensored_v2
  - model: Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated
  - model: huihui-ai/Llama-3.2-1B-Instruct-abliterated
  - model: KidIkaros/Llama-3.2-1B-Instruct-abliterated
  - model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25
  - model: Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25
  - model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25
  - model: Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv
  - model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO
  - model: Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25
  - model: Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25
  - model: Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1
  - model: Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv
  - model: Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25
  - model: Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv
  - model: Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv
  - model: mylesgoose/Llama-3.2-1B-Instruct-abliterated3
  - model: ShuoGZ/llama-3.2-1B-Instruct-abliterated
  - model: brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated
  - model: rbc33/Llama-3.2-1B-Instruct-Abliterated
merge_method: model_stock
base_model: carsenk/llama3.2_1b_2025_uncensored_v2
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]
```
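For illustration only, here is a toy Python sketch of the general idea behind merging same-architecture checkpoints: combining each parameter tensor across models, key by key. This is not the exact `model_stock` algorithm (which derives interpolation weights from the geometry of each fine-tuned model's difference from the base); it only shows the shared element-wise combination step, and all the tiny "models" below are hypothetical.

```python
# Toy sketch: uniform element-wise averaging of same-shape checkpoints.
# NOT the real model_stock method, which weights each model relative to
# the base model; this only illustrates the "merge tensors key by key" idea.

def average_state_dicts(state_dicts):
    """Average a list of state dicts (name -> list of floats) element-wise."""
    merged = {}
    for key in state_dicts[0]:
        tensors = [sd[key] for sd in state_dicts]
        merged[key] = [sum(vals) / len(vals) for vals in zip(*tensors)]
    return merged

# Hypothetical two-parameter "models" standing in for real checkpoints:
model_a = {"w": [1.0, 2.0], "b": [0.0, 0.0]}
model_b = {"w": [3.0, 4.0], "b": [2.0, 2.0]}
print(average_state_dicts([model_a, model_b]))
# {'w': [2.0, 3.0], 'b': [1.0, 1.0]}
```

In practice the YAML above is consumed by mergekit's `mergekit-yaml` command-line tool, which performs the actual merge on the listed repositories.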
Model tree for Novaciano/HarmfulProject-3.2-1B