HARMFUL PROJECT


Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 10.69 |
| IFEval (0-shot)     | 38.74 |
| BBH (3-shot)        |  6.51 |
| MATH Lvl 5 (4-shot) |  4.76 |
| GPQA (0-shot)       |  2.24 |
| MuSR (0-shot)       |  2.73 |
| MMLU-PRO (5-shot)   |  9.14 |
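
The scores above come from the Open LLM Leaderboard evaluation. As a rough local reproduction sketch, assuming lm-evaluation-harness (`lm-eval`) is installed and the installed version ships the `leaderboard` task group (exact task names can differ between versions):

```python
# Sketch of a local Open LLM Leaderboard-style run with lm-evaluation-harness.
# Assumes `pip install lm-eval`; the `leaderboard` task group covers IFEval, BBH,
# MATH Lvl 5, GPQA, MuSR and MMLU-PRO in recent versions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Novaciano/HarmfulProject-3.2-1B,dtype=bfloat16",
    tasks=["leaderboard"],
    batch_size=4,
)
print(results["results"])
```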

CORRECTED VERSION OF HARMFUL PROJECT 3.2 1B

FIX: The archit11/Llama-1B-abliterated model caused problems when quantizing the merge, so it was removed.

English 🇬🇧

This is a personal project to merge all of the uncensored and abliterated Llama 3.2 1B models into a single model. Each of them carries its own injected datasets, which can be found in the HuggingFace dataset repositories, so I am not responsible for what you may find.

The models included in the merge are listed in the configuration below.

If you want to take part in this project, inject your Llama 3.2 1B model with whatever data you think is required and let me know, so I can include it in a new mix. 👌
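
As a rough illustration of what "merging" means here, the sketch below simply averages the parameters of two of the listed checkpoints. This is a deliberate simplification, not mergekit's model_stock algorithm (which weights each model by its geometry relative to the base), and the output directory name is a placeholder:

```python
# Simplified illustration of combining several fine-tuned checkpoints into one
# by plain parameter averaging (NOT the model_stock method used in the actual merge).
import torch
from transformers import AutoModelForCausalLM

# Two of the checkpoints from the configuration below, used here as an example.
repo_ids = [
    "carsenk/llama3.2_1b_2025_uncensored_v2",
    "huihui-ai/Llama-3.2-1B-Instruct-abliterated",
]

models = [AutoModelForCausalLM.from_pretrained(r, torch_dtype=torch.bfloat16) for r in repo_ids]
merged = models[0]

with torch.no_grad():
    state_dicts = [m.state_dict() for m in models]
    averaged = {
        key: torch.mean(torch.stack([sd[key].float() for sd in state_dicts]), dim=0).to(torch.bfloat16)
        for key in state_dicts[0]
    }
    merged.load_state_dict(averaged)

merged.save_pretrained("merged-1b-sketch")  # placeholder output directory
```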


Quantizations:
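
The links to the quantized builds are not reproduced here. As a minimal sketch, assuming a GGUF quantization of this model has been downloaded locally, it can be run with llama-cpp-python (the file name below is a placeholder):

```python
# Minimal sketch: running a local GGUF quantization with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="HarmfulProject-3.2-1B-Q4_K_M.gguf",  # placeholder path, use your actual file
    n_ctx=4096,  # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```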


Configuration

```yaml
# mergekit configuration: model_stock merge of the checkpoints listed below
# onto the base model carsenk/llama3.2_1b_2025_uncensored_v2, in bfloat16.
models:
- model: xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora
- model: carsenk/llama3.2_1b_2025_uncensored_v2
- model: Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated
- model: huihui-ai/Llama-3.2-1B-Instruct-abliterated
- model: KidIkaros/Llama-3.2-1B-Instruct-abliterated
- model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25
- model: Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25
- model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25
- model: Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv
- model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO
- model: Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25
- model: Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25
- model: Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1
- model: Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv
- model: Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25
- model: Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv
- model: Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv
- model: mylesgoose/Llama-3.2-1B-Instruct-abliterated3
- model: ShuoGZ/llama-3.2-1B-Instruct-abliterated
- model: brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated
- model: rbc33/Llama-3.2-1B-Instruct-Abliterated

merge_method: model_stock
base_model: carsenk/llama3.2_1b_2025_uncensored_v2
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0]
```
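
A minimal inference sketch with 🤗 Transformers, using the repo id from this card; the prompt and generation settings are illustrative:

```python
# Minimal inference sketch for the merged model using transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Novaciano/HarmfulProject-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```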