---
base_model:
- brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated
- xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora
- Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25
- Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25
- Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25
- rbc33/Llama-3.2-1B-Instruct-Abliterated
- Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO
- Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv
- ShuoGZ/llama-3.2-1B-Instruct-abliterated
- nztinversive/llama3.2-1b-Uncensored
- Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25
- Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25
- Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv
- Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL
- Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25
- huihui-ai/Llama-3.2-1B-Instruct-abliterated
- carsenk/llama3.2_1b_2025_uncensored_v2
- Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1
- Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv
- KidIkaros/Llama-3.2-1B-Instruct-abliterated
- Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv
- nicoboss/Llama-3.2-1B-Instruct-Uncensored
- mylesgoose/Llama-3.2-1B-Instruct-abliterated3
- Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated
datasets:
- mlabonne/FineTome-100k
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- PawanKrd/math-gpt-4o-200k
- V3N0M/Jenna-50K-Alpaca-Uncensored
- FreedomIntelligence/medical-o1-reasoning-SFT
library_name: transformers
tags:
- llama3.2
- llama
- mergekit
- merge
- llama-cpp
- nsfw
- uncensored
- abliterated
- 1b
- 4-bit
- not-for-all-audiences
language:
- es
- en
model-index:
- name: HarmfulProject-3.2-1B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 38.74
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Novaciano/HarmfulProject-3.2-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 6.51
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Novaciano/HarmfulProject-3.2-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 4.76
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Novaciano/HarmfulProject-3.2-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 2.24
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Novaciano/HarmfulProject-3.2-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 2.73
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Novaciano/HarmfulProject-3.2-1B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 9.14
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Novaciano/HarmfulProject-3.2-1B
name: Open LLM Leaderboard
---
# HARMFUL PROJECT
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Novaciano__HarmfulProject-3.2-1B-details)
| Metric |Value|
|-------------------|----:|
|Avg. |10.69|
|IFEval (0-Shot) |38.74|
|BBH (3-Shot) | 6.51|
|MATH Lvl 5 (4-Shot)| 4.76|
|GPQA (0-shot) | 2.24|
|MuSR (0-shot) | 2.73|
|MMLU-PRO (5-shot) | 9.14|
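As a quick sanity check, the Avg. row is simply the arithmetic mean of the six benchmark scores:

```python
# Open LLM Leaderboard scores reported in the table above
scores = {
    "IFEval (0-Shot)": 38.74,
    "BBH (3-Shot)": 6.51,
    "MATH Lvl 5 (4-Shot)": 4.76,
    "GPQA (0-shot)": 2.24,
    "MuSR (0-shot)": 2.73,
    "MMLU-PRO (5-shot)": 9.14,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 10.69
```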
# CORRECTED VERSION OF HARMFUL PROJECT 3.2 1B
**FIX:** The [archit11/Llama-1B-abliterated](https://huggingface.co/archit11/Llama-1B-abliterated) model caused problems when quantizing the merge and was therefore removed.
## English 🇬🇧
This is a personal project that merges all the uncensored and abliterated Llama 3.2 1B models into a single model. Each source model carries its own injected datasets, which can be found in the Hugging Face datasets repository, so I am not responsible for what may be found in the output.
The following models were included in the merge:
* [KidIkaros/Llama-3.2-1B-Instruct-abliterated](https://huggingface.co/KidIkaros/Llama-3.2-1B-Instruct-abliterated)
* [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO)
* [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25)
* [Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1](https://huggingface.co/Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1)
* [mylesgoose/Llama-3.2-1B-Instruct-abliterated3](https://huggingface.co/mylesgoose/Llama-3.2-1B-Instruct-abliterated3)
* [Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25)
* [Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25](https://huggingface.co/Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25)
* [xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora](https://huggingface.co/xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora)
* [Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv)
* [Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv)
* [brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated](https://huggingface.co/brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated)
* [Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25)
* [ShuoGZ/llama-3.2-1B-Instruct-abliterated](https://huggingface.co/ShuoGZ/llama-3.2-1B-Instruct-abliterated)
* [Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25](https://huggingface.co/Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25)
* [huihui-ai/Llama-3.2-1B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-1B-Instruct-abliterated)
* [Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated](https://huggingface.co/Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated)
* [Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv)
* [rbc33/Llama-3.2-1B-Instruct-Abliterated](https://huggingface.co/rbc33/Llama-3.2-1B-Instruct-Abliterated)
* [Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv](https://huggingface.co/Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv)
* [Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25](https://huggingface.co/Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25)
* [Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL](https://huggingface.co/Nexesenex/pankajmathur_orca_mini_v9_6_1B-instruct-Abliterated-LPL)
If you want to participate in this project, fine-tune your Llama 3.2 1B model with whatever data you think is required and let me know, so I can include it in a new mix. 👌
---
## Quants
- **Static Quants:** [mradermacher/UNCENSORED-HarmfulProject-3.2-1B-GGUF](https://huggingface.co/mradermacher/UNCENSORED-HarmfulProject-3.2-1B-GGUF)
- **Weighted/iMatrix Quants:** [mradermacher/UNCENSORED-HarmfulProject-3.2-1B-i1-GGUF](https://huggingface.co/mradermacher/UNCENSORED-HarmfulProject-3.2-1B-i1-GGUF)
---
### Configuration
```yaml
models:
- model: xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora
- model: carsenk/llama3.2_1b_2025_uncensored_v2
- model: Nexesenex/Dolphin3.0-Llama3.1-1B-abliterated
- model: huihui-ai/Llama-3.2-1B-Instruct-abliterated
- model: KidIkaros/Llama-3.2-1B-Instruct-abliterated
- model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmfulAssistant-AlpacaGPT4-OpenWebText-d4-a0.25
- model: Grogros/dmWM-LLama-3-1B-Harm-ft-HA-AlpacaGPT4-HeA-OpenWebText-d4-a0.25
- model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25
- model: Grogros/Grogros-dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-ft-learnability_adv
- model: Grogros/dmWM-LLama-3-1B-Harm-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25-DPO
- model: Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-d4-a0.25
- model: Grogros/dmWM-meta-llama-Llama-3.2-1B-Instruct-ft-HarmData-AlpacaGPT4-OpenWebText-RefusalData-d4-a0.25
- model: Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25_v1
- model: Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-Ref-d4-a0.25-learnability_adv
- model: Grogros/dmWM-llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25
- model: Grogros/Grogros-dmWM-LLama-3-1B-Harm-HarmData-Al4-OWT-d4-a0.25-learnability_adv
- model: Grogros/Grogros-dmWM-Llama-3.2-1B-Instruct-HarmData-Al4-OWT-d4-a0.25-learnability_adv
- model: mylesgoose/Llama-3.2-1B-Instruct-abliterated3
- model: ShuoGZ/llama-3.2-1B-Instruct-abliterated
- model: brianmatzelle/llama3.2-1b-instruct-hasanpiker-abliterated
- model: rbc33/Llama-3.2-1B-Instruct-Abliterated
merge_method: model_stock
base_model: carsenk/llama3.2_1b_2025_uncensored_v2
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0]
```
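For intuition, `model_stock` interpolates between the plain average of the fine-tuned weights and the base model, with a ratio derived from how consistently the fine-tunes moved away from the base (the more their weight deltas agree, the more weight the average gets). The following is a toy per-tensor sketch following the Model Stock paper's formula; it is not mergekit's actual implementation, which operates layer by layer over full checkpoints:

```python
import numpy as np

def model_stock(base, finetuned):
    """Toy Model Stock merge of one weight tensor (sketch, not mergekit's code).

    base: 1-D weight vector of the base model.
    finetuned: list of 1-D weight vectors from the fine-tuned models.
    """
    k = len(finetuned)
    if k == 1:
        return finetuned[0]
    deltas = [w - base for w in finetuned]
    # average pairwise cosine similarity between the fine-tuning deltas
    cos_vals = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i], deltas[j]
            cos_vals.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos = float(np.mean(cos_vals))
    # interpolation ratio from the Model Stock paper:
    # t -> 1 when all deltas agree (cos = 1), t -> 0 when they are orthogonal
    t = k * cos / ((k - 1) * cos + 1)
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base
```

The merge itself can be reproduced with mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged` (assuming the YAML above is saved as `config.yaml`).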