
[HIGHLY EXPERIMENTAL]

(Sister model: https://huggingface.co/Undi95/Unholy-v1-10L-13B)

Use at your own risk. I'm not responsible for any usage of this model; don't do anything this model tells you to do.

Uncensored.

If you get censored output, it may be because of keywords like "assistant", "Factual answer", or other "sweet words", as I call them, that trigger censoring across all the layers of the model (since they were all trained on some of them in one way or another).

12L: This is a test project. uukuguy/speechless-llama2-luban-orca-platypus-13b and jondurbin/spicyboros-13b-2.2 were merged; then I deleted the first 8 layers and added 8 layers of MLewd at the beginning, did the same from layers 16 to 20, trying to break as much censoring as possible, before merging the output with MLewd at 0.33 weight. A rough sketch of this layer surgery is shown below.
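For a concrete picture of the layer surgery described above, here is a minimal, hypothetical Python sketch using transformers. It is not the exact tooling or recipe used for this release (the author likely used a dedicated merge tool), and it assumes the starting checkpoint is already the speechless/spicyboros merge; the layer ranges follow the description above.

```python
# Hypothetical sketch of the layer-swap + weighted-merge idea described above.
# Requires enough RAM to hold two 13B fp16 models; for illustration only.
import torch
from transformers import LlamaForCausalLM

# Assumed starting point: the merge of speechless-llama2-luban-orca-platypus-13b
# and spicyboros-13b-2.2 (shown here as one of the two for brevity).
base = LlamaForCausalLM.from_pretrained(
    "uukuguy/speechless-llama2-luban-orca-platypus-13b", torch_dtype=torch.float16
)
donor = LlamaForCausalLM.from_pretrained(
    "Undi95/MLewd-L2-13B-v2-3", torch_dtype=torch.float16
)

# Replace the first 8 decoder layers, and layers 16-20, with the MLewd layers.
for i in list(range(0, 8)) + list(range(16, 20)):
    base.model.layers[i] = donor.model.layers[i]

# Blend the result with MLewd at 0.33 weight (simple linear interpolation).
merged_state = base.state_dict()
donor_state = donor.state_dict()
for key in merged_state:
    merged_state[key] = 0.67 * merged_state[key] + 0.33 * donor_state[key]
base.load_state_dict(merged_state)

base.save_pretrained("Unholy-v1-12L-13B-sketch")
```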

Description

This repo contains fp16 files of Unholy v1, an uncensored model.

Models used

  • uukuguy/speechless-llama2-luban-orca-platypus-13b
  • jondurbin/spicyboros-13b-2.2
  • Undi95/MLewd-L2-13B-v2-3

Prompt template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
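A minimal usage sketch with transformers, loading the fp16 weights from this repo and applying the Alpaca template above; the instruction and sampling settings are illustrative, not a recommendation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Undi95/Unholy-v1-12L-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build the Alpaca-style prompt shown above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short haiku about the sea.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
# Print only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```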

Example:

(screenshot of an example generation)

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric               | Value |
|----------------------|-------|
| Avg.                 | 50.65 |
| ARC (25-shot)        | 63.57 |
| HellaSwag (10-shot)  | 83.75 |
| MMLU (5-shot)        | 58.08 |
| TruthfulQA (0-shot)  | 51.09 |
| Winogrande (5-shot)  | 77.27 |
| GSM8K (5-shot)       | 11.07 |
| DROP (3-shot)        | 9.73  |