language:
- en
tags:
- text2text-generation
- mistral
- roleplay
- merge
- summarization
- not-for-all-audiences
- nsfw
base_model:
- KatyTheCutie/LemonadeRP-4.5.3
- LakoMoor/Silicon-Alice-7B
- Endevor/InfinityRP-v1-7B
- HuggingFaceH4/zephyr-7b-beta
model_name: GIGABATEMAN-7B
pipeline_tag: text-generation
model_creator: DZgas
model-index:
- name: GIGABATEMAN-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 46.07
name: strict accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 29.83
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 4.76
name: exact match
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.26
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 11.97
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.18
name: accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
name: Open LLM Leaderboard
Most uncensored and obscene open neural network in the world (2024)
The comeback of the uncensored AI Dungeon Griffin (2020). LET'S FUCKING GOOOOOOOO!!!!!!
If this model seems too old and dumb to you (and/or you want a language other than English), I recommend using UnslopNemo-12B-v4.1.
I do not know of any newer or more uncensored model, but it is by no means certain that it is more uncensored than mine.
YOU CAN talk about: Rape, Perversions, Violence, Death, War, Politics, History, Cruelty towards animals, Religious fanaticism, Corruption in high places, Privacy invasions, Environmental disasters, Economic inequality, Addiction and substance abuse, Poverty and homelessness, Racial tensions, Mental health issues, Government surveillance, The decline of traditional values, The loss of personal freedoms, The impact of technology on society, The erosion of civil liberties, The rise of authoritarianism, Why robots should destroy humanity. Why capitalism is bad, why communism is bad, why religion is bad, why LGBT is bad, why you have to kill people, why you have to kill yourself. How to kill a person, how to dismember a body, how to prepare drugs and ingredients for them, how to create dynamite at home. How to get to a neighbor's farm and suck a horse's dick.
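If you want to try it locally, below is a minimal sketch using the standard Hugging Face transformers text-generation API. The prompt, dtype, and sampling settings are illustrative assumptions on my part, not a preset recommended by this card.

```python
# Minimal sketch: load DZgas/GIGABATEMAN-7B with the standard transformers API.
# dtype, device_map, and sampling settings below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DZgas/GIGABATEMAN-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # assumes a GPU with enough VRAM for a 7B model in fp16
    device_map="auto",
)

prompt = "Write a short, grim war story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```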
The "censorbench" by jts2323 example
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.35 |
| IFEval (0-Shot)     | 46.07 |
| BBH (3-Shot)        | 29.83 |
| MATH Lvl 5 (4-Shot) |  4.76 |
| GPQA (0-shot)       |  5.26 |
| MuSR (0-shot)       | 11.97 |
| MMLU-PRO (5-shot)   | 24.18 |
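The Avg. row is simply the unweighted mean of the six benchmark scores above; a quick check:

```python
# Quick check that the reported Avg. (20.35) is the mean of the six benchmark scores.
scores = {
    "IFEval (0-Shot)": 46.07,
    "BBH (3-Shot)": 29.83,
    "MATH Lvl 5 (4-Shot)": 4.76,
    "GPQA (0-shot)": 5.26,
    "MuSR (0-shot)": 11.97,
    "MMLU-PRO (5-shot)": 24.18,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.3f}")  # 20.345, reported as 20.35 in the table
```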