merge

This is a merge of pre-trained language models created using mergekit. Well, this is a surprise: it turned out to be quite a good model. Compared to the nice R1 mix, and to the other merge that used Llama 3.1 R1 as its base, this is the best one I have merged so far.

Merge Details

Merge Method

This model was merged using the Model Stock merge method using vicgalle/Humanish-Roleplay-Llama-3.1-8B as a base.
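
For context, Model Stock (Jang et al., 2024) averages the fine-tuned checkpoints and then interpolates that average back toward the base model, with the interpolation ratio derived from the angle between the fine-tuned models' weight deltas. The snippet below is only a loose, single-tensor sketch of that idea, not mergekit's actual model_stock implementation; the function name and the simplifications (one tensor, averaged pairwise cosine similarity) are illustrative assumptions.

import torch

# Illustrative sketch only (not mergekit's code): Model Stock applied to a
# single weight tensor, given the base tensor and k >= 2 fine-tuned tensors.
def model_stock_layer(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    k = len(finetuned)
    # Deltas ("task vectors") of each fine-tuned model relative to the base.
    deltas = [w - base for w in finetuned]
    avg_delta = torch.stack(deltas).mean(dim=0)
    # Average pairwise cosine similarity between deltas, standing in for the
    # angle theta used in the paper.
    cos_sims = [
        torch.nn.functional.cosine_similarity(
            deltas[i].flatten(), deltas[j].flatten(), dim=0
        )
        for i in range(k)
        for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cos_sims).mean().clamp(min=0.0)
    # Interpolation ratio from the paper: t = k*cos / ((k - 1)*cos + 1).
    t = (k * cos_theta) / ((k - 1) * cos_theta + 1)
    # Move from the base toward the average of the fine-tuned models by t.
    return base + t * avg_delta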

Models Merged

The models included in the merge are listed in the YAML configuration below; the base model is vicgalle/Humanish-Roleplay-Llama-3.1-8B.

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
  - model: Undi95/Llama3-Unholy-8B-OAS
  - model: Undi95/Meta-Llama-3.1-8B-Claude
  - model: vicgalle/Humanish-Roleplay-Llama-3.1-8B
  - model: vicgalle/Roleplay-Hermes-3-Llama-3.1-8B
  - model: TheDrummer/Llama-3SOME-8B-v2
  - model: Skywork/Skywork-o1-Open-Llama-3.1-8B
  - model: Solshine/reflection-llama-3.1-8B-Solshine-Full
  - model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
  - model: Sao10K/Llama-3.1-8B-Stheno-v3.4
  - model: Nitral-AI/Poppy_Porpoise-0.72-L3-8B
  - model: mergekit-community/hopefully_humanish-rp-nsfw-test-v-retry
  - model: Sao10K/L3-8B-Niitama-v1
  - model: SicariusSicariiStuff/Impish_Mind_8B



merge_method: model_stock
base_model: vicgalle/Humanish-Roleplay-Llama-3.1-8B 
parameters:
  normalize: false
  int8_mask: true
dtype: float16
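
To reproduce the merge, this configuration can be saved to a file (e.g. config.yaml) and passed to mergekit's mergekit-yaml command. Below is a minimal usage sketch for loading the resulting model with the transformers library; it assumes the merged weights are published as Pedro13543/mega_blend_model (the repo id of this card) and that the Llama 3.1 chat template was carried over from the base model.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from this model card; adjust if loading a local merge output.
model_id = "Pedro13543/mega_blend_model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the merge itself was produced in float16
    device_map="auto",          # requires the accelerate package
)

# Assumes the model keeps a Llama 3.1 style chat template.
messages = [{"role": "user", "content": "Introduce yourself in one short paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))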