# CodeMix-JPID-3B-Llama3.2

This model (mergekit-community/CodeMix-JPID-3B-Llama3.2, 3.21B parameters, BF16 tensors) is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the TIES merge method, with pankajmathur/orca_mini_v9_7_3B-Instruct as the base model.
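TIES merging works in three steps: trim each fine-tuned model's task vector (its delta from the base) to the densest entries, elect a majority sign per parameter, and average only the deltas that agree with that sign. Below is a minimal NumPy sketch of that procedure for a single flattened parameter tensor; it is an illustration under simplified assumptions (1-D arrays, a single uniform `weight`), not mergekit's actual implementation.

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5, weight=0.5):
    """Sketch of TIES merging for one parameter tensor.

    base:      base model weights (1-D array for simplicity)
    finetuned: list of fine-tuned weight arrays, same shape as base
    """
    # 1. Task vectors: each model's delta from the base weights.
    deltas = [ft - base for ft in finetuned]

    # 2. Trim: keep only the top-`density` fraction of each delta by magnitude.
    trimmed = []
    for d in deltas:
        k = int(np.ceil(density * d.size))
        threshold = np.sort(np.abs(d))[-k]
        trimmed.append(np.where(np.abs(d) >= threshold, d, 0.0))

    # 3. Elect signs: per-parameter sign of the summed trimmed deltas.
    sign = np.sign(np.sum(trimmed, axis=0))

    # 4. Disjoint merge: average only the deltas that agree with the
    #    elected sign, ignoring zeroed (trimmed) entries.
    stacked = np.stack(trimmed)
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = (stacked * agree).sum(axis=0) / counts

    # 5. Scale the merged task vector and add it back onto the base.
    return base + weight * merged_delta
```

With `density: 0.5` and `weight: 0.5` as in the configuration below, half of each task vector is zeroed before sign election, and the merged delta is applied at half strength.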

### Models Merged

The following models were included in the merge:

* prithivMLmods/Llama-3.2-3B-Math-Oct
* AXCXEPT/EZO-Llama-3.2-3B-Instruct-dpoE
* xMaulana/FinMatcha-3B-Instruct
* ValiantLabs/Llama3.2-3B-Enigma
* Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: prithivMLmods/Llama-3.2-3B-Math-Oct
        layer_range: [0, 28]
      - model: AXCXEPT/EZO-Llama-3.2-3B-Instruct-dpoE
        layer_range: [0, 28]
      - model: pankajmathur/orca_mini_v9_7_3B-Instruct
        layer_range: [0, 28]
      - model: xMaulana/FinMatcha-3B-Instruct
        layer_range: [0, 28]
      - model: ValiantLabs/Llama3.2-3B-Enigma
        layer_range: [0, 28]
      - model: Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated
        layer_range: [0, 28]

merge_method: ties
base_model: pankajmathur/orca_mini_v9_7_3B-Instruct
parameters:
  density: 0.5
  weight: 0.5
```
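Assuming the YAML above is saved as `config.yml`, the merge can be reproduced with mergekit's `mergekit-yaml` command-line tool; the output directory name here is arbitrary:

```shell
pip install mergekit

# Run the merge described in config.yml and write the merged model
# (safetensors weights plus tokenizer files) to the output directory.
mergekit-yaml config.yml ./CodeMix-JPID-3B-Llama3.2
```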