# NeuralMonarchCoderPearlBeagle-T3Q-Mistral-Orca-Math-DPO-7b

This is a merge of multiple models, created using the awesome VortexMerge kit.

Let's see what we've got in this merge: a TIES merge on top of mlabonne/NeuralBeagle14-7B that folds in chihoonlee10/T3Q-Mistral-Orca-Math-DPO and eldogbbhed/NeuralMonarchCoderPearlBeagle. Each donor keeps half of its task-vector parameters (density: 0.5), with the math model contributing more heavily (weight 0.5 vs. 0.3).

## 🧩 Configuration

```yaml
models:
  - model: mlabonne/NeuralBeagle14-7B
    # no parameters necessary for base model
  - model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
    parameters:
      density: 0.5
      weight: 0.5
  - model: eldogbbhed/NeuralMonarchCoderPearlBeagle
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: mlabonne/NeuralBeagle14-7B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
The merged model comes in at 7.24B parameters, stored as FP16 safetensors.
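The configuration above matches the TIES merge schema used by mergekit, so, assuming a mergekit-compatible toolchain, the merge could be reproduced roughly as below. Paths and options are illustrative, not the ones actually used.

```python
# Sketch: re-run the merge with mergekit's Python API, assuming
# "config.yaml" contains the YAML configuration shown above.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged",      # output directory (illustrative path)
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer alongside
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```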
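For completeness, here is a minimal sketch of loading and prompting the merged model with the Hugging Face transformers library; the prompt is purely illustrative.

```python
# Minimal sketch: load the merged model with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "eldogbbhed/NeuralMonarchCoderPearlBeagle-T3Q-Mistral-Orca-Math-DPO-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge's dtype: float16
    device_map="auto",
)

# Illustrative prompt playing to the math-DPO side of the merge.
prompt = "Solve step by step: what is 17 * 24?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```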
