# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3 as the base model.
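For intuition, DARE TIES operates on "task vectors" (each model's delta from the base): DARE randomly drops a fraction of each delta and rescales the survivors, and TIES then elects a per-parameter sign and averages only the contributions that agree with it. The sketch below is a minimal NumPy illustration of that idea, not mergekit's actual implementation; per-tensor weighting, normalization, and int8 masking are handled differently in the real tool.

```python
import numpy as np

rng = np.random.default_rng(0)

def dare_ties_merge(base, finetuned_models, densities, weights):
    """Toy DARE TIES merge of flat parameter vectors (illustration only)."""
    kept_deltas = []
    for params, density, weight in zip(finetuned_models, densities, weights):
        delta = params - base                          # task vector
        mask = rng.random(delta.shape) < density       # DARE: keep each entry with prob = density
        delta = np.where(mask, delta, 0.0) / density   # rescale survivors to preserve expectation
        kept_deltas.append(weight * delta)

    stacked = np.stack(kept_deltas)
    # TIES sign election: pick the dominant sign per parameter across models.
    elected_sign = np.sign(stacked.sum(axis=0))
    # Keep only contributions that agree with the elected sign, then average them.
    agree = np.sign(stacked) == elected_sign
    agreeing = np.where(agree, stacked, 0.0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = agreeing.sum(axis=0) / counts
    return base + merged_delta

# Tiny demonstration with synthetic "models".
base = rng.normal(size=8)
models = [base + rng.normal(scale=0.1, size=8) for _ in range(4)]
merged = dare_ties_merge(base, models, densities=[0.9, 0.8, 0.8, 0.6], weights=[1.0] * 4)
print(merged)
```

In the YAML configuration further down, each model's `density` is the fraction of its delta that DARE keeps and `weight` scales that delta before the sign election and averaging.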

### Models Merged

The following models were included in the merge:

* arcee-ai/Virtuoso-Small-v2
* sthenno-com/miscii-14b-1225
* sometimesanotion/Qwenvergence-14B-v12-Prose-DS
* CultriX/Qwen2.5-14B-Hyperionv4

## Configuration

The following YAML configuration was used to produce this model:

```yaml
name:                Enhanced-TIES-Base-v1
# Defines the TIES-merged base model used in the downstream SuperMerge-LayeredTIES-v1 SLERP merge (see commentary below).
merge_method:        dare_ties
base_model:          sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3 # Solid base model
tokenizer_source:    base # Base tokenizer
dtype:               bfloat16 # Efficient dtype
out_dtype:           bfloat16 # Output in bfloat16

parameters:
  normalize:         true # Normalize weights for TIES
  int8_mask:         true  # Int8 mask for TIES
  rescale:           false # No rescaling for TIES
  density:           0.75  # Density for TIES merge

models: # Models for the TIES base merge (same models and densities as Enhanced-LayeredSlerp-v1)
  - model:           arcee-ai/Virtuoso-Small-v2      # IFEval specialist - high density
    parameters:
      weight:        1.0
      density:       0.9
  - model:           sthenno-com/miscii-14b-1225   # BBH and Reasoning - medium density
    parameters:
      weight:        1.0
      density:       0.8
  - model:           sometimesanotion/Qwenvergence-14B-v12-Prose-DS # MATH and general Qwen - medium density
    parameters:
      weight:        1.0
      density:       0.8
  - model:           CultriX/Qwen2.5-14B-Hyperionv4 # General improvement - lower density
    parameters:
      weight:        1.0
      density:       0.6


# Commentary:
# =============================================================================
# SuperMerge-LayeredTIES-v1 Commentary:
#
# This configuration combines the strengths of both Enhanced-LayeredSlerp-v1 and SuperMerge-Enhanced-v1.
# It leverages the robust foundation of a TIES-merged base model (Enhanced-TIES-Base-v1) and applies
# the layer-wise module approach and fine-grained weight control from SuperMerge-Enhanced-v1 in a SLERP merge.
#
# Key Features:
#   - TIES-Merged Base Foundation:  Uses 'Enhanced-TIES-Base-v1' as the base model for the SLERP merge.
#     This TIES base provides a selectively merged and potentially more efficient starting point, incorporating
#     strengths from its component models (Virtuoso-Small-v2, miscii-14b-1225, Qwenvergence-14B-v12-Prose-DS, and Qwen2.5-14B-Hyperionv4) with density control.
#
#   - Layer-wise Module Integration in SLERP:  Maintains the module-based slice structure from SuperMerge-Enhanced-v1.
#     The SLERP merge now combines the TIES-merged base with specialized modules for Reasoning, IFEval, and MATH/Knowledge
#     at different layer ranges, using explicit weights for fine-grained control.
#
#   - Benchmark-Driven Iterative Weight Tuning:  The configuration is designed to be optimized through a
#     benchmark-driven iterative weight tuning process (as described in the refined SuperMerge-Enhanced-v1 approach).
#     The initial weights provided are starting points and need to be systematically tuned based on benchmark results.
#
# Tuning Process (Same as Refined SuperMerge-Enhanced-v1):
#   1. Initial Benchmarking: Run a full benchmark suite.
#   2. Performance Analysis: Examine per-benchmark scores and compare to source models.
#   3. Targeted Weight Adjustments: Adjust layer weights based on performance analysis (e.g., increase IFEval module weight
#      in early layers if IFEval is weak).
#   4. Iterate: Repeat steps 1-3. Make small, incremental adjustments in each iteration.
#
# Rationale:
#   - By using a TIES-merged base, we aim to create a more robust and potentially efficient foundation for the SLERP merge.
#   - The layer-wise module approach and fine-grained weights in SLERP still allow for precise control over the blending
#     of specialized capabilities at different network depths, building upon the solid TIES base.
#   - The emphasis on a benchmark-driven iterative weight tuning process remains crucial for achieving optimal performance.
#
# Next Steps:
#   - Implement this configuration using MergeKit.
#   - Run initial benchmarks to establish a baseline.
#   - Begin the iterative benchmark-driven weight tuning process to optimize performance.
# =============================================================================
```
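To reproduce the merge (the first of the "Next Steps" above), the configuration can be run through mergekit. The snippet below sketches one way to do this via mergekit's Python interface rather than the `mergekit-yaml` CLI; the import paths and `MergeOptions` fields follow mergekit's documented example and may differ between versions, and `config.yaml` is assumed to contain the YAML above.

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the Enhanced-TIES-Base-v1 configuration shown above.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the DARE TIES merge and write the result to a local directory.
run_merge(
    merge_config,
    "./Enhanced-TIES-Base-v1",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is present
        copy_tokenizer=True,             # copy a tokenizer into the output directory
        lazy_unpickle=True,              # lower peak memory while loading shards
    ),
)
```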
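The iterative, benchmark-driven tuning loop described in the commentary can also be scripted. The sketch below is illustrative only: `run_benchmarks` is a hypothetical placeholder for whatever evaluation harness is used, the specialist-to-benchmark mapping is an assumption for demonstration, and the update rule simply nudges the merge weight of the specialist tied to the weakest benchmark before re-merging.

```python
import copy
import yaml

# Hypothetical mapping from each specialist model to the benchmark it is meant to lift.
BENCHMARK_FOR_MODEL = {
    "arcee-ai/Virtuoso-Small-v2": "IFEval",
    "sthenno-com/miscii-14b-1225": "BBH",
    "sometimesanotion/Qwenvergence-14B-v12-Prose-DS": "MATH",
    "CultriX/Qwen2.5-14B-Hyperionv4": "MMLU-PRO",
}

def run_benchmarks(model_path: str) -> dict:
    """Placeholder: call your evaluation harness here and return {benchmark: score}."""
    return {"IFEval": 0.0, "BBH": 0.0, "MATH": 0.0, "MMLU-PRO": 0.0}  # dummy values

def adjust_weights(config: dict, scores: dict, step: float = 0.1) -> dict:
    """Bump the merge weight of the specialist behind the weakest benchmark (toy heuristic)."""
    new_config = copy.deepcopy(config)
    weakest = min(scores, key=scores.get)
    for entry in new_config["models"]:
        if BENCHMARK_FOR_MODEL.get(entry["model"]) == weakest:
            entry["parameters"]["weight"] += step
    return new_config

with open("config.yaml", "r", encoding="utf-8") as fp:  # the YAML above
    config = yaml.safe_load(fp)

scores = run_benchmarks("./Enhanced-TIES-Base-v1")
next_config = adjust_weights(config, scores)

with open("config-iter2.yaml", "w", encoding="utf-8") as fp:
    yaml.safe_dump(next_config, fp, sort_keys=False)
# Re-run the merge with config-iter2.yaml, re-benchmark, and repeat with small adjustments.
```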