---
base_model:
- arcee-ai/Virtuoso-Small-v2
- sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3
- CultriX/Qwen2.5-14B-Hyperionv4
- sometimesanotion/Qwenvergence-14B-v12-Prose-DS
- sthenno-com/miscii-14b-1225
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3](https://huggingface.co/sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3) as the base.
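For intuition, DARE TIES operates on "task vectors" (each donor model's parameters minus the base model's): DARE randomly drops a fraction (1 - density) of each task vector's entries (optionally rescaling the survivors by 1/density), and TIES then elects a per-parameter sign and keeps only the contributions that agree with it before adding the result back to the base. The following is a minimal per-tensor sketch in PyTorch, illustrative only and not mergekit's implementation; the weighting and normalization details here are simplified assumptions.

```python
import torch

def dare_ties(base, deltas, densities, weights, rescale=False):
    """Toy per-tensor sketch of DARE-TIES (not mergekit's implementation).

    base      -- a parameter tensor from the base model
    deltas    -- task vectors (fine-tuned minus base), one per donor model
    densities -- per-model keep probability (the `density` field in the config below)
    weights   -- per-model merge weight (the `weight` field in the config below)
    rescale   -- whether to rescale kept entries by 1/density (`rescale` in the config)
    """
    contributions = []
    for delta, p, w in zip(deltas, densities, weights):
        # DARE: randomly drop a (1 - p) fraction of the task vector's entries.
        mask = torch.bernoulli(torch.full_like(delta, p))
        kept = mask * delta
        if rescale:
            kept = kept / p  # classic DARE rescaling; disabled in the config below
        contributions.append(w * kept)

    stacked = torch.stack(contributions)
    # TIES: elect a per-parameter sign from the summed contributions ...
    elected = torch.sign(stacked.sum(dim=0))
    # ... keep only contributions that agree with it, and average the survivors.
    agree = torch.sign(stacked) == elected
    merged_delta = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged_delta
```

In the configuration below, every donor model has weight 1.0 and the per-model density ranges from 0.6 to 0.9, so density is the main lever controlling how much each specialist contributes.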
### Models Merged

The following models were included in the merge:
* [arcee-ai/Virtuoso-Small-v2](https://huggingface.co/arcee-ai/Virtuoso-Small-v2)
* [CultriX/Qwen2.5-14B-Hyperionv4](https://huggingface.co/CultriX/Qwen2.5-14B-Hyperionv4)
* [sometimesanotion/Qwenvergence-14B-v12-Prose-DS](https://huggingface.co/sometimesanotion/Qwenvergence-14B-v12-Prose-DS)
* [sthenno-com/miscii-14b-1225](https://huggingface.co/sthenno-com/miscii-14b-1225)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
name: Enhanced-TIES-Base-v1  # The TIES-merged base model used in the layered SLERP merge (see commentary below)
merge_method: dare_ties
base_model: sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3  # Solid base model
tokenizer_source: base       # Base tokenizer
dtype: bfloat16              # Efficient dtype
out_dtype: bfloat16          # Output in bfloat16
parameters:
  normalize: true            # Normalize weights for TIES
  int8_mask: true            # Int8 mask for TIES
  rescale: false             # No rescaling for TIES
  density: 0.75              # Default density for the TIES merge
models:                      # Models for the TIES base merge (same models and densities as Enhanced-LayeredSlerp-v1)
  - model: arcee-ai/Virtuoso-Small-v2                      # IFEval specialist - high density
    parameters:
      weight: 1.0
      density: 0.9
  - model: sthenno-com/miscii-14b-1225                     # BBH and reasoning - medium density
    parameters:
      weight: 1.0
      density: 0.8
  - model: sometimesanotion/Qwenvergence-14B-v12-Prose-DS  # MATH and general Qwen - medium density
    parameters:
      weight: 1.0
      density: 0.8
  - model: CultriX/Qwen2.5-14B-Hyperionv4                  # General improvement - lower density
    parameters:
      weight: 1.0
      density: 0.6

# =============================================================================
# SuperMerge-LayeredTIES-v1 Commentary
#
# This configuration combines the strengths of Enhanced-LayeredSlerp-v1 and SuperMerge-Enhanced-v1.
# It leverages the robust foundation of a TIES-merged base model (Enhanced-TIES-Base-v1) and applies
# the layer-wise module approach and fine-grained weight control of SuperMerge-Enhanced-v1 in a SLERP merge.
#
# Key Features:
# - TIES-Merged Base Foundation: Uses 'Enhanced-TIES-Base-v1' as the base model for the SLERP merge.
#   This TIES base provides a selectively merged and potentially more efficient starting point, incorporating
#   strengths from the constituent models (Virtuoso, miscii, Qwenvergence Prose-DS, Hyperion) with density control.
#
# - Layer-wise Module Integration in SLERP: Maintains the module-based slice structure from SuperMerge-Enhanced-v1.
#   The SLERP merge combines the TIES-merged base with specialized modules for reasoning, IFEval, and MATH/knowledge
#   at different layer ranges, using explicit weights for fine-grained control.
#
# - Benchmark-Driven Iterative Weight Tuning: The configuration is designed to be optimized through a
#   benchmark-driven, iterative weight-tuning process (as described in the refined SuperMerge-Enhanced-v1 approach).
#   The initial weights are starting points and need to be tuned systematically against benchmark results.
#
# Tuning Process (same as the refined SuperMerge-Enhanced-v1):
# 1. Initial benchmarking: Run the full benchmark suite.
# 2. Performance analysis: Examine per-benchmark scores and compare them to the source models.
# 3. Targeted weight adjustments: Adjust layer weights based on the analysis (e.g., increase the IFEval module
#    weight in early layers if IFEval is weak).
# 4. Iterate: Repeat steps 1-3, making small, incremental adjustments in each iteration.
#    (A rough code sketch of this loop follows the configuration block.)
#
# Rationale:
# - Using a TIES-merged base aims to create a more robust and potentially more efficient foundation for the SLERP merge.
# - The layer-wise module approach and fine-grained weights in SLERP still allow precise control over how
#   specialized capabilities are blended at different network depths, building on the solid TIES base.
# - The benchmark-driven, iterative weight-tuning process remains crucial for achieving optimal performance.
#
# Next Steps:
# - Implement this configuration with mergekit.
# - Run initial benchmarks to establish a baseline.
# - Begin the iterative, benchmark-driven weight tuning to optimize performance.
# =============================================================================
```
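The commentary above describes a manual, benchmark-driven tuning loop (merge, benchmark, analyze, adjust, repeat). As a rough sketch of that workflow under some assumptions: `run_benchmarks`, the score threshold, the adjustment heuristic, and the file names are hypothetical placeholders, while `mergekit-yaml <config> <output-dir>` is mergekit's actual command-line entry point.

```python
import subprocess
from pathlib import Path

import yaml

def run_benchmarks(model_dir: str) -> dict:
    """Hypothetical stand-in for an evaluation harness (e.g. lm-evaluation-harness).

    Expected to return per-benchmark scores such as {"ifeval": 0.72, "bbh": 0.65}.
    """
    raise NotImplementedError

# Start from the configuration shown above (the file name is a placeholder).
config = yaml.safe_load(Path("enhanced-ties-base-v1.yaml").read_text())

for step in range(3):  # keep adjustments small and incremental per iteration
    cfg_path = Path(f"config-step-{step}.yaml")
    cfg_path.write_text(yaml.safe_dump(config))
    out_dir = f"./merged-step-{step}"

    # Produce the merge; mergekit-yaml takes a config path and an output directory.
    subprocess.run(["mergekit-yaml", str(cfg_path), out_dir], check=True)

    # Steps 1-2: benchmark the merged model and analyze per-task scores.
    scores = run_benchmarks(out_dir)

    # Step 3 (hypothetical heuristic): if IFEval lags, lean harder on the
    # IFEval specialist by raising its density.
    if scores.get("ifeval", 1.0) < 0.80:
        for entry in config["models"]:
            if "Virtuoso" in entry["model"]:
                entry["parameters"]["density"] = min(
                    1.0, entry["parameters"]["density"] + 0.05)
    # Step 4: iterate with the adjusted configuration.
```

In practice the analysis and adjustments are made by hand, as the commentary suggests; the loop above only makes the four steps explicit.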