
# Gemma-TinyLLama-Passthrough

Gemma-TinyLLama-Passthrough is a merge of the following models using mergekit:

- unsloth/gemma-2b-bnb-4bit
- TinyLlama/TinyLlama-1.1B-Chat-v1.0

## 🧩 Configuration

```yaml
models:
  - model: unsloth/gemma-7b-bnb-4bit
    layer_range: [0, 32]
    # no parameters necessary for base model
  - model: mistralai/Mistral-7B-v0.1
    layer_range: [24, 32]
merge_method: passthrough
# base_model: unsloth/gemma-7b-bnb-4bit
parameters:
  normalize: true
  int8_mask: true
dtype: float16
---
slices:
  - sources:
      - model: unsloth/gemma-2b-bnb-4bit
        layer_range: [0, 16]
  - sources:
      - model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
        layer_range: [0, 22]
merge_method: passthrough
dtype: bfloat16
---
models:
  - model: unsloth/gemma-2b-bnb-4bit
    parameters:
      density: 0.53
      weight: 0.45
  - model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    parameters:
      weight: 0.5
merge_method: ties
base_model: unsloth/gemma-2b-bnb-4bit
parameters:
  int8_mask: true
dtype: bfloat16
```
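For intuition, here is a toy, pure-Python sketch of what the two merge methods in the configuration do: `passthrough` stacks layer slices end to end, while `ties` trims each model's delta from the base, elects a per-entry sign, and averages the agreeing deltas. The `trim` and `ties_merge` helpers below are illustrative simplifications, not mergekit's actual implementation.

```python
# Toy sketch of passthrough and TIES merging; illustrative only,
# not mergekit's actual code.

# Passthrough: layer slices are stacked in order, so the merged model's
# depth is simply the sum of the slice lengths.
slices = [
    ("unsloth/gemma-2b-bnb-4bit", (0, 16)),           # 16 decoder layers
    ("TinyLlama/TinyLlama-1.1B-Chat-v1.0", (0, 22)),  # 22 decoder layers
]
total_layers = sum(end - start for _, (start, end) in slices)
print(total_layers)  # 38

# TIES (simplified, on flat parameter lists): trim each model's delta from
# the base to its largest-magnitude entries (`density`), elect a sign per
# entry by weighted majority, then average only the deltas that agree.
def trim(delta, density):
    """Zero all but the largest-magnitude `density` fraction of entries."""
    k = max(1, int(len(delta) * density))
    keep = set(sorted(range(len(delta)),
                      key=lambda i: abs(delta[i]), reverse=True)[:k])
    return [d if i in keep else 0.0 for i, d in enumerate(delta)]

def ties_merge(base, models, density=1.0):
    """models: list of (params, weight) pairs; returns merged parameters."""
    trimmed = [(trim([p - b for p, b in zip(params, base)], density), w)
               for params, w in models]
    merged = []
    for i, b in enumerate(base):
        vals = [(d[i], w) for d, w in trimmed]
        sign = 1.0 if sum(w * v for v, w in vals) >= 0 else -1.0
        agree = [(v, w) for v, w in vals if v * sign > 0]
        if agree:
            merged.append(b + sum(w * v for v, w in agree)
                            / sum(w for _, w in agree))
        else:
            merged.append(b)  # every delta trimmed or conflicting: keep base
    return merged

# With a single model and density 1.0, TIES reduces to copying that model.
print(ties_merge([1.0, 0.0], [([2.0, 1.0], 0.5)]))  # [2.0, 1.0]
```

Under these assumptions, the gemma-2b `[0, 16]` and TinyLlama `[0, 22]` slices would stack to a 38-layer model.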

Downloads last month: 7

- Format: Safetensors
- Model size: 2.44B params
- Tensor type: BF16

Inference API (serverless) is not available: repository is disabled.