First:

layer_slices:
  - model: Undi95/MLewd-L2-Chat-13B
    start: 0
    end: 16
  - model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
    start: 8
    end: 20
  - model: Undi95/MLewd-L2-Chat-13B
    start: 17
    end: 32
  - model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
    start: 21
    end: 40

Inverted:

layer_slices:
  - model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
    start: 0
    end: 16
  - model: Undi95/MLewd-L2-Chat-13B
    start: 8
    end: 20
  - model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
    start: 17
    end: 32
  - model: Undi95/MLewd-L2-Chat-13B
    start: 21
    end: 40

Precise:

layer_slices:
  - model: Undi95/MLewd-L2-Chat-13B
    start: 0
    end: 8
  - model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
    start: 4
    end: 12
  - model: Undi95/MLewd-L2-Chat-13B
    start: 9
    end: 16
  - model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
    start: 13
    end: 22
  - model: Undi95/MLewd-L2-Chat-13B
    start: 17
    end: 24
  - model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
    start: 23
    end: 32
  - model: Undi95/MLewd-L2-Chat-13B
    start: 25
    end: 32
  - model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
    start: 33
    end: 40

PreciseInverted:

layer_slices:
  - model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
    start: 0
    end: 8
  - model: Undi95/MLewd-L2-Chat-13B
    start: 4
    end: 12
  - model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
    start: 9
    end: 16
  - model: Undi95/MLewd-L2-Chat-13B
    start: 13
    end: 22
  - model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
    start: 17
    end: 24
  - model: Undi95/MLewd-L2-Chat-13B
    start: 23
    end: 32
  - model: Undi95/MLewd-ReMM-L2-Chat-20B-Part1
    start: 25
    end: 32
  - model: Undi95/MLewd-L2-Chat-13B
    start: 33
    end: 40

Part1 = ReMM v2.1 merged with MLewd at a low weight to keep consistency. I call this "dilution": the result shows consistency and coherence without repetition/looping, aside from the small amount of duplicated data.
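
For illustration, a low-weight merge of this kind can be expressed as a weighted average of the two checkpoints' tensors. The sketch below is a minimal example; the 0.9/0.1 split and the `dilute` helper are hypothetical, not the exact recipe used for Part1.

```python
# Minimal sketch of a low-weight "dilution" merge as a weighted average
# of two state dicts. The 0.9/0.1 ratio is a hypothetical example, not
# the actual recipe used for Part1.
import torch

def dilute(base_sd, donor_sd, donor_weight=0.1):
    """Blend donor tensors into the base at a low weight."""
    merged = {}
    for name, tensor in base_sd.items():
        if name in donor_sd:
            merged[name] = (1 - donor_weight) * tensor + donor_weight * donor_sd[name]
        else:
            merged[name] = tensor.clone()
    return merged

# Tiny demo with dummy tensors:
base = {"w": torch.ones(2, 2)}
donor = {"w": torch.zeros(2, 2)}
print(dilute(base, donor)["w"])  # tensor filled with 0.9
```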

The goal is to find the best way to interlace the layers, hitting a sweet spot between 13B and 30B+.

Normal/Inverted interlace in chunks of 16 layers; Precise/PreciseInverted interlace in chunks of 8 layers.

All the resulting models have 64(+1) layers. They still need testing.
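
To make the slice notation concrete, here is a rough sketch of how a layer_slices config expands into the stacked layer order. It assumes `end` is exclusive; the actual merge tool may treat it as inclusive, which changes the final layer count.

```python
# Rough sketch: expand the "First" layer_slices config into the stacked
# order of (source model, source layer index) pairs.
# Assumption: `end` is exclusive; the real merge tool may differ.
slices = [
    ("Undi95/MLewd-L2-Chat-13B", 0, 16),
    ("Undi95/MLewd-ReMM-L2-Chat-20B-Part1", 8, 20),
    ("Undi95/MLewd-L2-Chat-13B", 17, 32),
    ("Undi95/MLewd-ReMM-L2-Chat-20B-Part1", 21, 40),
]

stack = [(model, layer)
         for model, start, end in slices
         for layer in range(start, end)]

# Overlapping ranges (e.g. layers 8-15 appear from both models) are
# where the interlacing comes from.
print(stack[:2], "...", stack[-1])
```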

Prompt template: Alpaca

Below is an instruction that describes a task. Write a response that completes the request.

### Instruction:
{prompt}

### Response:
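
A minimal inference sketch using this template with the transformers library; the example instruction and generation settings are illustrative assumptions, not recommended values.

```python
# Minimal inference sketch with the Alpaca template above.
# The instruction text and generation settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Undi95/MLewd-ReMM-L2-Chat-20B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that completes the request.\n\n"
    "### Instruction:\n"
    "Write a short scene introduction for a fantasy tavern.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```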

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 53.33 |
| ARC (25-shot) | 62.46 |
| HellaSwag (10-shot) | 85.62 |
| MMLU (5-shot) | 59.13 |
| TruthfulQA (0-shot) | 55.63 |
| Winogrande (5-shot) | 77.19 |
| GSM8K (5-shot) | 10.92 |
| DROP (3-shot) | 22.33 |