---
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
license: llama3
---
# Llama-3-13B-Instruct
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
The goal was to create a mid-sized Llama 3 model, a size class Meta has released in previous generations, though I would consider this a base model intended for further finetuning.
Surprisingly, it is usable for chat and storywriting with the Llama 3 Instruct template, though it occasionally shows grammatical quirks, much like L3-120B.
Logical ability (programming, math, science, etc.) is degraded by the merge process.
Use **<u>no repetition penalty, or keep it below 1.05</u>**, or the model may go a bit haywire; otherwise it is well suited to writing. I have not tested it against L3 8B in that regard.
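For reference, here is a minimal sketch of chat usage with the `transformers` library, keeping `repetition_penalty` at or below 1.05 as noted above. The model ID is assumed from this card's title, and the generation settings other than `repetition_penalty` are illustrative rather than tested values.

```python
# Minimal chat sketch, assuming this card's repo id; sampling settings
# other than repetition_penalty are illustrative, not recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elinas/Llama-3-13B-Instruct"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a short scene set in a lighthouse."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.8,          # assumed; not specified on this card
    repetition_penalty=1.05,  # per the note above: none, or <= 1.05
    # Llama 3 Instruct ends assistant turns with <|eot_id|>
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")],
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```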
## Finetuned Version
A finetuned version of this model can be found at [elinas/Llama-3-13B-Instruct-ft](https://huggingface.co/elinas/Llama-3-13B-Instruct-ft) which seems to improve performance.
## Merge Details
### Merge Method
This model was merged using the passthrough merge method, which stacks layer slices from the source model directly into the merged model without interpolating weights.
### Models Merged
The following models were included in the merge:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 10]
    model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [5, 15]
    model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [10, 20]
    model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [15, 25]
    model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [20, 25]
    model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [22, 32]
    model: meta-llama/Meta-Llama-3-8B-Instruct
```
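To reproduce the merge, the config above can be saved as e.g. `config.yaml` and passed to mergekit's `mergekit-yaml` entry point (`mergekit-yaml config.yaml ./output-model-directory`). As a rough sanity check on the resulting size, the passthrough slices stack to 55 decoder layers, which lands the merge at about 13B parameters. A small sketch of that arithmetic, using assumed per-layer and embedding parameter counts for Llama 3 8B:

```python
# The slice list is copied from the YAML config above; the parameter
# figures are rough approximations (assumed, not stated on this card).
slices = [(0, 10), (5, 15), (10, 20), (15, 25), (20, 25), (22, 32)]

merged_layers = sum(end - start for start, end in slices)
print(merged_layers)  # 55 decoder layers, vs. 32 in Llama 3 8B

# Llama 3 8B has roughly 1.05B embedding parameters (untied input/output
# embeddings, 128256 x 4096 each) and ~218M parameters per decoder layer.
approx_params = 1.05e9 + merged_layers * 218e6
print(f"~{approx_params / 1e9:.1f}B parameters")  # ~13.0B
```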
## Model Evaluation
TBD (submitted; results pending).