
llama3-15b-v02

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the linear merge method.
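For illustration, here is a minimal conceptual sketch of what a linear merge computes (this is an outline of the idea only, not mergekit's actual implementation): each merged tensor is a weighted average of the corresponding tensors from the source models, so a single model at weight 1.0 passes through unchanged.

```python
# Conceptual sketch of a linear merge; not mergekit's actual code.
from typing import Dict, Sequence
import torch

def linear_merge(
    state_dicts: Sequence[Dict[str, torch.Tensor]],
    weights: Sequence[float],
) -> Dict[str, torch.Tensor]:
    """Weighted average of corresponding tensors across models."""
    total = sum(weights)
    merged: Dict[str, torch.Tensor] = {}
    for name in state_dicts[0]:
        merged[name] = sum(
            w * sd[name].float() for sd, w in zip(state_dicts, weights)
        ) / total
    return merged

# With weights [1.0, 0.0], the result equals the first model's tensors,
# which is why the configuration below can use linear as a passthrough.
```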

Models Merged

The following models were included in the merge:

  • D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
  • D:/text-generation-webui/models/taide_Llama3-TAIDE-LX-8B-Chat-Alpha1

Configuration

The following YAML configuration was used to produce this model:

dtype: bfloat16
merge_method: linear # linear lets us list multiple models per slice, even when some are given zero weight
parameters:
  weight: 1.0 # weight everything at 1 unless specified otherwise; a linear merge with a single model at weight 1 is effectively a passthrough
slices:
- sources:
  - layer_range: [0, 1]
    model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
  - layer_range: [0, 1]
    model: D:/text-generation-webui/models/taide_Llama3-TAIDE-LX-8B-Chat-Alpha1
    parameters:
      weight: 0
- sources:
  - layer_range: [1, 24]
    model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
  - layer_range: [1, 24]
    model: D:/text-generation-webui/models/taide_Llama3-TAIDE-LX-8B-Chat-Alpha1
- sources:
  - layer_range: [24, 32]
    model: D:/text-generation-webui/models/taide_Llama3-TAIDE-LX-8B-Chat-Alpha1
    parameters:
      weight: 0
  - layer_range: [24, 32]
    model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
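
As a hedged sketch, a configuration like the one above can be executed with mergekit's Python entry point and the output loaded with transformers. It assumes mergekit and transformers are installed; the option names follow mergekit's README and may differ across versions, and the file name merge-config.yaml and output directory ./llama3-15b-v02 are illustrative, not part of this card.

```python
# Sketch: run the YAML above with mergekit, then load the merged checkpoint.
# "merge-config.yaml" and "./llama3-15b-v02" are illustrative names.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge-config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./llama3-15b-v02",
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)

# The merged model loads like any other Llama 3 checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./llama3-15b-v02")
model = AutoModelForCausalLM.from_pretrained(
    "./llama3-15b-v02", torch_dtype=torch.bfloat16
)
```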
