---
base_model: []
tags:
  - mergekit
  - merge
---

# BigCodeLLama LFG 🚀

An experimental CodeLlama franken-merge, built to see how it benchmarks.

## Models Merged

The following models were included in the merge:

- ../CodeLlama-70b-hf
- ../CodeLlama-70b-Instruct-hf
- ../CodeLlama-70b-Python-hf

## Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 69]
    model:
      model:
        path: ../CodeLlama-70b-hf
- sources:
  - layer_range: [66, 76]
    model:
      model:
        path: ../CodeLlama-70b-Instruct-hf
- sources:
  - layer_range: [42, 66]
    model:
      model:
        path: ../CodeLlama-70b-hf
- sources:
  - layer_range: [13, 37]
    model:
      model:
        path: ../CodeLlama-70b-Python-hf
- sources:
  - layer_range: [10, 80]
    model:
      model:
        path: ../CodeLlama-70b-Instruct-hf
```
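
To reproduce the merge, a config like the one above can be handed to mergekit, either through the `mergekit-yaml` CLI or its Python API. The snippet below is a minimal sketch, assuming mergekit is installed, the three CodeLlama-70b checkpoints sit at the relative paths used in the config, and the YAML has been saved as `config.yml`; exact option names can differ between mergekit versions.

```python
# Sketch: run the passthrough merge with mergekit's Python API.
# Assumes `pip install mergekit` and that config.yml holds the YAML above.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./BigCodeLlama-169b",   # output directory for the merged weights
    options=MergeOptions(
        cuda=False,           # set True if a GPU with enough memory is available
        copy_tokenizer=True,  # carry the CodeLlama tokenizer into the output
        lazy_unpickle=True,   # lower peak RAM while reading the source shards
    ),
)
```

The equivalent command-line call would be along the lines of `mergekit-yaml config.yml ./BigCodeLlama-169b`.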

Stay tuned for GGUF quants.
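
Until GGUF quants are posted, the merged checkpoint loads like any other Llama-architecture model with transformers. A rough sketch, assuming the repo id `nisten/BigCodeLlama-169b` and hardware able to hold a ~169B-parameter bf16 model (multi-GPU or heavy offloading in practice):

```python
# Sketch: load the merged model and generate a code completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nisten/BigCodeLlama-169b"  # assumed repo id for this merge

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",           # shard across available GPUs / offload to CPU
)

prompt = "Write a Python function that checks whether a number is prime.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```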