GoldenLlama-3.1-8B

GoldenLlama-3.1-8B is a passthrough merge of the following models, created with mergekit:

* Orenguteng/Llama-3.1-8B-Lexi-Uncensored (layer range [0, 25])
* NousResearch/Hermes-3-Llama-3.1-8B (layer range [25, 32])

🧩 Configuration

```yaml
slices:
  - sources:
    - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored
      layer_range: [0, 25]
  - sources:
    - model: NousResearch/Hermes-3-Llama-3.1-8B
      layer_range: [25, 32]
merge_method: passthrough
dtype: bfloat16
```
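
The merge can be reproduced by saving the configuration above to a file and running it through the `mergekit-yaml` command-line tool or mergekit's Python API. The sketch below uses the Python API; the filenames are placeholders, and the `MergeOptions` flags shown are the commonly documented ones, so check the mergekit README for the version you have installed:

```python
# Sketch: re-running this merge with mergekit's Python API.
# Assumes `pip install mergekit` and that the YAML above is saved as
# golden-llama.yaml (the filename and output directory are placeholders).
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("golden-llama.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./GoldenLlama-3.1-8B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the tokenizer into the output
        lazy_unpickle=True,              # lower peak memory while loading shards
    ),
)
```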

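Once merged (or when pulling the published checkpoint directly), the model loads like any other Llama-3.1 checkpoint with 🤗 Transformers. A minimal sketch, assuming the merged tokenizer carries a chat template inherited from the source models:

```python
# Sketch: basic text generation with the merged model via Transformers.
# bfloat16 matches the merge dtype declared in the configuration above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/GoldenLlama-3.1-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain what a passthrough merge is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```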