---
base_model: grimjim/Llama-3-Oasis-v1-OAS-8B
library_name: transformers
quanted_by: grimjim
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# Llama-3-Oasis-v1-OAS-8B-8bpw_h8_exl2

This is an 8bpw exl2 quant of grimjim/Llama-3-Oasis-v1-OAS-8B.
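"8bpw" means an average of 8 bits of storage per weight. A rough back-of-the-envelope size estimate for the quantized tensors (a sketch only; it ignores the tokenizer, config files, and per-tensor quantization overhead, and the function name is illustrative):

```python
def approx_quant_size_gb(n_params: float, bpw: float) -> float:
    """Rough on-disk size of quantized weights: params * bits-per-weight / 8 bytes."""
    return n_params * bpw / 8 / 1e9

# An ~8B-parameter model at 8.0 bits per weight is on the order of 8 GB.
size_gb = approx_quant_size_gb(8e9, 8.0)
```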

This model is a merge of pre-trained language models created using mergekit.

Built with Meta Llama 3.

## Merge Details

### Merge Method

This model was merged using the task arithmetic merge method, with mlabonne/NeuralDaredevil-8B-abliterated as the base.
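Task arithmetic adds weighted parameter deltas (each model minus the base) back onto the base model. A minimal per-tensor sketch in plain Python (not mergekit's implementation; names and the flat-list tensor representation are illustrative):

```python
def task_arithmetic_merge(base, models, weights):
    """Merge per-tensor weights: merged = base + sum_i w_i * (model_i - base).

    base: dict mapping tensor name -> list of floats
    models: list of dicts with the same keys and shapes as base
    weights: one scalar weight per model (0.3 in this card's config)
    """
    merged = {}
    for name, base_t in base.items():
        out = list(base_t)  # start from the base model's tensor
        for model, w in zip(models, weights):
            for i, v in enumerate(model[name]):
                out[i] += w * (v - base_t[i])  # add the weighted delta
        merged[name] = out
    return merged
```

In this scheme the base model implicitly carries weight 1.0, and each merged model contributes only a scaled fraction of its difference from the base.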

### Models Merged

The following models were included in the merge:

- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Hastagaras/Halu-OAS-8B-Llama3

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: mlabonne/NeuralDaredevil-8B-abliterated
  - layer_range: [0, 32]
    model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      weight: 0.3
  - layer_range: [0, 32]
    model: Hastagaras/Halu-OAS-8B-Llama3
    parameters:
      weight: 0.3
```