---
base_model:
- huihui-ai/MicroThinker-1B-Preview
- Hjgugugjhuhjggg/llama-3.2-1B-spinquant-hf
- huihui-ai/Llama-3.2-1B-Instruct-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method, with [huihui-ai/Llama-3.2-1B-Instruct-abliterated](https://huggingface.co/huihui-ai/Llama-3.2-1B-Instruct-abliterated) as the base model.
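Conceptually, a linear merge is a weighted average of the models' parameter tensors; with `normalize: true`, the per-model weights are rescaled to sum to 1. The sketch below is only an illustration of that idea over plain state dicts, assuming all models share the same architecture and tensor names; `linear_merge` is a hypothetical helper, not mergekit's actual implementation.

```python
# Illustrative sketch of a linear merge over model state dicts.
# `linear_merge` is a hypothetical helper, not mergekit's API.
from typing import Dict, List

import torch


def linear_merge(
    state_dicts: List[Dict[str, torch.Tensor]],
    weights: List[float],
    normalize: bool = True,
) -> Dict[str, torch.Tensor]:
    """Weighted average of parameter tensors shared by all models."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
    return merged


# In this card's config, each of the three models uses weight: 1, so with
# normalize: true each effectively contributes 1/3 to the average.
```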
### Models Merged
The following models were included in the merge:
* [huihui-ai/MicroThinker-1B-Preview](https://huggingface.co/huihui-ai/MicroThinker-1B-Preview)
* [Hjgugugjhuhjggg/llama-3.2-1B-spinquant-hf](https://huggingface.co/Hjgugugjhuhjggg/llama-3.2-1B-spinquant-hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - layer_range: [0, 1]
    model: huihui-ai/Llama-3.2-1B-Instruct-abliterated
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100
  - layer_range: [0, 1]
    model: huihui-ai/MicroThinker-1B-Preview
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100
  - layer_range: [0, 1]
    model: Hjgugugjhuhjggg/llama-3.2-1B-spinquant-hf
    parameters:
      weight: 1
      density: 0.9
      gamma: 0.01
      normalize: true
      int8_mask: true
      random_seed: 0
      temperature: 0.5
      top_p: 0.65
      inference: true
      max_tokens: 999999999
      stream: true
      quantization:
        - method: int8
          value: 100
        - method: int4
          value: 100
merge_method: linear
base_model: huihui-ai/Llama-3.2-1B-Instruct-abliterated
weight: 1
density: 0.9
gamma: 0.01
normalize: true
int8_mask: true
random_seed: 0
temperature: 0.5
top_p: 0.65
inference: true
max_tokens: 999999999
stream: true
quantization:
  - method: int8
    value: 100
  - method: int4
    value: 100
parameters:
  weight: 1
  density: 0.9
  gamma: 0.01
  normalize: true
  int8_mask: true
  random_seed: 0
  temperature: 0.5
  top_p: 0.65
  inference: true
  max_tokens: 999999999
  stream: true
  quantization:
    - method: int8
      value: 100
    - method: int4
      value: 100
dtype: float16
```
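To reproduce the merge, the configuration can be saved as e.g. `config.yml` and passed to mergekit's `mergekit-yaml` entry point.

To try the merged model, a standard `transformers` loading snippet should work. This is a minimal sketch; the `model_id` placeholder is an assumption, since the card does not state the repository id for the merged weights.

```python
# Minimal inference sketch using transformers; replace model_id with
# this repository's actual id (not stated in the card itself).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-merged-model"  # placeholder, not a real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches `dtype: float16` in the merge config
    device_map="auto",
)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```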