---
base_model:
- LeroyDyer/Mixtral_AI_MediTron
- LeroyDyer/Mixtral_AI_CyberTron
library_name: transformers
tags:
- mergekit
- merge
- code
- biology
- chemistry
- medical
- not-for-all-audiences
- text-generation-inference
- legal
- finance
datasets:
- medalpaca/medical_meadow_pubmed_causal
- ruslanmv/ai-medical-chatbot
- medalpaca/medical_meadow_mediqa
license: mit
---
Not the highest-scoring model of the collection, but (secretly) its leader: all LoRAs will be applied to this model, so as work continues it will keep improving and seed growth in the other models (which may skew as a result!).
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
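SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than the straight line, which better preserves the geometry of the weights than plain averaging. A minimal NumPy sketch of the operation on two flattened tensors (illustrative only; this standalone `slerp` helper is not mergekit's internal implementation):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)
    omega = np.arccos(dot)           # angle between the two weight directions
    if np.abs(np.sin(omega)) < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * omega) / np.sin(omega)) * v0 \
         + (np.sin(t * omega) / np.sin(omega)) * v1
```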
### Models Merged
The following models were included in the merge:
* [LeroyDyer/Mixtral_AI_MediTron](https://huggingface.co/LeroyDyer/Mixtral_AI_MediTron)
* [LeroyDyer/Mixtral_AI_CyberTron](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: LeroyDyer/Mixtral_AI_MediTron
        layer_range: [0, 32]
      - model: LeroyDyer/Mixtral_AI_CyberTron
        layer_range: [0, 32]
merge_method: slerp
base_model: LeroyDyer/Mixtral_AI_CyberTron
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
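In the `t` block, each `value` list is a gradient of interpolation weights spread across the 32 layers (`t = 0` keeps the base model's weights, `t = 1` takes the other model's): `filter: self_attn` applies to attention tensors, `filter: mlp` to feed-forward tensors, and the bare `value: 0.5` is the default for everything else. A minimal sketch of how such an anchor list could expand into one `t` per layer (assuming linear interpolation between evenly spaced anchors; `layer_t` is an illustrative helper, not mergekit's API):

```python
import numpy as np

def layer_t(values, num_layers=32):
    """Expand a short anchor list into one interpolation weight per layer."""
    anchors = np.linspace(0, num_layers - 1, num=len(values))
    return np.interp(np.arange(num_layers), anchors, values)

# e.g. the self_attn schedule from the config above:
print(layer_t([0, 0.5, 0.3, 0.7, 1]))
```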