
This model uses Llama 2 7B as its backbone; three checkpoints fine-tuned on various Orca-family datasets were merged together.

The three models were combined with a weighted merge, and the model with the best ARC and MMLU performance was given the highest weight.
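The card does not state the exact merge ratios or merge tool, so the following is only a minimal sketch of a weighted linear merge of parameter dictionaries; the 0.5/0.3/0.2 split and the toy parameters are illustrative, not the actual values used.

```python
def linear_merge(state_dicts, weights):
    """Weighted average of parameter dicts (illustrative stand-in for
    full model state_dicts; the best-scoring model gets the largest weight)."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for sd, w in zip(state_dicts, weights)) / total
    return merged

# Toy parameters standing in for the three fine-tuned checkpoints.
models = [{"w": 1.0}, {"w": 2.0}, {"w": 4.0}]
merged = linear_merge(models, weights=[0.5, 0.3, 0.2])
```

In practice the same averaging would be applied tensor-by-tensor over the full Llama 2 7B state dicts rather than over scalars.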

First: Llama 2 7B fine-tuned on beaugogh/openorca-multiplechoice-10k using the NEFTune method.

Second: Llama 2 7B fine-tuned on the SlimOrca dataset.

Third: Llama 2 7B fine-tuned on beaugogh/openorca-multiplechoice-10k.
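The first model's NEFTune step adds uniform noise to the token embeddings during fine-tuning, scaled by alpha / sqrt(L * d) where L is the sequence length and d the embedding dimension. A minimal sketch, assuming alpha = 5 (the card does not state the value used):

```python
import math
import random

def neftune_noise(embeddings, alpha=5.0):
    """Add uniform noise in [-scale, scale] to each embedding entry,
    with scale = alpha / sqrt(L * d), as NEFTune does during training.
    `embeddings` is a list of L rows, each of dimension d."""
    L = len(embeddings)        # sequence length
    d = len(embeddings[0])     # embedding dimension
    scale = alpha / math.sqrt(L * d)
    return [[x + random.uniform(-scale, scale) for x in row] for row in embeddings]
```

The noise is applied only at training time; inference uses the clean embeddings.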

We will add benchmark results once the official numbers are available.

Model size: 6.74B params (Safetensors, FP16)
