---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
- alpaca
- mistral
- not-for-all-audiences
- nsfw
base_model:
- icefog72/Kunokukulemonchini-32k-7b
- icefog72/Mixtral_AI_Cyber_3.m1-BigL
- grimjim/kukulemon-32K-7B
- LeroyDyer/Mixtral_AI_Cyber_3.m1
- Nitral-AI/Kunocchini-7b-128k-test
- Undi95/BigL-7B
model-index:
- name: IceLemonTeaRP-32k-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 67.66
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceLemonTeaRP-32k-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.53
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceLemonTeaRP-32k-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.51
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceLemonTeaRP-32k-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 61.76
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceLemonTeaRP-32k-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 79.72
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceLemonTeaRP-32k-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.4
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceLemonTeaRP-32k-7b
      name: Open LLM Leaderboard
---

# IceLemonTeaRP-32k-7b

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63407b719dbfe0d48b2d763b/PSpmU4l79CrdfzbCd-xSf.png)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

A merge cooked from fresh ingredients to fix the repetition problems of [icefog72/IceTeaRP-7b](https://huggingface.co/icefog72/IceTeaRP-7b).

Prompt template: Alpaca (ChatML may also work).

* measurement.json for exl2 quantization is included.
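For reference, a minimal sketch of building an Alpaca-style prompt in Python. The preamble below is the stock Alpaca wording, not something this card pins down; adjust it to your frontend's instruct settings:

```python
# Minimal sketch of the Alpaca prompt format this model expects.
# The preamble is the common Alpaca convention (an assumption here),
# so tweak it freely for your frontend or character card.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Introduce your character in two sentences.")
print(prompt)
```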
Available exl2 quants:

- [IceLemonTeaRP-32k-7b-4.0bpw-h6-exl2](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b-4.0bpw-h6-exl2)
- [IceLemonTeaRP-32k-7b-4.2bpw-h6-exl2](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b-4.2bpw-h6-exl2)
- [IceLemonTeaRP-32k-7b-6.5bpw-h6-exl2](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b-6.5bpw-h6-exl2)
- [IceLemonTeaRP-32k-7b-8.0bpw-h6-exl2](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b-8.0bpw-h6-exl2)

Thanks to mradermacher for [mradermacher/IceLemonTeaRP-32k-7b-GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF).

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:

* icefog72/Kunokukulemonchini-32k-7b
  * [grimjim/kukulemon-32K-7B](https://huggingface.co/grimjim/kukulemon-32K-7B)
  * [Nitral-AI/Kunocchini-7b-128k-test](https://huggingface.co/Nitral-AI/Kunocchini-7b-128k-test)
* icefog72/Mixtral_AI_Cyber_3.m1-BigL
  * [LeroyDyer/Mixtral_AI_Cyber_3.m1](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_3.m1)
  * [Undi95/BigL-7B](https://huggingface.co/Undi95/BigL-7B)

### Configuration

The following YAML configuration was used to produce this model (passing this file to mergekit's `mergekit-yaml` CLI should reproduce the merge):

```yaml
slices:
  - sources:
      - model: Mixtral_AI_Cyber_3.m1-BigL
        layer_range: [0, 32]
      - model: Kunokukulemonchini-32k-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: Kunokukulemonchini-32k-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_icefog72__IceLemonTeaRP-32k-7b).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 70.43 |
| AI2 Reasoning Challenge (25-Shot) | 67.66 |
| HellaSwag (10-Shot)               | 86.53 |
| MMLU (5-Shot)                     | 64.51 |
| TruthfulQA (0-shot)               | 61.76 |
| Winogrande (5-shot)               | 79.72 |
| GSM8k (5-shot)                    | 62.40 |
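For a quick local test, here is a hedged sketch of loading the full-precision weights with transformers. It assumes torch, transformers, and accelerate are installed and there is enough VRAM for fp16 7B weights; the exl2 and GGUF quants linked above are the lighter options and need their own runtimes (exllamav2, llama.cpp):

```python
# Minimal loading sketch, not an official recipe from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "icefog72/IceLemonTeaRP-32k-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the merge itself was produced in float16
    device_map="auto",          # requires the accelerate package
)

# Alpaca-style prompt, matching the template suggested above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSay hello in character.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```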