---
language:
- en
license: llama3.2
base_model:
- meta-llama/Llama-3.2-3B-Instruct
model-index:
- name: LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 69.31
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 23.81
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 10.42
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 3.24
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.05
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 23.64
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B
      name: Open LLM Leaderboard
---

A quick test tune on top of `meta-llama/Llama-3.2-3B-Instruct` using a ~50/50 mix of instruct and completion data.

Note: Training is nowhere near complete, so I'm unsure how strong an effect it had. It still refuses requests, just like `meta-llama/Llama-3.2-3B-Instruct`.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B-details)

|             Metric|Value|
|-------------------|----:|
|Avg.               |22.41|
|IFEval (0-Shot)    |69.31|
|BBH (3-Shot)       |23.81|
|MATH Lvl 5 (4-Shot)|10.42|
|GPQA (0-shot)      | 3.24|
|MuSR (0-shot)      | 4.05|
|MMLU-PRO (5-shot)  |23.64|
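
Since this is a standard causal LM fine-tune of `meta-llama/Llama-3.2-3B-Instruct`, it should load like any other Llama 3.2 Instruct checkpoint. Below is a minimal usage sketch with the 🤗 `transformers` chat-template API; the generation settings are illustrative assumptions, not settings used for training or evaluation.

```python
# Minimal usage sketch. Assumes the standard transformers chat-template API;
# sampling settings are arbitrary examples, not tuned values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.1-SFT-3B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a short haiku about fine-tuning."},
]

# Apply the Llama 3.2 chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)

# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```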