---
license: cc-by-nc-4.0
datasets:
- jondurbin/bagel-v0.3
base_model: decapoda-research/Antares-11b-v1
model-index:
- name: Antares-11b-v2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 69.03
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decapoda-research/Antares-11b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 87.54
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decapoda-research/Antares-11b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 66.19
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decapoda-research/Antares-11b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 59.17
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decapoda-research/Antares-11b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 83.19
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decapoda-research/Antares-11b-v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 60.5
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=decapoda-research/Antares-11b-v2
      name: Open LLM Leaderboard
---

Fine-tune of Upstage AI's SOLAR-10.7B-Instruct-v1.0 model, using the OpenHermes, Platypus, and Capybara datasets. Additionally fine-tuned on Jon Durbin's Bagel v0.3, plus a few unreleased datasets.

Fine-tuned on 8x 4090s for 1.25 epochs.

### Model Sources

- **Repository:** TBD
- **Demo:** TBD

## Bias, Risks, and Limitations

This fine-tune has had zero alignment, safety data, or anything else shoved down its throat.

## Training Details

### Training Data

See the sidebar for links to the relevant datasets.

### Training Procedure

Trained using QLoRA via the Axolotl tool. A minimal sketch of such a setup is shown below.
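For illustration only, here is a minimal QLoRA setup sketch using `peft` and `bitsandbytes`. The actual Axolotl configuration has not been released, so the LoRA hyperparameters (`r`, `lora_alpha`, `lora_dropout`, `target_modules`) are assumptions; the 4-bit settings mirror the quantization config listed in the Training procedure section below.

```python
# A minimal QLoRA setup sketch -- NOT the released training config.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "decapoda-research/Antares-11b-v1"  # base model per the card metadata

# 4-bit NF4 quantization, mirroring the bitsandbytes config listed below.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; rank, alpha, dropout, and target modules are
# assumptions, since the actual Axolotl config is unreleased.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```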
## Evaluation

TBD

## Training procedure

The following `bitsandbytes` quantization config was used during training:

- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.6.0

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_decapoda-research__Antares-11b-v2).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 70.94 |
| AI2 Reasoning Challenge (25-Shot) | 69.03 |
| HellaSwag (10-Shot)               | 87.54 |
| MMLU (5-Shot)                     | 66.19 |
| TruthfulQA (0-shot)               | 59.17 |
| Winogrande (5-shot)               | 83.19 |
| GSM8k (5-shot)                    | 60.50 |
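As a usage sketch (not part of the original card), the model can be loaded for inference with `transformers`. The model id is taken from the leaderboard links above; loading in `bfloat16` and the presence of a chat template (inherited from the SOLAR instruct base) are assumptions, so adjust the prompt format if needed.

```python
# Minimal inference sketch; the 4-bit settings above were used for training
# and are not required at inference time. bfloat16 here is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "decapoda-research/Antares-11b-v2"  # id per the leaderboard links above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Assumes the tokenizer ships a chat template; fall back to plain-text
# prompting if it does not.
messages = [{"role": "user", "content": "Briefly explain what QLoRA is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```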