---
library_name: peft
license: apache-2.0
tags:
- meta-llama/Llama-2-7b-hf
- code
- instruct
- instruct-code
- logical-reasoning
- Platypus2
datasets:
- garage-bAInd/Open-Platypus
base_model: meta-llama/Llama-2-7b-hf
---
We finetuned meta-llama/Llama-2-7b-hf on the garage-bAInd/Open-Platypus dataset for 5 epochs using [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
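The run produces a PEFT (LoRA) adapter that loads on top of the base model. Below is a minimal inference sketch; `ADAPTER_ID` is a placeholder for this repository's Hub ID, and the Alpaca-style prompt template is an assumption based on how Open-Platypus is commonly formatted, not a confirmed detail of this run.

```python
# Minimal inference sketch: attach this repo's LoRA adapter to the base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-7b-hf"
ADAPTER_ID = "<this-repo-id>"  # placeholder: replace with this adapter's Hub ID

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # load the finetuned adapter

# Alpaca-style prompt (assumed format)
prompt = "### Instruction:\nExplain why the sum of two odd numbers is even.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```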
#### About the Open-Platypus Dataset
Open-Platypus focuses on improving LLM logical reasoning skills and was used to train the Platypus2 family of models. It comprises several sub-datasets, including PRM800K, ScienceQA, SciBench, ReClor, and TheoremQA, among others. These were filtered using keyword search and Sentence Transformers to remove questions with a similarity above 80%. The dataset includes contributions under various licenses, such as MIT, Creative Commons, and Apache 2.0.
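The dataset can be inspected directly with the Hugging Face `datasets` library. The `instruction`/`output` field names below reflect the published dataset schema; verify them against the dataset card before relying on them.

```python
# Peek at Open-Platypus with the Hugging Face `datasets` library.
from datasets import load_dataset

ds = load_dataset("garage-bAInd/Open-Platypus", split="train")
print(ds)                    # column names and row count
print(ds[0]["instruction"])  # one training prompt
print(ds[0]["output"])       # its target response
```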
The finetuning run completed in 1 hour and 30 minutes and cost us only `$15` for the entire run!
#### Hyperparameters & Run details:
- Model Path: meta-llama/Llama-2-7b-hf
- Dataset: garage-bAInd/Open-Platypus
- Learning rate: 0.0002
- Number of epochs: 5
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
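MonsterAPI's finetuner is no-code and its training script is not public, so as a rough open-source equivalent the sketch below reproduces the listed hyperparameters with TRL's `SFTTrainer` and a LoRA adapter. The LoRA rank/alpha, batch size, and prompt template are assumptions rather than reported values, and `SFTTrainer`'s keyword arguments vary across TRL versions.

```python
# Hypothetical re-creation of the run with open-source tooling (TRL + PEFT).
# Values marked "assumed" were not reported for the MonsterAPI run.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("garage-bAInd/Open-Platypus", split="train")

def to_text(example):
    # Alpaca-style template (assumed, matching common Platypus usage)
    return {
        "text": f"### Instruction:\n{example['instruction']}\n\n"
                f"### Response:\n{example['output']}"
    }

dataset = dataset.map(to_text)
splits = dataset.train_test_split(test_size=0.1, seed=42)  # 90% / 10% split

peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")  # assumed

args = TrainingArguments(
    output_dir="llama2-7b-open-platypus",
    learning_rate=2e-4,             # listed above
    num_train_epochs=5,             # listed above
    gradient_accumulation_steps=1,  # listed above
    per_device_train_batch_size=4,  # assumed; not reported
    fp16=True,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    peft_config=peft_config,
    dataset_text_field="text",
)
trainer.train()
```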