---
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
- phi
- phi-2
- logical
- reasoning
- transformers
- text-generation-inference
model-index:
- name: phi-2-logical-sft
  results: []
datasets:
- garage-bAInd/Open-Platypus
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`

```yaml
base_model: microsoft/phi-2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

hub_model_id: MaziyarPanahi/phi-2-logical-sft
hf_use_auth_token: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: garage-bAInd/Open-Platypus
    type: alpaca

dataset_prepared_path:
val_set_size: 0.05
output_dir: ./phi-2-logical-sft-out

sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_torch
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0
lr_scheduler: cosine
learning_rate: 0.000003

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: True
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true
special_tokens:
  pad_token: "<|endoftext|>"
```

</details>
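To reproduce the run, save the config above to a local file and launch training with axolotl 0.4.0's standard entry point, e.g. `accelerate launch -m axolotl.cli.train phi-2-logical-sft.yaml` (the filename here is illustrative; any path to the config works).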

# phi-2-logical-sft

This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset.

It achieves the following results on the evaluation set:
- Loss: 1.0075

## Prompt Template

```
### Instruction:

<prompt> (without the <>)

### Response:
```

## Examples

```
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.

### Instruction:

Write a short story about Goku discovering Kirby has teamed up with Majin Buu to destroy the world.

### Response:
```

```
Goku woke up one day in his hometown of Earth and went out for his daily training routine. He was feeling confident and happy as usual, until he received news that his dear friend Kirby had teamed up with Majin Buu to destroy the world. Shocked and saddened by this news, Goku immediately knew he had to act fast to save his friends and loved ones.

With his superhuman strength and incredible speed, he set off on a mission to stop the evil duo from carrying out their destructive plan. Along the way, Goku encountered many challenges and obstacles, but he never gave up. He fought fiercely against Kirby and Majin Buu, using all of his powers and skills to try and defeat them.

Finally, after a long and grueling battle, Goku emerged victorious. He managed to convince Kirby to turn against Majin Buu and join forces with him to defeat the evil being once and for all. Together, they were able to defeat Majin Buu and save the world from destruction.

Goku felt a sense of relief and happiness wash over him as he realized that his friends were safe and the world was no longer in danger. He knew that he had done everything he could to protect those he cared about, and he was grateful for the opportunity to use his powers for good.
<|endoftext|>
```

## Model description

This is a full-parameter supervised fine-tune (SFT) of phi-2: per the axolotl config above, no LoRA adapter is configured, and training used a sequence length of 4096 with sample packing.

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was trained on [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) in alpaca format, with 5% of the data held out for evaluation (`val_set_size: 0.05`).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2

The total train batch size of 8 follows from 2 samples per device × 4 GPUs × 1 gradient-accumulation step.

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8319        | 0.0   | 1    | 1.0229          |
| 0.8799        | 0.25  | 71   | 1.0208          |
| 0.8349        | 0.5   | 142  | 1.0119          |
| 0.7798        | 0.76  | 213  | 1.0093          |
| 0.8317        | 1.01  | 284  | 1.0083          |
| 0.777         | 1.24  | 355  | 1.0080          |
| 0.7544        | 1.49  | 426  | 1.0075          |
| 0.7037        | 1.74  | 497  | 1.0075          |

### Framework versions

- Transformers 4.39.0.dev0
- PyTorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.0
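
## How to use

Below is a minimal inference sketch that formats a request with the prompt template above and generates a completion. It assumes `transformers`, `torch`, and `accelerate` are installed and that a GPU is available; the instruction text is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/phi-2-logical-sft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the model was trained with bf16
    device_map="auto",           # requires accelerate
)

# Build the prompt following the template documented above.
prompt = (
    "### Instruction:\n"
    "Explain why the sum of two even numbers is always even.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,  # greedy decoding; sample for more varied outputs
    eos_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Generation stops at `<|endoftext|>`, which this model also uses as its pad token (see `special_tokens` in the config).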