---
base_model: microsoft/phi-1_5
library_name: peft
license: mit
tags:
  - generated_from_trainer
model-index:
  - name: outputs/phi-sft-out
    results: []
---

Built with Axolotl

See axolotl config

axolotl version: 0.4.1

base_model: microsoft/phi-1_5
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: ptoro/honkers-phi
    type: alpaca

dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/phi-sft-out

sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true

adapter: qlora
lora_model_dir:
lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_torch
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0
lr_scheduler: cosine
learning_rate: 0.000003

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: True
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true
special_tokens:
  pad_token: "<|endoftext|>"
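
The config above is a standard Axolotl QLoRA setup and can be launched with Axolotl's usual entry point (e.g. `accelerate launch -m axolotl.cli.train config.yml`). For inference, the resulting adapter is loaded on top of the base model. Below is a minimal sketch, assuming the adapter weights are available at the local output path from the config (substitute the Hub repo id of this upload when loading remotely); the prompt follows the alpaca format used by the training dataset, and the instruction text itself is only an illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "microsoft/phi-1_5"
adapter_path = "./outputs/phi-sft-out"  # assumed local path; use the adapter's Hub repo id if remote

# Load the base model in 4-bit, matching load_in_4bit: true in the config.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# Attach the QLoRA adapter trained with this config.
model = PeftModel.from_pretrained(model, adapter_path)

# Alpaca-style prompt (the dataset type in the config); the instruction is illustrative.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is a goose?\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```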

outputs/phi-sft-out

This model is a fine-tuned version of microsoft/phi-1_5 on the ptoro/honkers-phi dataset (see the Axolotl config above). It achieves the following results on the evaluation set:

  • Loss: 1.5482
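
For reference, this loss corresponds to a token-level perplexity of roughly 4.7, assuming the reported value is the mean per-token causal LM cross-entropy (the Trainer default). A quick check:

```python
import math

eval_loss = 1.5482               # final validation loss reported above
perplexity = math.exp(eval_loss)
print(f"eval perplexity ≈ {perplexity:.2f}")  # ≈ 4.70
```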

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-06
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 4
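
These settings map onto a standard PyTorch/transformers optimizer and schedule. A minimal sketch, using a placeholder parameter list (in practice, pass the PEFT model's parameters) and an illustrative total step count of about 380, extrapolated from the step counts logged below rather than reported by the card:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

learning_rate = 3e-6   # from the hyperparameter list above
warmup_steps = 100
total_steps = 380      # illustrative: ~4 epochs at ~95 optimizer steps per epoch

# Placeholder parameter for the sketch; use model.parameters() in real training.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(
    params, lr=learning_rate, betas=(0.9, 0.95), eps=1e-5, weight_decay=0.1
)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
```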

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2333        | 0.0106 | 1    | 1.5896          |
| 1.7286        | 0.2553 | 24   | 1.5891          |
| 1.2823        | 0.5106 | 48   | 1.5875          |
| 1.3856        | 0.7660 | 72   | 1.5844          |
| 1.244         | 1.0213 | 96   | 1.5804          |
| 1.2499        | 1.2447 | 120  | 1.5753          |
| 1.1656        | 1.5    | 144  | 1.5706          |
| 1.1928        | 1.7553 | 168  | 1.5656          |
| 1.1623        | 2.0106 | 192  | 1.5608          |
| 1.2679        | 2.2340 | 216  | 1.5571          |
| 1.2845        | 2.4894 | 240  | 1.5537          |
| 1.1226        | 2.7447 | 264  | 1.5516          |
| 1.2575        | 3.0    | 288  | 1.5497          |
| 1.2465        | 3.2234 | 312  | 1.5486          |
| 1.1699        | 3.4787 | 336  | 1.5483          |
| 1.2021        | 3.7340 | 360  | 1.5482          |

Framework versions

  • PEFT 0.11.2.dev0
  • Transformers 4.41.1
  • Pytorch 2.1.2+cu118
  • Datasets 2.19.1
  • Tokenizers 0.19.1