
Built with Axolotl

See axolotl config

axolotl version: 0.4.1

base_model: Delta-Vector/Holland-4B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: NewEden/CivitAI-SD-Prompts
#    type:
#      system_prompt: ""
#      system_format: "<|im_start|>system\n{system}<|im_end|>\n"
#      field_system: instruction
#      field_instruction: input
#      field_input: ""
#      field_output: output
#      no_input_format: "<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"

#      system_prompt: ""
#      field_instruction: instruction
#      field_input: input
#      field_output: output
#      format: |-
#        <|im_start|>system
#        {instruction}<|im_end|>
#        <|im_start|>user
#        {input}<|im_end|>
#        <|im_start|>assistant
#        {output}

    type: alpaca
    conversation: mpt-30b-instruct
#    field_system: instruction
#    field_instruction: input
#    field_input: input
#    field_output: output
chat_template: alpaca

dataset_prepared_path:
val_set_size: 0.02
output_dir: ./outputs/out2
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: SDprompterV2
wandb_entity:
wandb_watch:
wandb_name: SDprompterV2
wandb_log_model:

gradient_accumulation_steps: 32
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.05
evals_per_epoch: 4
saves_per_epoch: 1
debug:
weight_decay: 0.05
deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json
#deepspeed:
special_tokens:
  pad_token: <|finetune_right_pad_id|>

outputs/out2

This model is a fine-tuned version of Delta-Vector/Holland-4B on the NewEden/CivitAI-SD-Prompts dataset. It achieves the following results on the evaluation set:

  • Loss: 3.3207
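
Since the config sets `type: alpaca` and `chat_template: alpaca`, the model is expected to respond to Alpaca-formatted instructions. Below is a minimal inference sketch; the repository id, the exact prompt wording, and the generation settings are assumptions for illustration, not an official usage snippet.

```python
# Minimal inference sketch.
# Assumptions: repo id "NewEden/SD-Prompter-r3", Alpaca prompt layout, sampling settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NewEden/SD-Prompter-r3"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

# Standard Alpaca instruction format (assumed to match the training template).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Write a Stable Diffusion prompt for a misty mountain lake at sunrise.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```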

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

Per the config above, training uses the NewEden/CivitAI-SD-Prompts dataset, with 2% of the examples held out for evaluation (val_set_size: 0.02).
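
A quick way to inspect that data before training is sketched below; the dataset's column names are not documented here, so treat the printed schema as the source of truth.

```python
# Preview the dataset referenced in the axolotl config.
# Assumption: it loads with the default split name "train"; the Alpaca prompt type
# expects instruction/input/output-style columns, so check the printed features.
from datasets import load_dataset

ds = load_dataset("NewEden/CivitAI-SD-Prompts", split="train")
print(ds)      # features and number of rows
print(ds[0])   # one raw example
```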

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 32
  • total_train_batch_size: 64 (see the arithmetic check after this list)
  • total_eval_batch_size: 2
  • optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 3
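
The total train batch size is the product of the per-device micro-batch size, the gradient accumulation steps, and the device count; a one-line check of that arithmetic:

```python
# total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(1 * 32 * 2)  # 64
```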

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.6981        | 0.1576 | 1    | 4.5728          |
| 3.8616        | 0.3153 | 2    | 4.1908          |
| 3.1772        | 0.6305 | 4    | 3.7547          |
| 2.9103        | 0.9458 | 6    | 3.5690          |
| 2.7797        | 1.2315 | 8    | 3.4499          |
| 2.6686        | 1.5468 | 10   | 3.3910          |
| 2.6075        | 1.8621 | 12   | 3.3576          |
| 2.508         | 2.1527 | 14   | 3.3302          |
| 2.4712        | 2.4680 | 16   | 3.3232          |
| 2.4607        | 2.7833 | 18   | 3.3207          |

Framework versions

  • Transformers 4.45.0.dev0
  • PyTorch 2.4.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1