---
license: apache-2.0
tags:
  - sft
datasets:
  - teknium/openhermes
base_model:
  - unsloth/mistral-7b-bnb-4bit
---

# mistral-7b-openhermes-sft

**mistral-7b-openhermes-sft** is an SFT fine-tuned version of unsloth/mistral-7b-bnb-4bit, trained on the teknium/openhermes dataset.
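
A minimal inference sketch with `transformers`; the repo id `CorticalStack/mistral-7b-openhermes-sft` is assumed from this card's author and title, so adjust it as needed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id (author/title of this card); not confirmed by the card itself.
model_id = "CorticalStack/mistral-7b-openhermes-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain supervised fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```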

## Fine-tuning configuration

### LoRA

- r: 256
- LoRA alpha: 128
- LoRA dropout: 0.0
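
A sketch of this LoRA setup with Unsloth follows; the `target_modules` list is an assumption (the usual Mistral attention and MLP projections), since the card does not state which modules were adapted:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,  # matches "Load in 4-bit (bitsandbytes): True" below
)

model = FastLanguageModel.get_peft_model(
    model,
    r=256,
    lora_alpha=128,
    lora_dropout=0.0,
    # Assumed module set; the card does not list the target modules.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```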

### Training arguments

- Epochs: 1
- Batch size: 4
- Gradient accumulation steps: 6
- Optimizer: adamw_torch_fused
- Max steps: 100
- Learning rate: 0.0002
- Weight decay: 0.1
- Learning rate scheduler type: linear
- Max sequence length: 2048
- Load in 4-bit (bitsandbytes): True
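
A sketch of these arguments with TRL's `SFTTrainer` (older TRL API, where `max_seq_length` and `dataset_text_field` are passed to the trainer directly; newer versions move them into `SFTConfig`). The text field name and output directory are assumptions not stated in this card:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("teknium/openhermes", split="train")

trainer = SFTTrainer(
    model=model,                  # the PEFT model from the LoRA sketch above
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",    # assumed field name
    max_seq_length=2048,
    args=TrainingArguments(
        num_train_epochs=1,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=6,
        optim="adamw_torch_fused",
        max_steps=100,
        learning_rate=0.0002,
        weight_decay=0.1,
        lr_scheduler_type="linear",
        output_dir="outputs",     # assumed
    ),
)
trainer.train()
```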

Trained with Unsloth and Hugging Face's TRL library.