---
datasets:
  - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
language:
  - en
---

# mpt-7b-wizardlm

WizardLM (uncensored), fine-tuned from the MPT-7B base model.

Trained for 3 epochs on 1 x A100 80GB.

Training run: https://wandb.ai/wing-lian/mpt-wizard-7b/runs/2agnd9fz
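A minimal usage sketch with `transformers`. The repo id `winglian/mpt-7b-wizardlm` and the Alpaca-style prompt template are assumptions (the model was trained on an Alpaca evol-instruct dataset, so this template is a reasonable guess, not a documented contract). MPT models ship custom modeling code, so loading requires `trust_remote_code=True`.

```python
def build_prompt(instruction: str) -> str:
    """Format a single-turn Alpaca-style prompt (assumed template)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "winglian/mpt-7b-wizardlm"  # hypothetical repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # MPT uses custom model code, hence trust_remote_code=True.
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

    inputs = tokenizer(build_prompt("Explain beam search."), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```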