document how to use `share_strategy="no"` (#1653) [skip ci] 8a20a7b charlesfrye committed on May 24, 2024
Switch to parallel FFD bin packing algorithm. (#1619) 367b2e8 winglian, daaave committed on May 23, 2024
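For context, FFD (first-fit decreasing) sorts sequences longest-first and drops each one into the first pack that still has room. A minimal serial sketch of the idea behind that commit — the PR parallelizes the packing, and the names and example capacity here are illustrative, not axolotl's internals:

```python
def ffd_pack(lengths: list[int], capacity: int) -> list[list[int]]:
    """First-Fit Decreasing: visit items longest-first, put each into the
    first bin with enough free space, and open a new bin when none fits."""
    bins: list[list[int]] = []  # item indices per bin
    free: list[int] = []        # remaining capacity per bin
    for j in sorted(range(len(lengths)), key=lambda k: -lengths[k]):
        for b in range(len(bins)):
            if lengths[j] <= free[b]:
                bins[b].append(j)
                free[b] -= lengths[j]
                break
        else:
            bins.append([j])
            free.append(capacity - lengths[j])
    return bins

# Packing sequence lengths into 2048-token windows:
print(ffd_pack([1800, 1200, 900, 700, 300], 2048))  # [[0], [1, 3], [2, 4]]
```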
support for custom messages field in sharegpt (#1651) bbfed31 winglian committed on May 23, 2024
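For reference, ShareGPT datasets normally keep their turns under a `conversations` key; this change lets a dataset expose them under a different field name. An illustrative record only — the config key axolotl uses to point at the custom field is not shown in the commit title:

```python
# Illustrative ShareGPT-style record whose turns live under a custom key
# ("dialogue") instead of the default "conversations" field.
record = {
    "dialogue": [
        {"from": "human", "value": "What does FFD stand for?"},
        {"from": "gpt", "value": "First-Fit Decreasing."},
    ]
}
```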
Update tiny-llama qlora.yml addressing eval packing error (#1638) 84bb806 Jaydeep Thik committed on May 22, 2024
enable loraplus setting for dpo trainer (#1646) a27d5e1 thepowerfuldeez committed on May 22, 2024
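LoRA+ trains the LoRA B matrices at a higher learning rate than the A matrices; this commit makes that option apply to DPO runs too. A worked sketch of the relationship, assuming axolotl's `loraplus_lr_ratio` option and illustrative values:

```python
learning_rate = 2e-5     # base LR, applied to the LoRA A matrices
loraplus_lr_ratio = 16   # LoRA+ ratio (illustrative value)
lr_b = learning_rate * loraplus_lr_ratio
print(lr_b)  # 0.00032 -> LR applied to the LoRA B matrices
```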
Fix llama3 chat_template (extra <|eot_id|> on last turn) (#1635) 7c2bf30 leonardlin, winglian committed on May 21, 2024
more fixes to work with runpod + skypilot (#1629) 0c49ecc winglian committed on May 16, 2024
fix setting the authorized keys when there is more than one in the env var (#1626) 2501a37 winglian committed on May 16, 2024
update outputs path so that we can mount workspace to /workspace/data (#1623) 4fde300 winglian committed on May 15, 2024
FIX: max_length and max_prompt_length were not being sent to ORPOTrainer (#1584) 1e1921b Ali Mosavian, winglian committed on May 14, 2024
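These two limits cap tokenized lengths during ORPO preprocessing. A minimal sketch of handing them to TRL via `ORPOConfig`, which is where current TRL versions accept them (values illustrative):

```python
from trl import ORPOConfig

# max_length caps prompt + completion tokens; max_prompt_length caps the
# prompt alone. Before the fix, axolotl never forwarded them, so TRL's
# defaults applied silently.
args = ORPOConfig(
    output_dir="outputs/orpo",  # illustrative
    max_length=1024,
    max_prompt_length=512,
)
```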
feat: Add LLaMA-3 instruct prompt strategies for fine-tuning (#1553) 50421c8 Ram, winglian committed on May 11, 2024
adding llama3 fastchat conversation monkeypatch (#1539) b32c08f Antoni-Joan Solergibert, winglian committed on May 10, 2024
ignore the fsdp_config section too (#1606) [skip ci] fff06af winglian committed on May 9, 2024
make sure to save the lora adapter at the end of RL/dpo training (#1573) 796a085 winglian committed on May 8, 2024
Pass deepspeed and fsdp as None explicitly when merging adapters to allow custom device_map (#1575) 9e1480e chiragjn committed on May 7, 2024
Gradio configuration parameters (#1591) 3367fca Marijn Stollenga, winglian committed on May 6, 2024
Pass weakref to the model in the SIGINT handler to free up the model after the train function (#1581) dde02fc chiragjn, winglian committed on May 3, 2024
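The pattern behind that commit: a SIGINT handler that closes over the model keeps it alive after training, while a `weakref` lets the post-train cleanup actually free it. A generic sketch (the handler body is illustrative, not axolotl's):

```python
import signal
import weakref

def register_sigint(model) -> None:
    model_ref = weakref.ref(model)  # handler no longer keeps the model alive
    def handler(signum, frame):
        m = model_ref()             # None once the model has been freed
        if m is not None:
            pass                    # e.g. save state before exiting
        raise KeyboardInterrupt
    signal.signal(signal.SIGINT, handler)
```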
FIX: TRL trainer preprocessing step was running in only one process (#1583) b9bb169 Ali Mosavian committed on May 3, 2024
Add debug option for RL dataset preprocessing (#1404) cc5d31e abhinand, Nanobit committed on Apr 30, 2024
chore(doc): clarify micro_batch_size (#1579) [skip ci] 1aeece6 Nanobit committed on Apr 30, 2024
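The clarified relationship: `micro_batch_size` is the per-GPU batch per forward/backward pass, and the effective batch size multiplies it by gradient accumulation and GPU count. A worked example with illustrative numbers:

```python
micro_batch_size = 2             # samples per GPU per forward/backward pass
gradient_accumulation_steps = 4  # micro-batches accumulated per optimizer step
num_gpus = 8
effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 64 samples per optimizer step
```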
make sure everything stays in the same dtype when using dpo + FSDP (#1559) 68601ec winglian committed on Apr 22, 2024
Add support for Gemma chat template (#1530) 60f5ce0 Haoxiang-Wang, winglian committed on Apr 21, 2024
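Gemma's template wraps each turn in `<start_of_turn>`/`<end_of_turn>` markers. One way to inspect the target format via the Hugging Face tokenizer (a sanity check, not axolotl's code path; the model is gated, so access is required):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-7b-it")  # gated model
messages = [{"role": "user", "content": "Hello!"}]
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# Prints the prompt with <start_of_turn>user ... <end_of_turn> framing.
```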
wrap prepared_ds_path in str() to avoid TypeError in fsspec package (#1548) 7477a53 Frank Ruis, winglian committed on Apr 21, 2024
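The underlying issue: some fsspec versions raise TypeError when given a `pathlib.Path`, so the fix casts the path to a plain string before `datasets` touches it. A sketch of the pattern (the path is illustrative):

```python
from pathlib import Path
from datasets import load_from_disk

prepared_ds_path = Path("last_run_prepared") / "c2b6e7f"  # illustrative
# fsspec (used inside datasets) can choke on Path objects, so cast first.
dataset = load_from_disk(str(prepared_ds_path))
```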
fix(yml): update llama-3 config (#1543) [skip ci] 0e8f340 Nanobit committed on Apr 19, 2024