---
license: other
---

[Chat & support: my new Discord server](https://discord.gg/Jq4vkcDakD)

[Want to contribute? TheBloke's Patreon page](https://patreon.com/TheBlokeAI)

# OpenAssistant LLaMA 30B SFT 7 HF

This is the HF format repo of [OpenAssistant's LLaMA 30B SFT 7](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor). It is the result of merging the XORs from the above repo with the original Llama 30B weights.

This is epoch 7 of OpenAssistant's training of a Llama 30B model.

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.

Thank you to all my generous patrons and donaters!

# Original model card

```
llama-30b-sft-7:
  dtype: fp16
  log_dir: "llama_log_30b"
  learning_rate: 1e-5
  model_name: /home/ubuntu/Open-Assistant/model/model_training/.saved/llama-30b-super-pretrain/checkpoint-3500
  #model_name: OpenAssistant/llama-30b-super-pretrain
  output_dir: llama_model_30b
  deepspeed_config: configs/zero3_config_sft.json
  weight_decay: 0.0
  residual_dropout: 0.0
  max_length: 2048
  use_flash_attention: true
  warmup_steps: 20
  gradient_checkpointing: true
  gradient_accumulation_steps: 12
  per_device_train_batch_size: 2
  per_device_eval_batch_size: 3
  eval_steps: 101
  save_steps: 485
  num_train_epochs: 4
  save_total_limit: 3
  use_custom_sampler: true
  sort_by_length: false
  #save_strategy: steps
  save_strategy: epoch
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
        input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz
        val_split: 0.05
    - vicuna:
        val_split: 0.05
        max_val_set: 800
        fraction: 1.0
    - dolly15k:
        val_split: 0.05
        max_val_set: 300
    - grade_school_math_instructions:
        val_split: 0.05
    - code_alpaca:
        val_split: 0.05
        max_val_set: 250
```

- **OASST dataset paper:** https://arxiv.org/abs/2304.07327
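Because this repo holds the merged weights in standard Hugging Face format, they can be loaded directly with the `transformers` library. The sketch below is not part of the original card: the model path is a placeholder for this repo's local path or Hub id, and the `<|prompter|>` / `<|assistant|>` markers are an assumption based on how OpenAssistant SFT models are typically prompted.

```python
# Illustrative sketch only, not from the original model card.
# Assumptions: "path/to/oasst-sft-7-llama-30b-hf" is a placeholder for this
# repo's local path or Hub id; the <|prompter|>/<|assistant|> markers follow
# the usual OpenAssistant SFT prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/oasst-sft-7-llama-30b-hf"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # the training config above uses fp16
    device_map="auto",          # requires the accelerate package
)

# Single-turn OpenAssistant-style prompt; end-of-turn token taken from the tokenizer.
prompt = f"<|prompter|>What is the OASST dataset?{tokenizer.eos_token}<|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Loading in fp16 on GPU needs roughly 65 GB of VRAM for a 30B model, so multi-GPU sharding via `device_map="auto"` or a quantized variant may be needed on smaller hardware.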