---
inference: false
license: other
---

Chat & support: [my new Discord server](https://discord.gg/Jq4vkcDakD)

Want to contribute? [TheBloke's Patreon page](https://patreon.com/TheBlokeAI)

# rewoo's Planner 7B GGML

These files are GGML format model files for [rewoo's Planner 7B](https://huggingface.co/rewoo/planner_7B).

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:

* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Planner-7B-GPTQ)
* [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Planner-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Planner-7B-fp16)

## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit `2d5db48`)!

llama.cpp recently made another breaking change to its quantisation methods: https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. You will therefore need llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| planner-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.83 GB | 6.33 GB | 4-bit. |
| planner-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.24 GB | 6.74 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| planner-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.65 GB | 7.15 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| planner-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | 5-bit. Even higher accuracy and resource usage, and slower inference. |
| planner-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.13 GB | 9.63 GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m planner-7b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p` argument with `-i -ins`.
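If you would rather drive the model from Python, [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) wraps the same backend. Below is a minimal sketch, assuming a GGML-era version of llama-cpp-python is installed and the q5_0 file has been downloaded to the working directory; the parameter values simply mirror the command line above and are not prescriptive:

```
from llama_cpp import Llama

# Load the quantised GGML file. n_gpu_layers mirrors -ngl above;
# set it to 0 (or omit it) if you have no GPU acceleration.
llm = Llama(
    model_path="planner-7b.ggmlv3.q5_0.bin",
    n_ctx=2048,       # context size, as with -c 2048
    n_gpu_layers=32,  # assumption: a GPU-enabled build is available
)

prompt = "### Instruction: Write a story about llamas\n### Response:"
result = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(result["choices"][0]["text"])
```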
## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

## Discord

For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.

Thank you to all my generous patrons and donaters!

# Original model card: rewoo's Planner 7B

Alpaca-LoRA adapter weights fine-tuned on the following instruction dataset: https://huggingface.co/datasets/rewoo/planner_instruction_tuning_2k/blob/main/README.md

Training script: borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation.

We use the following parameters:

```
python finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'rewoo/planner_instruction_tuning_2k' \
    --output_dir './lora-alpaca-planner' \
    --batch_size 128 \
    --micro_batch_size 8 \
    --num_epochs 10 \
    --learning_rate 1e-4 \
    --cutoff_len 1024 \
    --val_set_size 200 \
    --lora_r 8 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --lora_target_modules '[q_proj,v_proj]' \
    --train_on_inputs \
    --group_by_length \
    --resume_from_checkpoint 'tloen/alpaca-lora-7b'
```
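For reference, here is a hedged sketch of how the resulting adapter could be applied at inference time with Hugging Face `transformers` and `peft`. The base model ID and output directory follow the command above; the prompt and generation settings are illustrative assumptions, not part of the original card:

```
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the frozen base model that the LoRA adapter was trained against.
base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# Apply the fine-tuned adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, "./lora-alpaca-planner")

prompt = "### Instruction: Plan a weekend trip to Paris\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```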