This is a 4-bit quantized ggml file for use with [llama.cpp](https://github.com/ggerganov/llama.cpp) on the CPU (pre-mmap) or [llama-rs](https://github.com/rustformers/llama-rs).

Original model: https://github.com/pointnetwork/point-alpaca

# How to run

```
./llama-cli -m ./ggml-model-q4_0.bin -f ./alpaca_prompt.txt --repl
```

(`llama-cli` is built from https://github.com/rustformers/llama-rs/tree/57440bffb0d946acf73b37e85498c77fc9dfe715)
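If you need to build `llama-cli` yourself from the pinned commit, a minimal sketch (this assumes a Rust toolchain with `cargo` is installed; the exact binary name and output path may differ between commits):

```shell
# Clone llama-rs and check out the commit referenced above
git clone https://github.com/rustformers/llama-rs
cd llama-rs
git checkout 57440bffb0d946acf73b37e85498c77fc9dfe715

# Build in release mode; the binary lands under target/release/
cargo build --release
```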