---
license: other
---

This is https://huggingface.co/chavinlo/alpaca-native converted to GGML (alpaca.cpp) format and quantized to 4 bits, so it can run on a CPU with about 5 GB of RAM.

For any additional information, please visit the repositories linked below.

alpaca.cpp repo: https://github.com/antimatter15/alpaca.cpp

llama.cpp repo: https://github.com/ggerganov/llama.cpp

original Facebook LLaMA (NOT GGML) repo: https://github.com/facebookresearch/llama
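
Besides the C++ chat binaries from the repos above, a 4-bit GGML file like this one can also be loaded from Python. The sketch below uses the llama-cpp-python bindings, which are an assumption rather than part of this card; note that recent llama-cpp-python releases expect the newer GGUF format, so an older GGML-compatible release may be required. The model file name is a placeholder for whatever the downloaded weights are called.

```python
# Minimal sketch: load the 4-bit GGML weights with llama-cpp-python.
# Assumption: a GGML-compatible release is installed, e.g.
#   pip install llama-cpp-python==0.1.65
# (newer releases only read GGUF files).
from llama_cpp import Llama

# Placeholder path -- replace with the actual name of the downloaded file.
llm = Llama(model_path="./ggml-alpaca-7b-q4.bin", n_ctx=512)

# Standard Alpaca instruction prompt format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what 4-bit quantization does.\n\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```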