---
base_model: meta-llama/Llama-2-7b-chat-hf
---

16-bit GGUF version of https://huggingface.co/meta-llama/Llama-2-7b-chat-hf.

For quantized versions, see https://huggingface.co/models?search=thebloke/llama-2-7b-chat.