Meta's LLaMA 7B - AWQ GGUF
These files are in GGUF format.
The model was converted with llama.cpp using the AWQ (activation-aware weight quantization) method.
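The conversion roughly applies the AWQ scales while producing an FP16 GGUF file and then quantizes that file with llama.cpp's quantize tool. A minimal sketch, assuming the --awq-path option added by the AWQ PR and illustrative file paths; see the PR for the exact steps:

# Apply AWQ scales while converting the original weights to an FP16 GGUF (paths are illustrative)
python convert.py models/llama-7b/ --awq-path awq_cache/llama-7b-w4-g128.pt --outfile models/llama-7b-f16.gguf
# Quantize the FP16 GGUF down to 4-bit (q4_0)
./quantize models/llama-7b-f16.gguf ggml-model-q4_0-awq.gguf q4_0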
How to use models in llama.cpp
./main -m ggml-model-q4_0-awq.gguf -n 128 --prompt "Once upon a time"
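Other common ./main options can be combined with the same model file. For example, a sketch that raises the context size and offloads layers to the GPU (only when llama.cpp is built with GPU support; the layer count here is purely illustrative):

./main -m ggml-model-q4_0-awq.gguf -c 2048 -ngl 32 -n 256 --prompt "Once upon a time"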
Please refer to the instructions in the PR for further details.