runtime error
Downloading (…)chat.ggmlv3.q5_0.bin: 100%|██████████| 4.63G/4.63G [01:39<00:00, 46.8MB/s]
gguf_init_from_file: invalid magic number 67676a74
error loading model: llama_model_loader: failed to load model from /home/user/.cache/huggingface/hub/models--TheBloke--Llama-2-7B-Chat-GGML/snapshots/00109c56c85ca9015795ca06c272cbc65c7f1dbf/llama-2-7b-chat.ggmlv3.q5_0.bin
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/home/user/app/app.py", line 10, in <module>
    llm = Llama(
  File "/home/user/.local/lib/python3.10/site-packages/llama_cpp/llama.py", line 340, in __init__
    assert self.model is not None
AssertionError
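The "invalid magic number 67676a74" line points at the root cause: those bytes decode to the ASCII tag "ggjt", the magic used by GGML-v3 model files, while the `gguf_init_from_file` call in the trace shows this llama.cpp build expects files that begin with the "GGUF" magic instead. A minimal sketch of the decode (the hex value is taken from the log above):

```python
# Decode the magic number reported by gguf_init_from_file.
# 0x67676a74 is the big-endian byte sequence for the GGML-v3 tag.
magic = bytes.fromhex("67676a74")
print(magic.decode("ascii"))  # -> ggjt (a GGML-v3 file, not GGUF)
```

In other words, the downloaded `llama-2-7b-chat.ggmlv3.q5_0.bin` is in the older GGML format, and the installed llama-cpp-python only loads GGUF. The likely fixes are to download a GGUF-format quantization of the same model instead (e.g. from a `…-GGUF` repo; the exact repo and file names are an assumption, check what the publisher actually provides), or to pin an older llama-cpp-python release that still reads GGML (the GGUF cutover happened around 0.1.79; treat that version number as an assumption and verify against the changelog).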