runtime error
Exit code: 1. Reason:
Downloading shards: 100%|██████████| 3/3 [00:25<00:00, 8.44s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 74, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 261, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4159, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1555, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1699, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available on CPU. Please make sure torch can access a CUDA device.
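The downloads succeed; the crash happens afterwards, when the model is loaded. The Space is running on CPU-only hardware, but the from_pretrained call at app.py line 74 requests FlashAttention 2, which requires a CUDA device. Either upgrade the Space to GPU hardware, or only request FlashAttention 2 when a GPU is actually present. Below is a minimal sketch of the guarded load, assuming the call at app.py line 74 passes attn_implementation="flash_attention_2"; model_id is a placeholder, since the actual checkpoint name is not shown in the log.

# Sketch of a CPU-safe loading path for app.py (around line 74).
# model_id is a placeholder; substitute the checkpoint the Space actually loads.
import torch
from transformers import AutoModelForCausalLM

model_id = "your-org/your-model"  # placeholder, not taken from the log

if torch.cuda.is_available():
    # FlashAttention 2 needs a CUDA device and half precision,
    # so only request it when a GPU is available.
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        attn_implementation="flash_attention_2",
        torch_dtype=torch.float16,
        device_map="auto",
    )
else:
    # On CPU, fall back to the default attention implementation.
    model = AutoModelForCausalLM.from_pretrained(model_id)

With the guard in place, the model still loads on CPU hardware using the default attention implementation; it will just run slower than FlashAttention 2 would on a GPU.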