runtime error
Exit code: 1. Reason: The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
Loading pipeline components...: 100%|██████████| 5/5 [00:05<00:00, 1.04s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 13, in <module>
    ).to("cuda")
  File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 454, in to
    module.to(device, dtype)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3156, in to
    return super().to(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1174, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 805, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1160, in convert
    return t.to(
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 314, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
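The traceback shows app.py moving a diffusers pipeline to CUDA with an unconditional `.to("cuda")` on a machine where no GPU is visible (typically a CPU-only Space). A minimal sketch of a device fallback, assuming a standard diffusers pipeline; the model id below is a placeholder, since the actual contents of app.py are not shown:

import torch
from diffusers import DiffusionPipeline

# Placeholder model id; substitute the one loaded in app.py.
model_id = "runwayml/stable-diffusion-v1-5"

# Only request CUDA when a GPU is actually available, otherwise fall back
# to CPU. This avoids the "No CUDA GPUs are available" error above.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=dtype).to(device)

Alternatively, the Space can be switched to GPU hardware so that `torch.cuda.is_available()` returns True and the original `.to("cuda")` call succeeds.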