runtime error

Exit code: 1. Reason: (config.json, model.safetensors, and generation_config.json downloaded successfully; download progress bars omitted)

Traceback (most recent call last):
  File "/home/user/app/app.py", line 6, in <module>
    pipe = pipeline("text-generation", model="benjleite/t5-french-qa")
  File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 1047, in pipeline
    tokenizer = AutoTokenizer.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 953, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2020, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'benjleite/t5-french-qa'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'benjleite/t5-french-qa' is the correct path to a directory containing all relevant files for a T5TokenizerFast tokenizer.
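The OSError indicates the model repo downloaded its weights fine but is missing the tokenizer files (e.g. tokenizer.json / spiece.model), so `pipeline(...)` cannot build a T5TokenizerFast from it. A common workaround is to load the tokenizer explicitly from a compatible base checkpoint and pass it to the pipeline. A minimal sketch of a fixed app.py, assuming "t5-base" is a compatible tokenizer source (the model card may name the actual base checkpoint, so verify before relying on it); note also that T5 is an encoder-decoder model, so the task should be "text2text-generation" rather than "text-generation":

```python
# Assumed fix for app.py: the fine-tuned repo lacks tokenizer files,
# so supply a tokenizer from a base checkpoint instead.
MODEL_ID = "benjleite/t5-french-qa"
TOKENIZER_ID = "t5-base"  # assumption: check the model card for the real base

def build_pipe():
    # Imports kept inside the function so nothing is downloaded at import time.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

    # Load the tokenizer from the base checkpoint, working around the
    # missing-tokenizer OSError raised by the original one-argument call.
    tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

    # T5 is a seq2seq model, so use the text2text-generation task.
    return pipeline("text2text-generation", model=model, tokenizer=tokenizer)
```

If the base tokenizer turns out to be wrong for this fine-tune, the alternative is to ask the repo owner to upload the tokenizer files, since any substitute tokenizer must match the vocabulary the model was trained with.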
