.safetensors conversion error (Not sure this app is still supported)

#40
by ArtemZinkov-Spalah

I tried to convert meta-llama/Llama-3.2-1B (both by downloading it from Hugging Face and by using a local copy on my machine), and it raises an error. The error log is below.

How can this be resolved?

```
Starting python converter
scikit-learn version 1.3.1 is not supported. Minimum required version: 0.17. Maximum required version: 1.1.2. Disabling scikit-learn conversion API.
Initializing DiffusionPipeline with meta-llama/Llama-3.2-1B..
Initializing UNet2DConditionModel with meta-llama/Llama-3.2-1B..
Traceback (most recent call last):
File "guernikatools/torch2coreml.py", line 187, in main
File "diffusers/pipelines/pipeline_utils.py", line 1078, in from_pretrained
File "diffusers/pipelines/pipeline_utils.py", line 1637, in download
File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
File "huggingface_hub/hf_api.py", line 1686, in model_info
File "huggingface_hub/hf_api.py", line 5719, in _build_hf_headers
File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
File "huggingface_hub/utils/_headers.py", line 121, in build_hf_headers
File "huggingface_hub/utils/_headers.py", line 153, in get_token_to_send
huggingface_hub.utils._headers.LocalTokenNotFoundError: Token is required (token=True), but no token found. You need to provide a token or be logged in to Hugging Face with huggingface-cli login or huggingface_hub.login. See https://huggingface.co/settings/tokens.

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "guernikatools/torch2coreml.py", line 198, in main
File "diffusers/pipelines/pipeline_utils.py", line 1078, in from_pretrained
File "diffusers/pipelines/pipeline_utils.py", line 1637, in download
File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
File "huggingface_hub/hf_api.py", line 1686, in model_info
File "huggingface_hub/hf_api.py", line 5719, in _build_hf_headers
File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
File "huggingface_hub/utils/_headers.py", line 121, in build_hf_headers
File "huggingface_hub/utils/_headers.py", line 153, in get_token_to_send
huggingface_hub.utils._headers.LocalTokenNotFoundError: Token is required (token=True), but no token found. You need to provide a token or be logged in to Hugging Face with huggingface-cli login or huggingface_hub.login. See https://huggingface.co/settings/tokens.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "diffusers/configuration_utils.py", line 370, in load_config
File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
File "huggingface_hub/file_download.py", line 1217, in hf_hub_download
File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
File "huggingface_hub/utils/_headers.py", line 121, in build_hf_headers
File "huggingface_hub/utils/_headers.py", line 153, in get_token_to_send
huggingface_hub.utils._headers.LocalTokenNotFoundError: Token is required (token=True), but no token found. You need to provide a token or be logged in to Hugging Face with huggingface-cli login or huggingface_hub.login. See https://huggingface.co/settings/tokens.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "guernikatools/torch2coreml.py", line 500, in
File "guernikatools/torch2coreml.py", line 209, in main
File "diffusers/models/modeling_utils.py", line 704, in from_pretrained
File "diffusers/configuration_utils.py", line 414, in load_config
OSError: Can't load config for 'meta-llama/Llama-3.2-1B'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'meta-llama/Llama-3.2-1B' is the correct path to a directory containing a config.json file
[41050] Failed to execute script 'torch2coreml' due to unhandled exception: Can't load config for 'meta-llama/Llama-3.2-1B'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'meta-llama/Llama-3.2-1B' is the correct path to a directory containing a config.json file
[41050] Traceback:
Traceback (most recent call last):
File "guernikatools/torch2coreml.py", line 187, in main
File "diffusers/pipelines/pipeline_utils.py", line 1078, in from_pretrained
cached_folder = cls.download(
^^^^^^^^^^^^^
File "diffusers/pipelines/pipeline_utils.py", line 1637, in download
info = model_info(
^^^^^^^^^^^
File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
File "huggingface_hub/hf_api.py", line 1686, in model_info
File "huggingface_hub/hf_api.py", line 5719, in _build_hf_headers
File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
File "huggingface_hub/utils/_headers.py", line 121, in build_hf_headers
File "huggingface_hub/utils/_headers.py", line 153, in get_token_to_send
huggingface_hub.utils._headers.LocalTokenNotFoundError: Token is required (token=True), but no token found. You need to provide a token or be logged in to Hugging Face with huggingface-cli login or huggingface_hub.login. See https://huggingface.co/settings/tokens.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "guernikatools/torch2coreml.py", line 198, in main
File "diffusers/pipelines/pipeline_utils.py", line 1078, in from_pretrained
cached_folder = cls.download(
^^^^^^^^^^^^^
File "diffusers/pipelines/pipeline_utils.py", line 1637, in download
info = model_info(
^^^^^^^^^^^
File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
File "huggingface_hub/hf_api.py", line 1686, in model_info
File "huggingface_hub/hf_api.py", line 5719, in _build_hf_headers
File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
File "huggingface_hub/utils/_headers.py", line 121, in build_hf_headers
File "huggingface_hub/utils/_headers.py", line 153, in get_token_to_send
huggingface_hub.utils._headers.LocalTokenNotFoundError: Token is required (token=True), but no token found. You need to provide a token or be logged in to Hugging Face with huggingface-cli login or huggingface_hub.login. See https://huggingface.co/settings/tokens.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "diffusers/configuration_utils.py", line 370, in load_config
config_file = hf_hub_download(
^^^^^^^^^^^^^^^^
File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
File "huggingface_hub/file_download.py", line 1217, in hf_hub_download
File "huggingface_hub/utils/_validators.py", line 118, in _inner_fn
File "huggingface_hub/utils/_headers.py", line 121, in build_hf_headers
File "huggingface_hub/utils/_headers.py", line 153, in get_token_to_send
huggingface_hub.utils._headers.LocalTokenNotFoundError: Token is required (token=True), but no token found. You need to provide a token or be logged in to Hugging Face with huggingface-cli login or huggingface_hub.login. See https://huggingface.co/settings/tokens.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "guernikatools/torch2coreml.py", line 500, in
File "guernikatools/torch2coreml.py", line 209, in main
File "diffusers/models/modeling_utils.py", line 704, in from_pretrained
config, unused_kwargs, commit_hash = cls.load_config(
^^^^^^^^^^^^^^^^
File "diffusers/configuration_utils.py", line 414, in load_config
raise EnvironmentError(
OSError: Can't load config for 'meta-llama/Llama-3.2-1B'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'meta-llama/Llama-3.2-1B' is the correct path to a directory containing a config.json file
```
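Every LocalTokenNotFoundError in the log says the same thing: the converter cannot find a Hugging Face token, and meta-llama/Llama-3.2-1B is a gated repository, so downloading it requires both accepting the license on the model page and authenticating. A minimal sketch of making a token available, assuming the converter reads the standard huggingface_hub token cache:

```python
# Authenticate once so huggingface_hub can find a token in its cache.
# The account must also have been granted access to the gated repo.
from huggingface_hub import login

# "hf_xxx" is a placeholder; create a real token at
# https://huggingface.co/settings/tokens
login(token="hf_xxx")
```

The same thing can be done from a terminal with `huggingface-cli login`, as the error message itself suggests.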
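The final OSError is the generic diffusers fallback once the hub download has failed: for a local conversion, the path handed to the converter has to be a directory that actually contains the model's config.json (or, for a full pipeline, model_index.json). A quick sanity check, where the directory path is a placeholder:

```python
# Verify a local model directory has the files diffusers looks for:
# config.json for a single model, model_index.json for a pipeline.
import os

model_dir = "/path/to/Llama-3.2-1B"  # placeholder; use the real local path

for name in ("config.json", "model_index.json"):
    path = os.path.join(model_dir, name)
    print(f"{name}: {'found' if os.path.isfile(path) else 'missing'}")
```

That said, the log also shows the tool initializing DiffusionPipeline and UNet2DConditionModel, which are Stable Diffusion components; if the converter only supports diffusion checkpoints, a text-only model like Llama-3.2-1B may still fail to convert even after the token issue is fixed.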
