[READ IF YOU DO NOT HAVE ACCESS] Getting access to the model
Hey all! If you do not have access to the model yet, please follow these steps:
- Go to the model page
- Read the Llama 3 Community License Agreement
- At the bottom of the agreement, there's an 'Accept License' button; click it
Meta reviews the applications approximately once an hour. Please wait.
Do not request a DOI; that's unrelated to getting access to the model.
Thanks!
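Many of the replies below boil down to two different HTTP status codes, so it may help to summarize the distinction up front. A minimal sketch (the helper name and messages are my own illustration, not an official API):

```python
# Hypothetical helper: maps the status codes reported in this thread
# to the remedy that usually fixes them.
def diagnose_gated_access(status_code: int, license_accepted: bool) -> str:
    if status_code == 401:
        # Seen when no token is sent at all.
        return "not authenticated: run `huggingface-cli login` with a read token"
    if status_code == 403 and license_accepted:
        # Seen with fine-grained tokens that lack repository permission.
        return "token lacks repo permission: edit the token's Repositories permissions"
    if status_code == 403:
        return "license not accepted yet: click 'Accept License' on the model page"
    return "access looks fine"

print(diagnose_gated_access(403, True))
```

In other words, 401 is an authentication problem on your side, while 403 after approval usually points at the token's own permissions.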
You closed the thread (#97) and pointed people here without fixing the issue. Not everyone is an idiot: there are access permission issues even AFTER being granted access to the model.
An hour! Awesome. Also, thanks jeevansreenivas for the token info.
Thanks but my request has been pending for a month! Is there any way to re-send it?
We're working on having a way to allow people to manage their repo requests and withdraw them so they can re-submit if needed
Not sure if it helps: I got a 403 error even with access permission granted, but the error went away after I added the repo under "Settings --> Access Tokens --> Edit Permissions --> Repositories permissions".
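Before editing token permissions, it can be worth ruling out a missing token entirely, since the libraries pick it up from the environment or the saved login. A small stdlib-only sketch (the variable names are the ones huggingface_hub documents, but treat this as an illustration):

```python
import os
from typing import Optional

def find_hub_token() -> Optional[str]:
    # huggingface_hub reads HF_TOKEN (and the older HUGGING_FACE_HUB_TOKEN)
    # automatically; if both are unset and you never ran `huggingface-cli login`,
    # requests go out anonymously and gated repos answer 401.
    return os.environ.get("HF_TOKEN") or os.environ.get("HUGGING_FACE_HUB_TOKEN")

print("token set" if find_hub_token() else "no token in environment")
```

If this prints "no token in environment" and you never logged in via the CLI, fix that first; only then does the 403/permissions step above apply.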
When I asked for access as Ivan from Russia, I was refused.
When I made a new account and asked for access as Ivan from Ukraine, I was given access.
Just funny.
Hello, I have been granted access to the model repo, but when I try to connect to it from a Colab notebook it says it's a gated repo and I don't have access. Help please?
I have set my access token and checked it; it's all good. So what's the issue?
I chose China because I lived in China. After about a week it said I was rejected. That is absolutely geographical discrimination! We need an explanation!
Thanks but my request has been pending for a month! Is there any way to re-send it?
same issue!
Where is the 'Accept License' button?
+1, where is the 'Accept License' button?
Even after I've been granted access to the model, when I try to download using huggingface-cli after logging in, I get the error below:
"Cannot access gated repo for URL https://huggingface.co/api/models/meta-llama/Meta-Llama-3-8B-Instruct/revision/main.
Access to model meta-llama/Meta-Llama-3-8B-Instruct is restricted. You must be authenticated to access it."
I am having the same issue
I had the same issue, and I think it was because I chose the default fine-grained permissions when I applied for access. I went back into "manage permissions", checked all the unchecked boxes, and then tried again and it worked. Hope this helps.
Rejected in a second. What did I do wrong?
enable developer mode on windows to activate symlinks
it's a limitation on windows
https://huggingface.co/docs/huggingface_hub/en/installation#windows-limitations
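A sketch of that Windows workaround (the environment variable comes from the huggingface_hub installation docs linked above; shown with POSIX `export` syntax, use `set`/`setx` in cmd):

```shell
# Enable Developer Mode (Settings > For developers) so Python can create
# symlinks without admin rights, or run your shell as administrator.
# If neither is possible, the cache falls back to copying files instead of
# symlinking; this variable only silences the resulting warning:
export HF_HUB_DISABLE_SYMLINKS_WARNING=1
echo "HF_HUB_DISABLE_SYMLINKS_WARNING=$HF_HUB_DISABLE_SYMLINKS_WARNING"
```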
I had the same issue, and I think it was because I chose the default fine-grained permissions when I applied for access. I went back into "manage permissions", checked all the unchecked boxes, and then tried again and it worked. Hope this helps.
This is what worked for me. My token had fine-grained permissions, so I checked the unchecked boxes related to read permissions.
What do I need to type into the Affiliation input field?
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like meta-llama/Meta-Llama-3-8B-Instruct is not the path to a directory containing a file named config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
Tried both token types: fine-grained (with repo permissions) and write tokens. I already have access to the gated model.
This is a very annoying error, even when just loading the model, and it needs to be addressed clearly.
@osanseviero I have been rejected when I apply a permission for accessing Llama3 models. Can you help me?
I have been rejected and I wonder why. How do I resubmit? Thanks for helping @osanseviero
I have been rejected because I forgot to agree to share my contact information. Can I re-submit, and how? Thanks for helping @osanseviero
These gatekeepers work together against people: one blocks AI models, another blocks YouTube. Great job!
I can't find "Accept License button"
I have been rejected and I wonder why. How do I resubmit? Thanks for helping @osanseviero
How to do that? Thanks
Hello, I have been granted permission to access the repo ("[Access granted] Your request to access model meta-llama/Meta-Llama-3-70B-Instruct has been accepted"),
but when I run the command "llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml" I get the error below.
Make sure to have access to it at https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct.
401 Client Error.
I have cleared the browser cache, invalidated the token, and set "Settings --> Access Tokens --> Edit Permissions --> Repositories permissions", but I still get the same error. Please can you help resolve this access error?
Thanks in advance.
I have been rejected and I wonder why. And how to resubmit? Thanks for helping @osanseviero
Hi, this is the complete stack trace where I'm getting the 401 error after running "llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml". I do have Read permission under Access Tokens, and under Gated Repositories it shows Meta's Llama 3 models as accepted on the 26th of September. Please can you help here?
But I'm still getting the error when running the line below in my CLI:
CLI:
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
**** ERROR ****
10/03/2024 17:01:32 - INFO - llamafactory.hparams.parser - Process rank: 0, device: mps, n_gpu: 1, distributed training: False, compute dtype: torch.bfloat16
Traceback (most recent call last):
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
response.raise_for_status()
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/resolve/main/config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/transformers/utils/hub.py", line 403, in cached_file
resolved_file = hf_hub_download(
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
return f(*args, **kwargs)
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1232, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1339, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1854, in _raise_on_head_call_error
raise head_call_error
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1746, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1666, in get_hf_file_metadata
r = _request_wrapper(
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 364, in _request_wrapper
response = _request_wrapper(
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 388, in _request_wrapper
hf_raise_for_status(response)
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/huggingface_hub/utils/_http.py", line 423, in hf_raise_for_status
raise _format(GatedRepoError, message, response) from e
huggingface_hub.errors.GatedRepoError: 401 Client Error. (Request ID: Root=1-66febfdc-3797d9f9247e68fc1a2f13c6;0142d17f-17ea-4627-8398-cad729b6db2a)
Cannot access gated repo for url https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/resolve/main/config.json.
Access to model meta-llama/Meta-Llama-3-8B-Instruct is restricted. You must have access to it and be authenticated to access it. Please log in.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/bin/llamafactory-cli", line 8, in
sys.exit(main())
File "/Users/pradeepgondhichatnahalli/LLaMA-Factory/LLaMA-Factory/src/llamafactory/cli.py", line 111, in main
run_exp()
File "/Users/pradeepgondhichatnahalli/LLaMA-Factory/LLaMA-Factory/src/llamafactory/train/tuner.py", line 50, in run_exp
run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
File "/Users/pradeepgondhichatnahalli/LLaMA-Factory/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 44, in run_sft
tokenizer_module = load_tokenizer(model_args)
File "/Users/pradeepgondhichatnahalli/LLaMA-Factory/LLaMA-Factory/src/llamafactory/model/loader.py", line 69, in load_tokenizer
config = load_config(model_args)
File "/Users/pradeepgondhichatnahalli/LLaMA-Factory/LLaMA-Factory/src/llamafactory/model/loader.py", line 122, in load_config
return AutoConfig.from_pretrained(model_args.model_name_or_path, **init_kwargs)
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py", line 1006, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/transformers/configuration_utils.py", line 567, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/transformers/configuration_utils.py", line 626, in _get_config_dict
resolved_config_file = cached_file(
File "/Users/pradeepgondhichatnahalli/.pyenv/versions/3.9.11/lib/python3.9/site-packages/transformers/utils/hub.py", line 421, in cached_file
raise EnvironmentError(
OSError: You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct.
401 Client Error. (Request ID: Root=1-66febfdc-3797d9f9247e68fc1a2f13c6;0142d17f-17ea-4627-8398-cad729b6db2a)
Cannot access gated repo for url https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/resolve/main/config.json.
Access to model meta-llama/Meta-Llama-3-8B-Instruct is restricted. You must have access to it and be authenticated to access it. Please log in.
Please ignore, the above issue was resolved after running the command below from the CLI:
huggingface-cli login
Then you need to provide your Hugging Face access token. Hugging Face requires you to authenticate access to a gated repo with the token, so this was purely an access issue. Run the command above and provide the token. It works!
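For anyone wondering why the login step fixes the 401: the stored token is sent as a bearer header on every download. Conceptually (stdlib sketch, no real request is made; the URL is the one from the error above and "hf_xxx" is a placeholder):

```python
import urllib.request

def gated_request(url: str, token: str) -> urllib.request.Request:
    # The Hub authorizes gated files via this header; after `huggingface-cli login`,
    # huggingface_hub adds it for you automatically on every request.
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {token}")
    return req

req = gated_request(
    "https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/resolve/main/config.json",
    "hf_xxx",  # placeholder token
)
print(req.get_header("Authorization"))
```

Without the login (or an HF_TOKEN environment variable), no such header is sent, and the gated repo responds with 401 even if your access request was approved.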
Dear all, my request has been in "pending" status for two weeks. Could you please provide support?