Sincere Appreciation!!!
Thank you so much for your great effort in bringing this dataset to Hugging Face; it really means a lot for our work!!!
Could you please upload the following training splits to this repo?
Once again, many thanks for your efforts!
Hi @Michael34234, I am still figuring out how HF works.
Can you please tell me whether you can download the TRAIN_0 split with the following lines of code:
# First install the client: pip install "huggingface_hub[cli]"
from huggingface_hub import snapshot_download

# Download only the TRAIN_0 split
snapshot_download(repo_id="SilvioGiancola/TrackingNet",
                  repo_type="dataset", revision="main",
                  allow_patterns="*TRAIN_0/*",
                  local_dir="TrackingNet_HF")
I am uploading the other splits, but it will take a few days to commit, push and upload 1.2TB of data.
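For reference, the upload side can be done split by split with upload_folder (a rough sketch, not necessarily my exact workflow; the local folder path is hypothetical):

from huggingface_hub import HfApi

api = HfApi()
# Upload one split at a time to keep each commit manageable
api.upload_folder(folder_path="TrackingNet/TRAIN_1",  # hypothetical local path
                  path_in_repo="TRAIN_1",
                  repo_id="SilvioGiancola/TrackingNet",
                  repo_type="dataset")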
BTW, my code is:

from huggingface_hub import snapshot_download

snapshot_download(repo_id='SilvioGiancola/TrackingNet',
                  repo_type='dataset',
                  local_dir='/home/liyou/opensource_datasets/TrackingNet',
                  resume_download=True,
                  local_dir_use_symlinks=False)
Maybe directly uploading the .zip file would be faster, if the folder consists of too many small files?
Great, thanks for sharing your code. I like that local_dir_use_symlinks=False option better.
I considered the zip, but HF has a limit of 50GB per file and the zip versions of the splits are ~90GB each.
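If the zip route were ever needed, one workaround would be splitting each archive into parts below the limit before uploading (a minimal sketch, untested on this dataset; the file names are hypothetical):

import os

CHUNK_SIZE = 45 * 1024**3   # 45 GB per part, safely under the 50 GB limit
BUF_SIZE = 64 * 1024**2     # 64 MB read buffer

size = os.path.getsize("TRAIN_0.zip")  # hypothetical archive name
with open("TRAIN_0.zip", "rb") as src:
    num_parts = (size + CHUNK_SIZE - 1) // CHUNK_SIZE
    for part in range(num_parts):
        # Stream each ~45 GB chunk into its own part file
        with open(f"TRAIN_0.zip.part{part:03d}", "wb") as dst:
            written = 0
            while written < CHUNK_SIZE:
                data = src.read(min(BUF_SIZE, CHUNK_SIZE - written))
                if not data:
                    break
                dst.write(data)
                written += len(data)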
Actually, I would not advise using local_dir_use_symlinks=False, as it will overwrite the files every time you run the code, creating overhead on the Hugging Face servers. Please prefer this code instead:
snapshot_download(repo_id="SilvioGiancola/TrackingNet",
repo_type="dataset", revision="main",
local_dir="TrackingNet_HF",
allow_patterns="*TRAIN_0/*")
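With the default symlink behaviour, repeated calls reuse the local cache instead of re-downloading. Note that allow_patterns also accepts a list, so once the other splits are uploaded something like this should work (untested until the uploads finish):

from huggingface_hub import snapshot_download

# Grab several splits in one call by passing a list of patterns
snapshot_download(repo_id="SilvioGiancola/TrackingNet",
                  repo_type="dataset", revision="main",
                  local_dir="TrackingNet_HF",
                  allow_patterns=["*TRAIN_0/*", "*TRAIN_1/*"])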
Got it! That really helps!