Clean refs/convert/duckdb

#4
by zinc75 - opened
Laboratoire de Mécanique des Structures et des Systèmes Couplés org

cc @albertvillanova @lhoestq @severo

We are in the process of re-uploading the dataset in its final form. The main branch has a substantial commit history with over 1,950 commits, as we tested various combinations of subsets in earlier preliminary versions.

Once the upload is complete, we plan to use super_squash_history to clean the main branch. From my understanding, after running the super_squash_history command, only the latest commit will remain, and there will be no references to the old subsets from the preliminary versions. However, I have noticed that the refs/convert/duckdb branch contains numerous subfolders corresponding to outdated subsets that we no longer use.
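
For reference, a minimal sketch of the call we have in mind, using huggingface_hub's HfApi.super_squash_history (assuming a token with write access is configured):

from huggingface_hub import HfApi

api = HfApi()

# Squash the whole history of the main branch into a single commit.
api.super_squash_history(
    repo_id="Cnam-LMSSC/vibravox",
    repo_type="dataset",
    branch="main",
)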

I have reviewed the documentation but could not find any instructions on how to completely clean this auto-generated branch, so that only the subfolders corresponding to the subsets of the latest commit remain.

Could you please advise on how to handle this gracefully? We prefer not to delete and recreate the repository, as this would result in losing over 100 access requests that we want to retain.

Thanks for your help.

Laboratoire de Mécanique des Structures et des Systèmes Couplés org

cc @albertvillanova @lhoestq @severo

There are also more than 3,000 commits in the auto-generated refs/convert/parquet branch.

Is it safe to use super_squash_history on this branch to remove all commits corresponding to old versions of the dataset/subsets?
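
For context, a minimal sketch of how to count those commits, using huggingface_hub's HfApi.list_repo_commits (assuming the conversion ref is accepted as a revision):

from huggingface_hub import HfApi

api = HfApi()

# List the commits accumulated on the auto-generated parquet branch.
commits = api.list_repo_commits(
    "Cnam-LMSSC/vibravox",
    repo_type="dataset",
    revision="refs/convert/parquet",
)
print(len(commits))  # more than 3,000 in our case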

Best regards,

Laboratoire de Mécanique des Structures et des Systèmes Couplés org
edited Jun 11

FYI, one more comment: we tried using super_squash_history on the main branch, and it completely disrupted the push_to_hub functionality.

Every time we attempted to push the new version of the dataset to the squashed main branch, we encountered the following error:

403 Forbidden: Access Denied.
Cannot access content at: https://huggingface.co/datasets/Cnam-LMSSC/vibravox.git/info/lfs/objects/batch.
If you are trying to create or update content, make sure you have a token with the `write` role.

This is puzzling since it's not a token issue (we managed to push a dummy dataset to this repo, including Git-LFS files, but not the actual vibravox dataset files). We also tried pushing to a new branch, thinking the problem was specific to the main branch, but the same error occurred.

Our current workaround is to create another dataset (vibravox2) on the hub, push our files there, duplicate the README.md from vibravox, delete the existing vibravox repo, and then rename vibravox2 to vibravox. The downside is that we would lose the 100+ pending access requests for the gated dataset, discussions, and likes. The upside is that the refs/convert/* branches would be clean again.
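
In huggingface_hub terms, the workaround would look roughly like this (a sketch only; the local folder path is hypothetical):

from huggingface_hub import HfApi

api = HfApi()

# 1. Create a fresh repo and push the files there
#    (its refs/convert/* branches start out clean).
api.create_repo("Cnam-LMSSC/vibravox2", repo_type="dataset")
api.upload_folder(
    repo_id="Cnam-LMSSC/vibravox2",
    repo_type="dataset",
    folder_path="./vibravox_local",  # hypothetical local path
)

# 2. Replace the old repo with the new one. This is the step that
#    loses the pending access requests, discussions and likes.
api.delete_repo("Cnam-LMSSC/vibravox", repo_type="dataset")
api.move_repo(
    from_id="Cnam-LMSSC/vibravox2",
    to_id="Cnam-LMSSC/vibravox",
    repo_type="dataset",
)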

Given that we've tried numerous workarounds and read many related issues on GitHub with no successful solution, it seems the problem might be server-side rather than on our end.

Could you investigate and possibly fix this on your side to allow us to push our dataset to vibravox, or should we proceed with our cumbersome workaround?

Best regards,

cc @albertvillanova @lhoestq @severo @polinaeterna

Laboratoire de Mécanique des Structures et des Systèmes Couplés org
edited Jun 12

Surprisingly, this worked:

from datasets import load_dataset

dataset_speech_noisy = load_dataset("Cnam-LMSSC/vibravox2", "speech_noisy")
dataset_speech_noisy.push_to_hub("Cnam-LMSSC/vibravox", "speech_noisy")

But the classic upload of local files to "Cnam-LMSSC/vibravox" still produces the forbidden-access error, while the same code works for any other dataset such as "Cnam-LMSSC/vibravox2".
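
To be explicit about what we mean by the classic upload, something like the following fails on vibravox but succeeds on vibravox2 (a sketch; the local folder path is hypothetical):

from huggingface_hub import HfApi

api = HfApi()

# This call triggers the 403 on Cnam-LMSSC/vibravox as soon as
# LFS files are involved, while it succeeds on Cnam-LMSSC/vibravox2.
api.upload_folder(
    repo_id="Cnam-LMSSC/vibravox",
    repo_type="dataset",
    folder_path="./vibravox_local",  # hypothetical local path
)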

Laboratoire de Mécanique des Structures et des Systèmes Couplés org

Hi! No, it's not the same: the GitHub error you mention is different (it is related to LFS, while yours is related to permissions).

Have you tried re-creating a token, just in case the current one has an issue?

Laboratoire de Mécanique des Structures et des Systèmes Couplés org

Thanks for your reply, but yes, I already tried that. @zinc75 mentioned this issue because the error only appears when pushing LFS files.

OK, we are investigating internally.

We found some files that were not deleted correctly and removed them; you can try uploading the dataset again now :)

Laboratoire de Mécanique des Structures et des Systèmes Couplés org

It's working well now, thanks! 👌

Laboratoire de Mécanique des Structures et des Systèmes Couplés org

Going back to the original question, do you have any idea why the refs/convert/duckdb branch keeps old files, even after we applied super_squash_history?

super_squash_history is only for the main branch, afaik. Is it an issue for you to have outdated files in the parquet reference branch? (It's not part of the repo when you git clone.)
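
For what it's worth, these conversion refs can be inspected through the Hub API even though a plain git clone does not fetch them. A minimal sketch using huggingface_hub's list_repo_refs:

from huggingface_hub import HfApi

api = HfApi()

# Regular branches and auto-generated conversion refs are listed separately.
refs = api.list_repo_refs("Cnam-LMSSC/vibravox", repo_type="dataset")
print([branch.ref for branch in refs.branches])  # e.g. ['refs/heads/main']
print([conv.ref for conv in refs.converts])      # refs/convert/parquet, refs/convert/duckdb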

Laboratoire de Mécanique des Structures et des Systèmes Couplés org
edited Jul 3

Hi @lhoestq, this is of course not a blocking issue for git clone / load_dataset() usage, but it is quite confusing to see references there to splits/subsets that no longer exist in the dataset in its final form (the super_squash_history command was run precisely to erase references to those old splits/subsets).

What is even more puzzling is that the auto-generated refs/convert/parquet branch does not contain the speech_clean subset (the most important subset of this dataset), while the refs/convert/duckdb branch contains this subset, but in an outdated version.

Indeed, the dates of the latest updates to the parquet files in refs/convert/parquet and refs/convert/duckdb are inconsistent with the latest pushes of the final files to the main branch (21 days ago for the parquet branch, which is missing the speech_clean files, and 21 or 27 days ago depending on the subset for the duckdb branch, whereas the final dataset was pushed to the main branch 18 days ago).
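
One way to check those dates programmatically, sketched with huggingface_hub's list_repo_commits:

from huggingface_hub import HfApi

api = HfApi()

# Compare the date of the most recent commit on each branch.
for revision in ("main", "refs/convert/parquet", "refs/convert/duckdb"):
    last_commit = api.list_repo_commits(
        "Cnam-LMSSC/vibravox",
        repo_type="dataset",
        revision=revision,
    )[0]  # assuming newest-first ordering
    print(revision, last_commit.created_at)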

Are those branches only generated/updated when the dataset viewer is activated? We deactivated the dataset viewer because of this issue, which is still pending as P1 on GitHub.

Is it a problem that those branches are not up to date with the main branch when used with the integrated libraries?

Best regards,

Yes, they are only updated when the viewer is updated; disabling the viewer didn't remove those files as it should have. We'll work on a mechanism to clean the branches when the viewer is disabled.
