url
stringlengths 61
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 75
75
| comments_url
stringlengths 70
70
| events_url
stringlengths 68
68
| html_url
stringlengths 49
51
| id
int64 1.03B
1.84B
| node_id
stringlengths 18
19
| number
int64 3.11k
6.12k
| title
stringlengths 1
290
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
list | milestone
dict | comments
sequence | created_at
timestamp[s] | updated_at
timestamp[s] | closed_at
timestamp[s] | author_association
stringclasses 3
values | active_lock_reason
null | body
stringlengths 2
36.2k
β | reactions
dict | timeline_url
stringlengths 70
70
| performed_via_github_app
null | state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict | is_pull_request
bool 2
classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4475/comments | https://api.github.com/repos/huggingface/datasets/issues/4475/events | https://github.com/huggingface/datasets/pull/4475 | 1,267,798,451 | PR_kwDODunzps45eufw | 4,475 | Improve error message for missing pacakges from inside dataset script | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I opened a PR before I noticed yours ^^' You can find it here: https://github.com/huggingface/datasets/pull/4484\r\n\r\nThe only comment I have regarding your message is that it possibly shows several `pip install` commands, whereas one can run one single `pip install` command with the list of missing dependencies, which is maybe simpler.\r\n\r\nLet me know which one your prefer",
"Closing in favor of #4484. "
] | 2022-06-10T16:59:36 | 2022-10-06T13:46:26 | 2022-06-13T13:16:43 | CONTRIBUTOR | null | Improve the error message for missing packages from inside a dataset script:
With this change, the error message for missing packages for `bigbench` looks as follows:
```
ImportError: To be able to use bigbench, you need to install the following dependencies:
- 'bigbench' using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"'
```
And this is how it looked before:
```
ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4475/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4475/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4475",
"html_url": "https://github.com/huggingface/datasets/pull/4475",
"diff_url": "https://github.com/huggingface/datasets/pull/4475.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4475.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4474/comments | https://api.github.com/repos/huggingface/datasets/issues/4474/events | https://github.com/huggingface/datasets/pull/4474 | 1,267,767,541 | PR_kwDODunzps45en98 | 4,474 | [Docs] How to use with PyTorch page | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-10T16:25:49 | 2022-06-14T14:40:32 | 2022-06-14T14:04:33 | MEMBER | null | Currently the docs about PyTorch are scattered around different pages, and we were missing a place to explain more in depth how to use and optimize a dataset for PyTorch. This PR is related to #4457 which is the TF counterpart :)
cc @Rocketknight1 we can try to align both documentations contents now I think
cc @stevhliu let me know what you think ! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4474/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4474/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4474",
"html_url": "https://github.com/huggingface/datasets/pull/4474",
"diff_url": "https://github.com/huggingface/datasets/pull/4474.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4474.patch",
"merged_at": "2022-06-14T14:04:32"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4473/comments | https://api.github.com/repos/huggingface/datasets/issues/4473/events | https://github.com/huggingface/datasets/pull/4473 | 1,267,555,994 | PR_kwDODunzps45d5-R | 4,473 | Add SST-2 dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"on the hub this dataset is referenced as `sst-2` not `sst2` β is there a canonical orthography? If not, could we name it `sst-2`?",
"@julien-c, we normally do not use hyphens for dataset names: whenever the original dataset name contains a hyphen, we usually:\r\n- either suppress it: CoNLL-2000 (`conll2000`), CORD-19 (`cord19`)\r\n- or replace it with underscore: CC-News (`cc_news`), SQuAD-es (`squad_es`)\r\n\r\nThere are some exceptions though... (I wonder why)\r\n\r\nI think, the reason is there was a 1-to-1 relation with the corresponding Python module name.\r\n\r\nI personally find confusing not having a rule and using both hyphens and underscores indistinctly: you never know which is the right orthography.\r\n\r\nWhichever the decision we make, I would prefer to be applied consistently.\r\n\r\nAlso note that we already implemented this dataset as part of GLUE: https://github.com/huggingface/datasets/blob/master/datasets/glue/glue.py#L163\r\n- dataset name: `glue`\r\n- config name: `sst2`\r\n\r\nOn the other hand, let's see how other libraries name it:\r\n- torchtext: `SST2` https://pytorch.org/text/stable/datasets.html#sst2\r\n- OpenAI CLIP: `rendered-sst2` https://github.com/openai/CLIP/blob/main/data/rendered-sst2.md\r\n- Kaggle: `SST2` https://www.kaggle.com/datasets/atulanandjha/stanford-sentiment-treebank-v2-sst2/version/22\r\n- TensorFlow Datasets: `glue/sst2` https://www.tensorflow.org/datasets/catalog/glue#gluesst2",
"Ok, another option is to open PRs against the models in https://huggingface.co/models?datasets=sst-2 to change their dataset reference to `sst2`\r\n\r\n(BTW some models refer to `sst2` already β but they're less popular: https://huggingface.co/models?datasets=sst2)",
"OK, I'm taking care of the subsequent PRs on models to align with this dataset name."
] | 2022-06-10T13:37:26 | 2022-06-13T14:11:34 | 2022-06-13T14:01:09 | MEMBER | null | Add SST-2 dataset.
Currently it is part of GLUE benchmark.
This PR adds it as a standalone dataset.
CC: @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4473/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4473/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4473",
"html_url": "https://github.com/huggingface/datasets/pull/4473",
"diff_url": "https://github.com/huggingface/datasets/pull/4473.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4473.patch",
"merged_at": "2022-06-13T14:01:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4472/comments | https://api.github.com/repos/huggingface/datasets/issues/4472/events | https://github.com/huggingface/datasets/pull/4472 | 1,267,488,523 | PR_kwDODunzps45drcb | 4,472 | Fix 401 error for unauthticated requests to non-existing repos | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-10T12:38:11 | 2022-06-10T13:05:11 | 2022-06-10T12:55:57 | MEMBER | null | The hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos.
This PR add support for the 401 error and fixes the CI fails on `master` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4472/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4472/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4472",
"html_url": "https://github.com/huggingface/datasets/pull/4472",
"diff_url": "https://github.com/huggingface/datasets/pull/4472.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4472.patch",
"merged_at": "2022-06-10T12:55:56"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4471/comments | https://api.github.com/repos/huggingface/datasets/issues/4471/events | https://github.com/huggingface/datasets/issues/4471 | 1,267,475,268 | I_kwDODunzps5LjCNE | 4,471 | CI error with repo lhoestq/_dummy | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"fixed by https://github.com/huggingface/datasets/pull/4472"
] | 2022-06-10T12:26:06 | 2022-06-10T13:24:53 | 2022-06-10T13:24:53 | MEMBER | null | ## Describe the bug
CI is failing because of repo "lhoestq/_dummy". See: https://app.circleci.com/pipelines/github/huggingface/datasets/12461/workflows/1b040b45-9578-4ab9-8c44-c643c4eb8691/jobs/74269
```
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/datasets/lhoestq/_dummy?full=true
```
The repo seems to no longer exist: https://huggingface.co/api/datasets/lhoestq/_dummy
```
error: "Repository not found"
```
CC: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4471/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4470/comments | https://api.github.com/repos/huggingface/datasets/issues/4470/events | https://github.com/huggingface/datasets/pull/4470 | 1,267,470,051 | PR_kwDODunzps45dnYw | 4,470 | Reorder returned validation/test splits in script template | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-10T12:21:13 | 2022-06-10T18:04:10 | 2022-06-10T17:54:50 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4470/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4470",
"html_url": "https://github.com/huggingface/datasets/pull/4470",
"diff_url": "https://github.com/huggingface/datasets/pull/4470.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4470.patch",
"merged_at": "2022-06-10T17:54:50"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4469/comments | https://api.github.com/repos/huggingface/datasets/issues/4469/events | https://github.com/huggingface/datasets/pull/4469 | 1,267,213,849 | PR_kwDODunzps45cweQ | 4,469 | Replace data URLs in wider_face dataset once hosted on the Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-10T08:13:25 | 2022-06-10T16:42:08 | 2022-06-10T16:32:46 | MEMBER | null | This PR replaces the URLs of data files in Google Drive with our Hub ones, once the data owners have approved to host their data on the Hub.
They also informed us that their dataset is licensed under CC BY-NC-ND. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4469/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4469/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4469",
"html_url": "https://github.com/huggingface/datasets/pull/4469",
"diff_url": "https://github.com/huggingface/datasets/pull/4469.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4469.patch",
"merged_at": "2022-06-10T16:32:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4468/comments | https://api.github.com/repos/huggingface/datasets/issues/4468/events | https://github.com/huggingface/datasets/pull/4468 | 1,266,715,742 | PR_kwDODunzps45bERK | 4,468 | Generalize tutorials for audio and vision | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-09T22:00:44 | 2022-06-14T16:22:02 | 2022-06-14T16:12:00 | MEMBER | null | This PR updates the tutorials to be more generalizable to all modalities. After reading the tutorials, a user should be able to load any type of dataset, know how to index into and slice a dataset, and do the most basic/common type of preprocessing (tokenization, resampling, applying transforms) depending on their dataset.
Other changes include:
- Removed the sections about a dataset's metadata, features, and columns because we cover this in an earlier tutorial about inspecting the `DatasetInfo` through the dataset builder.
- Separate the sharing dataset tutorial into two sections: (1) uploading via the web interface and (2) using the `huggingface_hub` library.
- Renamed some tutorials in the TOC to be more clear and specific.
- Added more text to nudge users towards joining the community and asking questions on the forums.
- If it's okay with everyone, I'd also like to remove the section about loading and using metrics since we have the `evaluate` docs now.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4468/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4468",
"html_url": "https://github.com/huggingface/datasets/pull/4468",
"diff_url": "https://github.com/huggingface/datasets/pull/4468.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4468.patch",
"merged_at": "2022-06-14T16:12:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4467/comments | https://api.github.com/repos/huggingface/datasets/issues/4467/events | https://github.com/huggingface/datasets/issues/4467 | 1,266,218,358 | I_kwDODunzps5LePV2 | 4,467 | Transcript string 'null' converted to [None] by load_dataset() | {
"login": "mbarnig",
"id": 1360633,
"node_id": "MDQ6VXNlcjEzNjA2MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1360633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbarnig",
"html_url": "https://github.com/mbarnig",
"followers_url": "https://api.github.com/users/mbarnig/followers",
"following_url": "https://api.github.com/users/mbarnig/following{/other_user}",
"gists_url": "https://api.github.com/users/mbarnig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbarnig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbarnig/subscriptions",
"organizations_url": "https://api.github.com/users/mbarnig/orgs",
"repos_url": "https://api.github.com/users/mbarnig/repos",
"events_url": "https://api.github.com/users/mbarnig/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbarnig/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @mbarnig, thanks for reporting.\r\n\r\nPlease note that is an expected behavior by `pandas` (we use the `pandas` library to parse CSV files): https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html\r\n```\r\nBy default the following values are interpreted as NaN: \r\nββ, β#N/Aβ, β#N/A N/Aβ, β#NAβ, β-1.#INDβ, β-1.#QNANβ, β-NaNβ, β-nanβ, β1.#INDβ, β1.#QNANβ, β<NA>β, βN/Aβ, βNAβ, βNULLβ, βNaNβ, βn/aβ, βnanβ, βnullβ.\r\n```\r\n(see \"null\" in the last position in the above list).\r\n\r\nIn order to prevent `pandas` from performing that automatic conversion from the string \"null\" to a NaN value, you should pass the `pandas` parameter `keep_default_na=False`:\r\n```python\r\nIn [2]: dataset = load_dataset('csv', data_files={'train': 'null-test.csv'}, keep_default_na=False)\r\nIn [3]: dataset[\"train\"][0][\"transcript\"]\r\nOut[3]: 'null'\r\n```",
"Thanks for the quick answer.",
"@albertvillanova I also ran into this issue, it had me scratching my head for a while! In my case it was tripped by a literal \"NA\" comment collected from a user-facing form (e.g., this question does not apply to me). Thankfully this answer was here, but I feel it is such a common trap that it deserves to be noted in the official docs, maybe [here](https://huggingface.co/docs/datasets/loading#csv)? \r\n\r\nI'm happy to submit a PR if you agree!"
] | 2022-06-09T14:26:00 | 2023-07-04T02:18:39 | 2022-06-09T16:29:02 | NONE | null | ## Issue
I am training a luxembourgish speech-recognition model in Colab with a custom dataset, including a dictionary of luxembourgish words, for example the speaken numbers 0 to 9. When preparing the dataset with the script
`ds_train1 = mydataset.map(prepare_dataset)`
the following error was issued:
```
ValueError Traceback (most recent call last)
<ipython-input-69-1e8f2b37f5bc> in <module>()
----> 1 ds_train = mydataset_train.map(prepare_dataset)
11 frames
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2450 if not _is_valid_text_input(text):
2451 raise ValueError(
-> 2452 "text input must of type str (single example), List[str] (batch or single pretokenized example) "
2453 "or List[List[str]] (batch of pretokenized examples)."
2454 )
ValueError: text input must of type str (single example), List[str] (batch or single pretokenized example) or List[List[str]] (batch of pretokenized examples).
```
Debugging this problem was not easy, all transcriptions in the dataset are correct strings. Finally I discovered that the transcription string 'null' is interpreted as [None] by the `load_dataset()` script. By deleting this row in the dataset the training worked fine.
## Expected result:
transcription 'null' interpreted as 'str' instead of 'None'.
## Reproduction
Here is the code to reproduce the error with a one-row-dataset.
```
with open("null-test.csv") as f:
reader = csv.reader(f)
for row in reader:
print(row)
```
['wav_filename', 'wav_filesize', 'transcript']
['wavs/female/NULL1.wav', '17530', 'null']
```
dataset = load_dataset('csv', data_files={'train': 'null-test.csv'})
```
Using custom data configuration default-81ac0c0e27af3514
Downloading and preparing dataset csv/default to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519...
Downloading data files: 100%
1/1 [00:00<00:00, 29.55it/s]
Extracting data files: 100%
1/1 [00:00<00:00, 23.66it/s]
Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data.
100%
1/1 [00:00<00:00, 25.84it/s]
```
print(dataset['train']['transcript'])
```
[None]
## Environment info
```
!pip install datasets==2.2.2
!pip install transformers==4.19.2
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4467/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4466/comments | https://api.github.com/repos/huggingface/datasets/issues/4466/events | https://github.com/huggingface/datasets/pull/4466 | 1,266,159,920 | PR_kwDODunzps45ZLsd | 4,466 | Optimize contiguous shard and select | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I thought of just mentioning the benefits I got. Here's the code that @lhoestq provided:\r\n\r\n```py\r\nimport os\r\nfrom datasets import load_dataset\r\nfrom tqdm.auto import tqdm\r\n\r\nds = load_dataset(\"squad\", split=\"train\")\r\nos.makedirs(\"tmp\")\r\n\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n size = len(ds) // num_shards\r\n shard = Dataset(ds.data.slice(size * index, size), fingerprint=f\"{ds._fingerprint}_{index}\")\r\n shard.to_json(f\"tmp/data_{index}.jsonl\")\r\n```\r\n\r\nIt is 1.64s. Previously the code was:\r\n\r\n```py\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)\r\n shard.to_json(f\"tmp/data_{index}.jsonl\")\r\n # upload_to_gcs(f\"tmp/data_{index}.jsonl\")\r\n```\r\n\r\nIt was 2min31s. \r\n\r\nI ran it on my humble MacBook Pro:\r\n\r\n<img width=\"574\" alt=\"image\" src=\"https://user-images.githubusercontent.com/22957388/172864881-f1db489a-2305-47f2-a07f-7d3df610b1b8.png\">\r\n",
"I addressed your comments @albertvillanova , let me know what you think :)"
] | 2022-06-09T13:45:39 | 2022-06-14T16:04:30 | 2022-06-14T15:54:45 | MEMBER | null | Currently `.shard()` and `.select()` always create an indices mapping. However if the requested data are contiguous, it's much more optimized to simply slice the Arrow table instead of building an indices mapping. In particular:
- the shard/select operation will be much faster
- reading speed will be much faster in the resulting dataset, since it won't have to do a lookup step in the indices mapping
Since `.shard()` is also used for `.map()` with `num_proc>1`, it will also significantly improve the reading speed of multiprocessed `.map()` operations
Here is an example of speed-up:
```python
>>> import io
>>> import numpy as np
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"a": np.random.rand(10_000_000)})
>>> shard = ds.shard(num_shards=4, index=0, contiguous=True) # this calls `.select(range(2_500_000))`
>>> buf = io.BytesIO()
>>> %time dd.to_json(buf)
Creating json from Arrow format: 100%|ββββββββββββββββββ| 100/100 [00:00<00:00, 376.17ba/s]
CPU times: user 258 ms, sys: 9.06 ms, total: 267 ms
Wall time: 266 ms
```
while previously it was
```python
Creating json from Arrow format: 100%|βββββββββββββββββββ| 100/100 [00:03<00:00, 29.41ba/s]
CPU times: user 3.33 s, sys: 69.1 ms, total: 3.39 s
Wall time: 3.4 s
```
In this simple case the speed-up is x10, but @sayakpaul experienced a x100 speed-up on its data when exporting to JSON.
## Implementation details
I mostly improved `.select()`: it now checks if the input corresponds to a contiguous chunk of data and then it slices the main Arrow table (or the indices mapping table if it exists). To check if the input indices are contiguous it checks two possibilities:
- if the indices is of type `range`, it checks that start >= 0 and step = 1
- otherwise in the general case, it iterates over the indices. If all the indices are contiguous then we're good, otherwise we have to build an indices mapping.
Having to iterate over the indices doesn't cause performance issues IMO because:
- either they are contiguous and in this case the cost of iterating over the indices is much less than the cost of creating an indices mapping
- or they are not contiguous, and then iterating generally stops quickly when it first encounters the first indice that is not contiguous. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4466/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4466/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4466",
"html_url": "https://github.com/huggingface/datasets/pull/4466",
"diff_url": "https://github.com/huggingface/datasets/pull/4466.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4466.patch",
"merged_at": "2022-06-14T15:54:45"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4465/comments | https://api.github.com/repos/huggingface/datasets/issues/4465/events | https://github.com/huggingface/datasets/pull/4465 | 1,265,754,479 | PR_kwDODunzps45X0XY | 4,465 | Fix bigbench config names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-09T08:06:19 | 2022-06-09T14:38:36 | 2022-06-09T14:29:19 | MEMBER | null | Fix https://github.com/huggingface/datasets/issues/4462 in the case of bigbench | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4465/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4465",
"html_url": "https://github.com/huggingface/datasets/pull/4465",
"diff_url": "https://github.com/huggingface/datasets/pull/4465.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4465.patch",
"merged_at": "2022-06-09T14:29:18"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4464/comments | https://api.github.com/repos/huggingface/datasets/issues/4464/events | https://github.com/huggingface/datasets/pull/4464 | 1,265,682,931 | PR_kwDODunzps45XlWW | 4,464 | Extend support for streaming datasets that use xml.dom.minidom.parse | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-09T06:58:25 | 2022-06-09T08:43:24 | 2022-06-09T08:34:16 | MEMBER | null | This PR extends the support in streaming mode for datasets that use `xml.dom.minidom.parse`, by patching that function.
This PR adds support for streaming datasets like "Yaxin/SemEval2015".
Fix #4453. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4464/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4464",
"html_url": "https://github.com/huggingface/datasets/pull/4464",
"diff_url": "https://github.com/huggingface/datasets/pull/4464.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4464.patch",
"merged_at": "2022-06-09T08:34:15"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4463 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4463/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4463/comments | https://api.github.com/repos/huggingface/datasets/issues/4463/events | https://github.com/huggingface/datasets/pull/4463 | 1,265,093,211 | PR_kwDODunzps45Vnzu | 4,463 | Use config_id to check split sizes instead of config name | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"closing in favor of https://github.com/huggingface/datasets/pull/4465"
] | 2022-06-08T17:45:24 | 2022-06-09T08:15:43 | 2022-06-09T08:06:37 | MEMBER | null | Fix https://github.com/huggingface/datasets/issues/4462 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4463/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4463",
"html_url": "https://github.com/huggingface/datasets/pull/4463",
"diff_url": "https://github.com/huggingface/datasets/pull/4463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4463.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4462/comments | https://api.github.com/repos/huggingface/datasets/issues/4462/events | https://github.com/huggingface/datasets/issues/4462 | 1,265,079,347 | I_kwDODunzps5LZ5Qz | 4,462 | BigBench: NonMatchingSplitsSizesError when passing a dataset configuration parameter | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Why not adding `max_examples` as part of the config name?",
"Yup it can also work, and maybe it's simpler this way. Opening a PR to fix bigbench instead of https://github.com/huggingface/datasets/pull/4463",
"Hi @lhoestq,\r\n\r\nThank you for taking a look at this issue, and proposing a solution. \r\nUnfortunately, after trying the fix in #4465 I still see the same issue.\r\n\r\nI think there is some subtlety where the config name gets overwritten somewhere when `BUILDER_CONFIGS`[(link)](https://github.com/huggingface/datasets/blob/master/datasets/bigbench/bigbench.py#L126) is defined. \r\n\r\nIf I print out the `self.config.name` in the current version (with the fix in #4465), I see just the task name, but if I comment out `BUILDER_CONFIGS`, the `num_shots` and `max_examples` gets appended as was meant by #4465.\r\n\r\nI haven't managed to track down where this happens, but I thought you might know? \r\n\r\n(Another comment on your fix: the `name` variable is used to fetch the task from the bigbench API, so modifying it causes an error if it's actually called. This can easily be fixed by having `config_name` variable in addition to the `task_name`)\r\n\r\n\r\n"
] | 2022-06-08T17:31:24 | 2022-07-05T07:39:55 | null | MEMBER | null | As noticed in https://github.com/huggingface/datasets/pull/4125 when a dataset config class has a parameter that reduces the number of examples (e.g. named `max_examples`), then loading the dataset and passing `max_examples` raises `NonMatchingSplitsSizesError`.
This is because it will check for expected the number of examples of the config with the same name without taking into account the `max_examples` parameter. This can be fixed by checking the expected number of examples using the **config id** instead of name. Indeed the config id corresponds to the config name + an optional suffix that depends on the config parameters | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4462/timeline | null | reopened | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4461 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4461/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4461/comments | https://api.github.com/repos/huggingface/datasets/issues/4461/events | https://github.com/huggingface/datasets/issues/4461 | 1,264,800,451 | I_kwDODunzps5LY1LD | 4,461 | AttributeError: module 'datasets' has no attribute 'load_dataset' | {
"login": "AlexNLP",
"id": 59248970,
"node_id": "MDQ6VXNlcjU5MjQ4OTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/59248970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlexNLP",
"html_url": "https://github.com/AlexNLP",
"followers_url": "https://api.github.com/users/AlexNLP/followers",
"following_url": "https://api.github.com/users/AlexNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/AlexNLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlexNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlexNLP/subscriptions",
"organizations_url": "https://api.github.com/users/AlexNLP/orgs",
"repos_url": "https://api.github.com/users/AlexNLP/repos",
"events_url": "https://api.github.com/users/AlexNLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlexNLP/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I'm having the same issueοΌCan you tell me how to solve itοΌ",
"I have the same issue, can you tell me how to solve it? Thanks"
] | 2022-06-08T13:59:20 | 2023-07-28T08:12:22 | 2022-06-08T14:41:00 | NONE | null | ## Describe the bug
I have piped install datasets, but this package doesn't have these attributes: load_dataset, load_metric.
## Environment info
- `datasets` version: 1.9.0
- Platform: Linux-5.13.0-44-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.6.13
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4461/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4461/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4460/comments | https://api.github.com/repos/huggingface/datasets/issues/4460/events | https://github.com/huggingface/datasets/pull/4460 | 1,264,644,205 | PR_kwDODunzps45UHIs | 4,460 | Drop Python 3.6 support | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I've disabled the `test_dummy_dataset_serialize_s3` tests in the Linux CI to avoid the failures (these tests only fail on Windows in 3.6). These failures are unrelated to this PR's changes, and I would like to address this in a new PR.",
"[This comment](https://github.com/pytorch/audio/issues/2363#issuecomment-1179089175) explains the issue with MP3 decoding in `torchaudio` in the latest release (supports Python 3.7+). I fixed CI by pinning `torchaudio` to `<0.12.0`. Another way to fix this issue is by installing `ffmpeg` with conda or using the unofficial GH action. But I don't think it's worth making CI more complex, considering we can wait for the soundfile release, which should bring MP3 decoding, and drop the `torchaudio` dependency then.",
"Yay for dropping Python 3.6!",
"I think we can merge in this state. Also, if an env has Python version < 3.7 installed, we raise a warning, so I don't think we even need to create (and pin) an issue to notify the contributors of this change."
] | 2022-06-08T12:10:18 | 2022-07-26T19:16:39 | 2022-07-26T19:04:21 | CONTRIBUTOR | null | Remove the fallback imports/checks in the code needed for Python 3.6 and update the requirements/CI files. Also, use Python types for the NumPy dtype wherever possible to avoid deprecation warnings in newer NumPy versions.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4460/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4460/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4460",
"html_url": "https://github.com/huggingface/datasets/pull/4460",
"diff_url": "https://github.com/huggingface/datasets/pull/4460.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4460.patch",
"merged_at": "2022-07-26T19:04:21"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4459/comments | https://api.github.com/repos/huggingface/datasets/issues/4459/events | https://github.com/huggingface/datasets/pull/4459 | 1,264,636,481 | PR_kwDODunzps45UFc8 | 4,459 | Add and fix language tags for udhr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-08T12:03:42 | 2022-06-08T12:36:24 | 2022-06-08T12:27:13 | MEMBER | null | Related to #4362. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4459/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4459",
"html_url": "https://github.com/huggingface/datasets/pull/4459",
"diff_url": "https://github.com/huggingface/datasets/pull/4459.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4459.patch",
"merged_at": "2022-06-08T12:27:13"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4457 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4457/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4457/comments | https://api.github.com/repos/huggingface/datasets/issues/4457/events | https://github.com/huggingface/datasets/pull/4457 | 1,263,531,911 | PR_kwDODunzps45QZCU | 4,457 | First draft of the docs for TF + Datasets | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Some links are still missing I think :)",
"This is probably quite close to being ready, so cc some TF people @gante @amyeroberts @merveenoyan just so they see it! No need for a full review, but if you have any comments or suggestions feel free.",
"Thanks ! We plan to make a new release later today for `to_tf_dataset` FYI, so I think we can merge it soon and include this documentation in the new release"
] | 2022-06-07T16:06:48 | 2022-06-14T16:08:41 | 2022-06-14T15:59:08 | MEMBER | null | I might cc a few of the other TF people to take a look when this is closer to being finished, but it's still a draft for now. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4457/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4457/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4457",
"html_url": "https://github.com/huggingface/datasets/pull/4457",
"diff_url": "https://github.com/huggingface/datasets/pull/4457.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4457.patch",
"merged_at": "2022-06-14T15:59:08"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4456 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4456/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4456/comments | https://api.github.com/repos/huggingface/datasets/issues/4456/events | https://github.com/huggingface/datasets/issues/4456 | 1,263,241,449 | I_kwDODunzps5LS4jp | 4,456 | Workflow for Tabular data | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [
"I use below to load a dataset:\r\n```\r\ndataset = datasets.load_dataset(\"scikit-learn/auto-mpg\")\r\ndf = pd.DataFrame(dataset[\"train\"])\r\n```\r\nTBH as said, tabular folk split their own dataset, they sometimes have two splits, sometimes three. Maybe somehow avoiding it for tabular datasets might be good for later. (it's just UX improvement) ",
"is very slow batch access of a dataset (tabular, csv) with many columns to be expected?",
"Define \"many\" ? x)",
"~20k! I was surprised batch loading with as few as 32 samples was really slow. I was speculating the columnar format was the cause -- or do you see good performance with this approx size of tabular data?",
"20k can be a lot for a columnar format but maybe we can optimize a few things.\r\n\r\nIt would be cool to profile the code to see if there's an unoptimized part of the code that slows everything down.\r\n\r\n(it's also possible to kill the job when it accesses the batch, it often gives you the traceback at the location where the code was running)",
"FWIW I've worked with tabular data with 540k columns.",
"thats awesome, whats your secret? would love to see an example!",
"@wconnell I'm not sure what you mean by my secret, I load them into a numpy array π \r\n\r\nAn example dataset is [here](https://portal.gdc.cancer.gov/repository?facetTab=files&filters=%7B%22content%22%3A%5B%7B%22content%22%3A%7B%22field%22%3A%22cases.project.project_id%22%2C%22value%22%3A%5B%22TCGA-CESC%22%5D%7D%2C%22op%22%3A%22in%22%7D%2C%7B%22content%22%3A%7B%22field%22%3A%22files.data_category%22%2C%22value%22%3A%5B%22DNA%20Methylation%22%5D%7D%2C%22op%22%3A%22in%22%7D%5D%2C%22op%22%3A%22and%22%7D&searchTableTab=files) which is a dataset of DNA methylation reads. This dataset is about 950 rows and 450k columns. "
] | 2022-06-07T12:48:22 | 2023-03-06T08:53:55 | null | MEMBER | null | Tabular data are treated very differently than data for NLP, audio, vision, etc. and therefore the worflow for tabular data in `datasets` is not ideal.
For example for tabular data, it is common to use pandas/spark/dask to process the data, and then load the data into X and y (X is an array of features and y an array of labels), then train_test_split and finally feed the data to a machine learning model.
In `datasets` the workflow is different: we use load_dataset, then map, then train_test_split (if we only have a train split) and we end up with columnar dataset splits, not formatted as X and y.
Right now, it is already possible to convert a dataset from and to pandas, but there are still many things that could improve the workflow for tabular data:
- be able to load the data into X and y
- be able to load a dataset from the output of spark or dask (as far as I know it's usually csv or parquet files on S3/GCS/HDFS etc.)
- support "unsplit" datasets explicitly, instead of putting everything in "train" by default
cc @adrinjalali @merveenoyan feel free to complete/correct this :)
Feel free to also share ideas of APIs that would be super intuitive in your opinion ! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4456/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/4456/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4455 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4455/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4455/comments | https://api.github.com/repos/huggingface/datasets/issues/4455/events | https://github.com/huggingface/datasets/pull/4455 | 1,263,089,067 | PR_kwDODunzps45O5F9 | 4,455 | Update data URLs in fever dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-07T10:40:54 | 2022-06-08T07:24:54 | 2022-06-08T07:16:17 | MEMBER | null | As stated in their website, data owners updated their URLs on 28/04/2022.
This PR updates the data URLs.
Fix #4452. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4455/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4455",
"html_url": "https://github.com/huggingface/datasets/pull/4455",
"diff_url": "https://github.com/huggingface/datasets/pull/4455.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4455.patch",
"merged_at": "2022-06-08T07:16:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4454 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4454/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4454/comments | https://api.github.com/repos/huggingface/datasets/issues/4454/events | https://github.com/huggingface/datasets/issues/4454 | 1,262,674,973 | I_kwDODunzps5LQuQd | 4,454 | Dataset Viewer issue for Yaxin/SemEval2015 | {
"login": "WithYouTo",
"id": 18160852,
"node_id": "MDQ6VXNlcjE4MTYwODUy",
"avatar_url": "https://avatars.githubusercontent.com/u/18160852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WithYouTo",
"html_url": "https://github.com/WithYouTo",
"followers_url": "https://api.github.com/users/WithYouTo/followers",
"following_url": "https://api.github.com/users/WithYouTo/following{/other_user}",
"gists_url": "https://api.github.com/users/WithYouTo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WithYouTo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WithYouTo/subscriptions",
"organizations_url": "https://api.github.com/users/WithYouTo/orgs",
"repos_url": "https://api.github.com/users/WithYouTo/repos",
"events_url": "https://api.github.com/users/WithYouTo/events{/privacy}",
"received_events_url": "https://api.github.com/users/WithYouTo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Closing since it's a duplicate of https://github.com/huggingface/datasets/issues/4453"
] | 2022-06-07T03:31:46 | 2022-06-07T11:53:11 | 2022-06-07T11:53:11 | NONE | null | ### Link
_No response_
### Description
the link could not visit
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4454/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4453/comments | https://api.github.com/repos/huggingface/datasets/issues/4453/events | https://github.com/huggingface/datasets/issues/4453 | 1,262,674,105 | I_kwDODunzps5LQuC5 | 4,453 | Dataset Viewer issue for Yaxin/SemEval2015 | {
"login": "WithYouTo",
"id": 18160852,
"node_id": "MDQ6VXNlcjE4MTYwODUy",
"avatar_url": "https://avatars.githubusercontent.com/u/18160852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WithYouTo",
"html_url": "https://github.com/WithYouTo",
"followers_url": "https://api.github.com/users/WithYouTo/followers",
"following_url": "https://api.github.com/users/WithYouTo/following{/other_user}",
"gists_url": "https://api.github.com/users/WithYouTo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WithYouTo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WithYouTo/subscriptions",
"organizations_url": "https://api.github.com/users/WithYouTo/orgs",
"repos_url": "https://api.github.com/users/WithYouTo/repos",
"events_url": "https://api.github.com/users/WithYouTo/events{/privacy}",
"received_events_url": "https://api.github.com/users/WithYouTo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I understand that the issue is that a remote file (URL) is being loaded as a local file. Right @albertvillanova @lhoestq?\r\n\r\n```\r\nMessage: [Errno 2] No such file or directory: 'https://raw.githubusercontent.com/YaxinCui/ABSADataset/main/SemEval2015Task12Corrected/train/restaurants_train.xml'\r\n```",
"`xml.dom.minidom.parse` is not supported in streaming mode. I opened a PR here to fix it:\r\nhttps://huggingface.co/datasets/Yaxin/SemEval2015/discussions/1\r\n\r\nPlease review the PR @WithYouTo and let me know if it works !",
"Additionally, I'm also patching our library, so that we support streaming datasets that use `xml.dom.minidom.parse`."
] | 2022-06-07T03:30:08 | 2022-06-09T08:34:16 | 2022-06-09T08:34:16 | NONE | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4453/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4452 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4452/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4452/comments | https://api.github.com/repos/huggingface/datasets/issues/4452/events | https://github.com/huggingface/datasets/issues/4452 | 1,262,529,654 | I_kwDODunzps5LQKx2 | 4,452 | Trying to load FEVER dataset results in NonMatchingChecksumError | {
"login": "santhnm2",
"id": 5347982,
"node_id": "MDQ6VXNlcjUzNDc5ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5347982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/santhnm2",
"html_url": "https://github.com/santhnm2",
"followers_url": "https://api.github.com/users/santhnm2/followers",
"following_url": "https://api.github.com/users/santhnm2/following{/other_user}",
"gists_url": "https://api.github.com/users/santhnm2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/santhnm2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/santhnm2/subscriptions",
"organizations_url": "https://api.github.com/users/santhnm2/orgs",
"repos_url": "https://api.github.com/users/santhnm2/repos",
"events_url": "https://api.github.com/users/santhnm2/events{/privacy}",
"received_events_url": "https://api.github.com/users/santhnm2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting @santhnm2. We are fixing it.\r\n\r\nData owners updated their URLs recently. We have to align with them, otherwise you do not download anything (that is why ignore_verifications does not work).",
"Hello! Is there any update on this? I am having the same issue 6 months later."
] | 2022-06-06T23:13:15 | 2022-12-15T13:36:40 | 2022-06-08T07:16:16 | NONE | null | ## Describe the bug
Trying to load the `fever` dataset fails with `datasets.utils.info_utils.NonMatchingChecksumError`.
I tried with `download_mode="force_redownload"` but that did not fix the error. I also tried with `ignore_verification=True` but then that raised a `json.decoder.JSONDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('fever', 'v1.0') # Fails with NonMatchingChecksumError
dataset = load_dataset('fever', 'v1.0', download_mode="force_redownload") # Fails with NonMatchingChecksumError
dataset = load_dataset('fever', 'v1.0', ignore_verification=True)` # Fails with JSONDecodeError
```
## Expected results
I expect this call to return with no error raised.
## Actual results
With `ignore_verification=False`:
```
*** datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://s3-eu-west-1.amazonaws.com/fever.public/train.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev_public.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_test.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_test.jsonl']
```
With `ignore_verification=True`:
```
*** json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.3.dev0
- Platform: Linux-4.15.0-50-generic-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4452/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4451 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4451/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4451/comments | https://api.github.com/repos/huggingface/datasets/issues/4451/events | https://github.com/huggingface/datasets/pull/4451 | 1,262,103,323 | PR_kwDODunzps45LkGc | 4,451 | Use newer version of multi-news with fixes | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Awesome thanks @mariosasko!"
] | 2022-06-06T16:57:08 | 2022-06-07T17:40:01 | 2022-06-07T17:14:44 | CONTRIBUTOR | null | Closes #4430. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4451/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4451",
"html_url": "https://github.com/huggingface/datasets/pull/4451",
"diff_url": "https://github.com/huggingface/datasets/pull/4451.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4451.patch",
"merged_at": "2022-06-07T17:14:44"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4450/comments | https://api.github.com/repos/huggingface/datasets/issues/4450/events | https://github.com/huggingface/datasets/pull/4450 | 1,261,878,324 | PR_kwDODunzps45Kzwh | 4,450 | Update README.md of fquad | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-06T13:52:41 | 2022-06-06T14:51:49 | 2022-06-06T14:43:03 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4450/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4450",
"html_url": "https://github.com/huggingface/datasets/pull/4450",
"diff_url": "https://github.com/huggingface/datasets/pull/4450.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4450.patch",
"merged_at": "2022-06-06T14:43:03"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4449 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4449/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4449/comments | https://api.github.com/repos/huggingface/datasets/issues/4449/events | https://github.com/huggingface/datasets/issues/4449 | 1,261,262,326 | I_kwDODunzps5LLVX2 | 4,449 | Rj | {
"login": "Aeckard45",
"id": 87345839,
"node_id": "MDQ6VXNlcjg3MzQ1ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aeckard45",
"html_url": "https://github.com/Aeckard45",
"followers_url": "https://api.github.com/users/Aeckard45/followers",
"following_url": "https://api.github.com/users/Aeckard45/following{/other_user}",
"gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions",
"organizations_url": "https://api.github.com/users/Aeckard45/orgs",
"repos_url": "https://api.github.com/users/Aeckard45/repos",
"events_url": "https://api.github.com/users/Aeckard45/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aeckard45/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-06-06T02:24:32 | 2022-06-06T15:44:50 | 2022-06-06T15:44:50 | NONE | null | import android.content.DialogInterface;
import android.database.Cursor;
import android.os.Bundle;
import android.view.View;
import android.widget.ArrayAdapter;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;
import androidx.appcompat.app.AlertDialog;
import androidx.appcompat.app.AppCompatActivity;
public class MainActivity extends AppCompatActivity {
private EditText editTextID;
private EditText editTextName;
private EditText editTextNum;
private String name;
private int number;
private String ID;
private dbHelper db;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
db = new dbHelper(this);
editTextID = findViewById(R.id.editText1);
editTextName = findViewById(R.id.editText2);
editTextNum = findViewById(R.id.editText3);
Button buttonSave = findViewById(R.id.button);
Button buttonRead = findViewById(R.id.button2);
Button buttonUpdate = findViewById(R.id.button3);
Button buttonDelete = findViewById(R.id.button4);
Button buttonSearch = findViewById(R.id.button5);
Button buttonDeleteAll = findViewById(R.id.button6);
buttonSave.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
name = editTextName.getText().toString();
String num = editTextNum.getText().toString();
if (name.isEmpty() || num.isEmpty()) {
Toast.makeText(MainActivity.this, "Cannot Submit Empty Fields", Toast.LENGTH_SHORT).show();
} else {
number = Integer.parseInt(num);
try {
// Insert Data
db.insertData(name, number);
// Clear the fields
editTextID.getText().clear();
editTextName.getText().clear();
editTextNum.getText().clear();
} catch (Exception e) {
e.printStackTrace();
}
}
}
});
buttonRead.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
final ArrayAdapter<String> adapter = new ArrayAdapter<>(MainActivity.this, android.R.layout.simple_list_item_1);
String name;
String num;
String id;
try {
Cursor cursor = db.readData();
if (cursor != null && cursor.getCount() > 0) {
while (cursor.moveToNext()) {
id = cursor.getString(0); // get data in column index 0
name = cursor.getString(1); // get data in column index 1
num = cursor.getString(2); // get data in column index 2
// Add SQLite data to listView
adapter.add("ID :- " + id + "\n" +
"Name :- " + name + "\n" +
"Number :- " + num + "\n\n");
}
} else {
adapter.add("No Data");
}
cursor.close();
} catch (Exception e) {
e.printStackTrace();
}
// show the saved data in alertDialog
AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);
builder.setTitle("SQLite saved data");
builder.setIcon(R.mipmap.app_icon_foreground);
builder.setAdapter(adapter, new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
}
});
builder.setPositiveButton("OK", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
}
});
AlertDialog dialog = builder.create();
dialog.show();
}
});
buttonUpdate.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
name = editTextName.getText().toString();
String num = editTextNum.getText().toString();
ID = editTextID.getText().toString();
if (name.isEmpty() || num.isEmpty() || ID.isEmpty()) {
Toast.makeText(MainActivity.this, "Cannot Submit Empty Fields", Toast.LENGTH_SHORT).show();
} else {
number = Integer.parseInt(num);
try {
// Update Data
db.updateData(ID, name, number);
// Clear the fields
editTextID.getText().clear();
editTextName.getText().clear();
editTextNum.getText().clear();
} catch (Exception e) {
e.printStackTrace();
}
}
}
});
buttonDelete.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
ID = editTextID.getText().toString();
if (ID.isEmpty()) {
Toast.makeText(MainActivity.this, "Please enter the ID", Toast.LENGTH_SHORT).show();
} else {
try {
// Delete Data
db.deleteData(ID);
// Clear the fields
editTextID.getText().clear();
editTextName.getText().clear();
editTextNum.getText().clear();
} catch (Exception e) {
e.printStackTrace();
}
}
}
});
buttonDeleteAll.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
// Delete all data
// You can simply delete all the data by calling this method --> db.deleteAllData();
// You can try this also
AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);
builder.setIcon(R.mipmap.app_icon_foreground);
builder.setTitle("Delete All Data");
builder.setCancelable(false);
builder.setMessage("Do you really need to delete your all data ?");
builder.setPositiveButton("Yes", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
// User confirmed , now you can delete the data
db.deleteAllData();
// Clear the fields
editTextID.getText().clear();
editTextName.getText().clear();
editTextNum.getText().clear();
}
});
builder.setNegativeButton("No", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
// user not confirmed
dialog.cancel();
}
});
AlertDialog dialog = builder.create();
dialog.show();
}
});
buttonSearch.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
ID = editTextID.getText().toString();
if (ID.isEmpty()) {
Toast.makeText(MainActivity.this, "Please enter the ID", Toast.LENGTH_SHORT).show();
} else {
try {
// Search data
Cursor cursor = db.searchData(ID);
if (cursor.moveToFirst()) {
editTextName.setText(cursor.getString(1));
editTextNum.setText(cursor.getString(2));
Toast.makeText(MainActivity.this, "Data successfully searched", Toast.LENGTH_SHORT).show();
} else {
Toast.makeText(MainActivity.this, "ID not found", Toast.LENGTH_SHORT).show();
editTextNum.setText("ID Not found");
editTextName.setText("ID not found");
}
cursor.close();
} catch (Exception e) {
e.printStackTrace();
}
}
}
});
}
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4449/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4448/comments | https://api.github.com/repos/huggingface/datasets/issues/4448/events | https://github.com/huggingface/datasets/issues/4448 | 1,260,966,129 | I_kwDODunzps5LKNDx | 4,448 | New Preprocessing Feature - Deduplication [Request] | {
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! The [datasets_sql](https://github.com/mariosasko/datasets_sql) package lets you easily find distinct rows in a dataset (an example with `SELECT DISTINCT` is in the readme). Deduplication is (still) not part of the official API because it's hard to implement for datasets bigger than RAM while only using the native PyArrow ops.\r\n\r\n(Btw, this is a duplicate of https://github.com/huggingface/datasets/issues/2514)",
"Here is an example using the [datasets_sql](https://github.com/mariosasko/datasets_sql) mentioned \r\n\r\n```python \r\nfrom datasets_sql import query\r\n\r\ndataset = load_dataset(\"imdb\", split=\"train\")\r\n\r\n# If you dont have an id column just add one by enumerating\r\ndataset=dataset.map(lambda x,i: {\"id\":i}, with_indices=True)\r\n\r\nid_column='id'\r\nunique_column='text'\r\n\r\n# always selects min id\r\nunique_dataset = query(f\"SELECT dataset.* FROM dataset JOIN (SELECT MIN({id_column}) as unique_id FROM dataset group by {unique_column}) ON unique_id=dataset.{id_column}\")\r\n```\r\nNot ideal for large datasets but good enough for basic cases.\r\nSure would be nice to have in the library π€ "
] | 2022-06-05T05:32:56 | 2023-03-08T17:38:37 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
Many large datasets are full of duplications and it has been shown that deduplicating datasets can lead to better performance while training, and more truthful evaluation at test-time.
A feature that allows one to easily deduplicate a dataset can be cool!
**Describe the solution you'd like**
We can define a function and keep only the first/last data-point that yields the value according to this function.
**Describe alternatives you've considered**
The clear alternative is to repeat a clear boilerplate every time someone want to deduplicate a dataset.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4448/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4447/comments | https://api.github.com/repos/huggingface/datasets/issues/4447/events | https://github.com/huggingface/datasets/pull/4447 | 1,260,041,805 | PR_kwDODunzps45E4A- | 4,447 | Minor fixes/improvements in `scene_parse_150` card | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-03T15:22:34 | 2022-06-06T15:50:25 | 2022-06-06T15:41:37 | CONTRIBUTOR | null | Add `paperswithcode_id` and fix some links in the `scene_parse_150` card. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4447/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4447",
"html_url": "https://github.com/huggingface/datasets/pull/4447",
"diff_url": "https://github.com/huggingface/datasets/pull/4447.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4447.patch",
"merged_at": "2022-06-06T15:41:37"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4446 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4446/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4446/comments | https://api.github.com/repos/huggingface/datasets/issues/4446/events | https://github.com/huggingface/datasets/pull/4446 | 1,260,028,995 | PR_kwDODunzps45E1Qb | 4,446 | Add missing kwargs to docstrings | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-03T15:10:27 | 2022-06-03T16:10:09 | 2022-06-03T16:01:29 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4446/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4446",
"html_url": "https://github.com/huggingface/datasets/pull/4446",
"diff_url": "https://github.com/huggingface/datasets/pull/4446.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4446.patch",
"merged_at": "2022-06-03T16:01:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4445 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4445/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4445/comments | https://api.github.com/repos/huggingface/datasets/issues/4445/events | https://github.com/huggingface/datasets/pull/4445 | 1,259,947,568 | PR_kwDODunzps45EjtA | 4,445 | Fix missing args in docstring of load_dataset_builder | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-03T13:55:50 | 2022-06-03T14:35:32 | 2022-06-03T14:27:09 | MEMBER | null | Currently, the docstring of `load_dataset_builder` only contains the first parameter `path` (no other):
- https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/loading_methods#datasets.load_dataset_builder.path | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4445/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4445",
"html_url": "https://github.com/huggingface/datasets/pull/4445",
"diff_url": "https://github.com/huggingface/datasets/pull/4445.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4445.patch",
"merged_at": "2022-06-03T14:27:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4444 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4444/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4444/comments | https://api.github.com/repos/huggingface/datasets/issues/4444/events | https://github.com/huggingface/datasets/pull/4444 | 1,259,738,209 | PR_kwDODunzps45D2XX | 4,444 | Fix kwargs in docstrings | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-03T10:29:02 | 2022-06-03T11:01:28 | 2022-06-03T10:52:46 | MEMBER | null | To fix the rendering of `**kwargs` in docstrings, a parentheses must be added afterwards.
See:
- huggingface/doc-builder/issues/235 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4444/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4444",
"html_url": "https://github.com/huggingface/datasets/pull/4444",
"diff_url": "https://github.com/huggingface/datasets/pull/4444.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4444.patch",
"merged_at": "2022-06-03T10:52:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4443/comments | https://api.github.com/repos/huggingface/datasets/issues/4443/events | https://github.com/huggingface/datasets/issues/4443 | 1,259,606,334 | I_kwDODunzps5LFBE- | 4,443 | Dataset Viewer issue for openclimatefix/nimrod-uk-1km | {
"login": "ZYMXIXI",
"id": 32382826,
"node_id": "MDQ6VXNlcjMyMzgyODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/32382826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZYMXIXI",
"html_url": "https://github.com/ZYMXIXI",
"followers_url": "https://api.github.com/users/ZYMXIXI/followers",
"following_url": "https://api.github.com/users/ZYMXIXI/following{/other_user}",
"gists_url": "https://api.github.com/users/ZYMXIXI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZYMXIXI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZYMXIXI/subscriptions",
"organizations_url": "https://api.github.com/users/ZYMXIXI/orgs",
"repos_url": "https://api.github.com/users/ZYMXIXI/repos",
"events_url": "https://api.github.com/users/ZYMXIXI/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZYMXIXI/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"If I understand correctly, this is due to the key `split` missing in the line https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L41 of the script.\r\nMaybe @albertvillanova could confirm.",
"I'm having a look.",
"Indeed there are several issues in this dataset loading script.\r\n\r\nThe one pointed out by @severo: for the default configuration \"crops\": https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L244\r\n- The download manager downloads `_URL`\r\n- But `_URL` is not defined: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L41\r\n ```python\r\n _URL = {'train': []}\r\n ```\r\n- Afterwards, for each split, a different key in `_ULR` is used, but it only contains one key: \"train\"\r\n - \"valid\" key: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L260\r\n - \"test key: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L269\r\n \r\nThese keys do not exist inside `_URL`, thus the error message reported in the viewer: \r\n```\r\nException: KeyError\r\nMessage: 'valid'\r\n```",
"Would anyone want to submit a Hub PR (or open a Discussion for the authors to be aware) to this dataset? https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km",
"Hi, I'm the main author for that dataset, so I'll work on updating it! I was working on debugging some stuff awhile ago, which is what broke it. ",
"I've opened a Discussion page, so that we can ask/answer and propose fixes until the script works properly: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/discussions/1\r\n\r\nCC: @julien-c @jacobbieker "
] | 2022-06-03T08:17:16 | 2022-06-07T08:23:52 | null | NONE | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4443/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4442/comments | https://api.github.com/repos/huggingface/datasets/issues/4442/events | https://github.com/huggingface/datasets/issues/4442 | 1,258,589,276 | I_kwDODunzps5LBIxc | 4,442 | Dataset Viewer issue for amazon_polarity | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks, looking at it",
"Not sure what happened π¬, but it's fixed"
] | 2022-06-02T19:18:38 | 2022-06-07T18:50:37 | 2022-06-07T18:50:37 | MEMBER | null | ### Link
https://huggingface.co/datasets/amazon_polarity/viewer/amazon_polarity/test
### Description
For some reason the train split is OK but the test split is not for this dataset:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/amazon_polarity/__init__.py'
```
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4442/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4441/comments | https://api.github.com/repos/huggingface/datasets/issues/4441/events | https://github.com/huggingface/datasets/issues/4441 | 1,258,568,656 | I_kwDODunzps5LBDvQ | 4,441 | Dataset Viewer issue for aeslc | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Not sure what happened π¬, but it's fixed"
] | 2022-06-02T18:57:12 | 2022-06-07T18:50:55 | 2022-06-07T18:50:55 | MEMBER | null | ### Link
https://huggingface.co/datasets/aeslc
### Description
The dataset viewer can't find `dataset_infos.json` in it's cache:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/aeslc/eb8e30234cf984a58ebe9f205674597ac1db2ec91e7321cd7f36864f7e3671b8/dataset_infos.json'
```
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4441/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4440/comments | https://api.github.com/repos/huggingface/datasets/issues/4440/events | https://github.com/huggingface/datasets/pull/4440 | 1,258,494,469 | PR_kwDODunzps44_io_ | 4,440 | Update docs around audio and vision | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Let me know what you think, especially if we should include some code samples for training a model in the audio/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead?\r\n\r\nWe plan to address this with end-to-end examples (for each modality) more focused on preprocessing than the ones in the Transformers docs."
] | 2022-06-02T17:42:03 | 2022-06-23T16:33:19 | 2022-06-23T16:23:02 | MEMBER | null | As part of the strategy to center the docs around the different modalities, this PR updates the quickstart to include audio and vision examples. This improves the developer experience by making audio and vision content more discoverable, enabling users working in these modalities to also quickly get started without digging too deeply into the docs.
Other changes include:
- Moved the installation guide to the Get Started section because it should be part of a user's onboarding to the library before exploring tutorials or how-to's.
- Updated the native TF code at creating a `tf.data.Dataset` because it was throwing an error. The `to_tensor()` bit was redundant and removing it fixed the error (please double-check me here!).
- Added some UI components to the quickstart so it's easier for users to navigate directly to the relevant section with context about what to expect.
- Reverted to the code tabs for content that don't have any framework-specific text. I think this saves space compared to the code blocks. We'll still use the code blocks if the `torch` text is different from the `tf` text.
Let me know what you think, especially if we should include some code samples for training a model in the audio/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4440/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4440/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4440",
"html_url": "https://github.com/huggingface/datasets/pull/4440",
"diff_url": "https://github.com/huggingface/datasets/pull/4440.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4440.patch",
"merged_at": "2022-06-23T16:23:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4439/comments | https://api.github.com/repos/huggingface/datasets/issues/4439/events | https://github.com/huggingface/datasets/issues/4439 | 1,258,434,111 | I_kwDODunzps5LAi4_ | 4,439 | TIMIT won't load after manual download: Errors about files that don't exist | {
"login": "drscotthawley",
"id": 13925685,
"node_id": "MDQ6VXNlcjEzOTI1Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/13925685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drscotthawley",
"html_url": "https://github.com/drscotthawley",
"followers_url": "https://api.github.com/users/drscotthawley/followers",
"following_url": "https://api.github.com/users/drscotthawley/following{/other_user}",
"gists_url": "https://api.github.com/users/drscotthawley/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drscotthawley/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drscotthawley/subscriptions",
"organizations_url": "https://api.github.com/users/drscotthawley/orgs",
"repos_url": "https://api.github.com/users/drscotthawley/repos",
"events_url": "https://api.github.com/users/drscotthawley/events{/privacy}",
"received_events_url": "https://api.github.com/users/drscotthawley/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"To have some context, please see:\r\n- #4145\r\n\r\nPlease, also note that we have recently made some fixes to the script, which are in our GitHub master branch but not yet released:\r\n- #4422\r\n- #4425 \r\n- #4436",
"Thanks Albert! I'll try pulling `datasets` from the git repo instead of PyPI, and/or just wait for the next release.\r\n",
"I'm closing this issue then. Please, feel free to reopen it again if the problem persists."
] | 2022-06-02T16:35:56 | 2022-06-03T08:44:17 | 2022-06-03T08:44:16 | NONE | null | ## Describe the bug
I get the message from HuggingFace that it must be downloaded manually. From the URL provided in the message, I got to UPenn page for manual download. (UPenn apparently want $250? for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both cases the HF dataloader fails; it is looking for files that don't exist anywhere in the dataset: it is looking for files with lower-case letters like "**test*" (all the filenames in both my copies are uppercase) and certain file extensions that exclude the .DOC which is provided in TIMIT:
## Steps to reproduce the bug
```python
data = load_dataset('timit_asr', 'clean')['train']
```
## Expected results
The dataset should load with no errors.
## Actual results
This error message:
```
File "/home/ubuntu/envs/data2vec/lib/python3.9/site-packages/datasets/data_files.py", line 201, in resolve_patterns_locally_or_by_urls
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to resolve any data file that matches '['**test*', '**eval*']' at /home/ubuntu/datasets/timit with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```
But this is a strange sort of error: why is it looking for lower-case file names when all the TIMIT dataset filenames are uppercase? Why does it exclude .DOC files when the only parts of the TIMIT data set with "TEST" in them have ".DOC" extensions? ...I wonder, how was anyone able to get this to work in the first place?
The files in the dataset look like the following:
```
Β³ PHONCODE.DOC
Β³ PROMPTS.TXT
Β³ SPKRINFO.TXT
Β³ SPKRSENT.TXT
Β³ TESTSET.DOC
```
...so why are these being excluded by the dataset loader?
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-1060-aws-x86_64-with-glibc2.27
- Python version: 3.9.9
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4439/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4438/comments | https://api.github.com/repos/huggingface/datasets/issues/4438/events | https://github.com/huggingface/datasets/pull/4438 | 1,258,255,394 | PR_kwDODunzps44-vhC | 4,438 | Fix docstring of inspect_dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-02T14:21:10 | 2022-06-02T16:40:55 | 2022-06-02T16:32:27 | MEMBER | null | As pointed out by @sgugger:
- huggingface/doc-builder/issues/235 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4438/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4438",
"html_url": "https://github.com/huggingface/datasets/pull/4438",
"diff_url": "https://github.com/huggingface/datasets/pull/4438.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4438.patch",
"merged_at": "2022-06-02T16:32:27"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4437/comments | https://api.github.com/repos/huggingface/datasets/issues/4437/events | https://github.com/huggingface/datasets/pull/4437 | 1,258,249,582 | PR_kwDODunzps44-uRW | 4,437 | Add missing columns to `blended_skill_talk` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-02T14:16:26 | 2022-06-06T15:49:56 | 2022-06-06T15:41:25 | CONTRIBUTOR | null | Adds the missing columns to `blended_skill_talk` to align the loading logic with [ParlAI](https://github.com/facebookresearch/ParlAI/blob/main/parlai/tasks/blended_skill_talk/build.py).
Fix #4426 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4437/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4437",
"html_url": "https://github.com/huggingface/datasets/pull/4437",
"diff_url": "https://github.com/huggingface/datasets/pull/4437.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4437.patch",
"merged_at": "2022-06-06T15:41:25"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4436/comments | https://api.github.com/repos/huggingface/datasets/issues/4436/events | https://github.com/huggingface/datasets/pull/4436 | 1,257,758,834 | PR_kwDODunzps449FsU | 4,436 | Fix directory names for LDC data in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-02T06:45:04 | 2022-06-02T09:32:56 | 2022-06-02T09:24:27 | MEMBER | null | Related to:
- #4422 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4436/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4436",
"html_url": "https://github.com/huggingface/datasets/pull/4436",
"diff_url": "https://github.com/huggingface/datasets/pull/4436.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4436.patch",
"merged_at": "2022-06-02T09:24:27"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4435/comments | https://api.github.com/repos/huggingface/datasets/issues/4435/events | https://github.com/huggingface/datasets/issues/4435 | 1,257,496,552 | I_kwDODunzps5K89_o | 4,435 | Load a local cached dataset that has been modified | {
"login": "mihail911",
"id": 2789441,
"node_id": "MDQ6VXNlcjI3ODk0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2789441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mihail911",
"html_url": "https://github.com/mihail911",
"followers_url": "https://api.github.com/users/mihail911/followers",
"following_url": "https://api.github.com/users/mihail911/following{/other_user}",
"gists_url": "https://api.github.com/users/mihail911/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mihail911/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mihail911/subscriptions",
"organizations_url": "https://api.github.com/users/mihail911/orgs",
"repos_url": "https://api.github.com/users/mihail911/repos",
"events_url": "https://api.github.com/users/mihail911/events{/privacy}",
"received_events_url": "https://api.github.com/users/mihail911/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! `datasets` caches every modification/loading, so you can either rerun the pipeline up to the `map` call or use `Dataset.from_file(modified_dataset)` to load the dataset directly from the cache file.",
"Awesome, hvala Mario! This works. "
] | 2022-06-02T01:51:49 | 2022-06-02T23:59:26 | 2022-06-02T23:59:18 | NONE | null | ## Describe the bug
I have loaded a dataset as follows:
```
d = load_dataset("emotion", split="validation")
```
Afterwards I make some modifications to the dataset via a `map` call:
```
d.map(some_update_func, cache_file_name=modified_dataset)
```
This generates a cached version of the dataset on my local system in the same directory as the original download of the data (/path/to/cache). Running an `ls` returns:
```
modified_dataset
dataset_info.json
emotion-test.arrow
emotion-train.arrow
emotion-validation.arrow
```
as expected. However, when I try to load up the modified cached dataset via a call to
```
modified = load_dataset("emotion", split="validation", data_files="/path/to/cache/modified_dataset")
```
it simply redownloads a new version of the dataset and dumps to a new cache rather than loading up the original modified dataset:
```
Using custom data configuration validation-cdbf51685638421b
Downloading and preparing dataset emotion/validation to ...
```
How am I supposed to load the original modified local cache copy of the dataset?
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4435/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4434/comments | https://api.github.com/repos/huggingface/datasets/issues/4434/events | https://github.com/huggingface/datasets/pull/4434 | 1,256,207,321 | PR_kwDODunzps443mAr | 4,434 | Fix dummy dataset generation script for handling nested types of _URLs | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-06-01T14:53:15 | 2022-06-07T12:08:28 | 2022-06-07T09:24:09 | CONTRIBUTOR | null | It seems that when user specify nested _URLs structures in their dataset script. An error will be raised when generating dummy dataset.
I think the types of all elements in `dummy_data_dict.values()` should be checked because they may have different types.
Linked to issue #4428
PS: I am not sure whether my code fix this issue in a proper way. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4434/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4434",
"html_url": "https://github.com/huggingface/datasets/pull/4434",
"diff_url": "https://github.com/huggingface/datasets/pull/4434.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4434.patch",
"merged_at": "2022-06-07T09:24:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4433/comments | https://api.github.com/repos/huggingface/datasets/issues/4433/events | https://github.com/huggingface/datasets/pull/4433 | 1,255,830,758 | PR_kwDODunzps442P5L | 4,433 | Fix script fetching and local path handling in `inspect_dataset` and `inspect_metric` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Added back the `[:]` and a comment to explain why this is needed. "
] | 2022-06-01T12:09:56 | 2022-06-09T10:34:54 | 2022-06-09T10:26:07 | CONTRIBUTOR | null | Fix #4348 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4433/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4433/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4433",
"html_url": "https://github.com/huggingface/datasets/pull/4433",
"diff_url": "https://github.com/huggingface/datasets/pull/4433.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4433.patch",
"merged_at": "2022-06-09T10:26:06"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4432/comments | https://api.github.com/repos/huggingface/datasets/issues/4432/events | https://github.com/huggingface/datasets/pull/4432 | 1,255,523,720 | PR_kwDODunzps441JmK | 4,432 | Fix builder docstring | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-01T09:45:30 | 2022-06-02T17:43:47 | 2022-06-02T17:35:15 | MEMBER | null | Currently, the args of `DatasetBuilder` do not appear in the docs: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/builder_classes#datasets.DatasetBuilder | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4432/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4432",
"html_url": "https://github.com/huggingface/datasets/pull/4432",
"diff_url": "https://github.com/huggingface/datasets/pull/4432.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4432.patch",
"merged_at": "2022-06-02T17:35:15"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4431/comments | https://api.github.com/repos/huggingface/datasets/issues/4431/events | https://github.com/huggingface/datasets/pull/4431 | 1,254,618,948 | PR_kwDODunzps44x5aG | 4,431 | Add personaldialog datasets | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"These test errors are related to issue #4428 \r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"I only made a trivial modification in my commit https://github.com/huggingface/datasets/pull/4431/commits/402c893d35224d7828176717233909ac5f1e7b3e\r\n\r\nI have submitted a PR #4434 for the about issue.",
"> Awesome thanks for adding this dataset :)\r\n> \r\n> I just have one comment about the licensing.\r\n> \r\n> Also it seems that you already have the dataset in https://huggingface.co/datasets/silver/personal_dialog, so it's unnecessary to add it here\r\n\r\nThank you very much for your comment.\r\n\r\nSo, should I close this PR?",
"Thanks for fixing the licensing section :)\r\n\r\n> So, should I close this PR?\r\n\r\nYes you can close this PR, it's better if your dataset is under your namespace at https://huggingface.co/datasets/silver/personal_dialog :)\r\n\r\nDon't forget to update the licensing section on https://huggingface.co/datasets/silver/personal_dialog as well"
] | 2022-06-01T01:20:40 | 2022-06-11T12:40:23 | 2022-06-11T12:31:16 | CONTRIBUTOR | null | It seems that all tests are passed | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4431/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4431",
"html_url": "https://github.com/huggingface/datasets/pull/4431",
"diff_url": "https://github.com/huggingface/datasets/pull/4431.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4431.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4430 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4430/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4430/comments | https://api.github.com/repos/huggingface/datasets/issues/4430/events | https://github.com/huggingface/datasets/issues/4430 | 1,254,412,591 | I_kwDODunzps5KxNEv | 4,430 | Add ability to load newer, cleaner version of Multi-News | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! Our versioning is based on Git revisions (the `revision` param in `load_dataset`), so you can just replace the old URL with the new one and open a PR :). I can also give you some pointers if needed.",
"@mariosasko Awesome thanks! I will do that. Looks like this new version of the data is not available as a zip but as three files (train/dev/test). How is this usually handled in HF Datasets, should `_URL` be a dict with keys `train`, `val`, `test` perhaps?",
"Yes! Let me help you with more detailed instructions.\r\n\r\nIn the first step, we need to update the URLs. One of the possible dictionary structures is as follows:\r\n```python\r\n_URLs = {\r\n \"train\": {\"src\": \"https://drive.google.com/uc?export=download&id=1wHAWDOwOoQWSj7HYpyJ3Aeud8WhhaJ7P\", \"tgt\": \"https://drive.google.com/uc?export=download&id=1QVgswwhVTkd3VLCzajK6eVkcrSWEK6kq\"}\r\n \"val\": ...\r\n \"test\": ...\r\n}\r\n```\r\n\r\n(You can use this page to generate direct download links: https://sites.google.com/site/gdocs2direct/)\r\n\r\nThen we move to the `split_generators` method:\r\n```python\r\ndef _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n files = dl_manager.download(_URLs)\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\"src_file\": files[\"train\"][\"src\"], \"tgt_file\": files[\"train\"][\"tgt\"]},\r\n ),\r\n ... # same for val and test\r\n ]\r\n```\r\nFinally, we adjust the signature of `_generate_examples`:\r\n```python\r\ndef _generate_examples(self, src_file, tgt_file):\r\n \"\"\"Yields examples.\"\"\"\r\n with open(src_file, encoding=\"utf-8\") as src_f, open(\r\n tgt_file, encoding=\"utf-8\"\r\n ) as tgt_f:\r\n ... # the rest is the same\r\n```\r\n\r\nAnd that's it!\r\n\r\nPS: Let me know if you need help updating the dummy data and regenerating the metadata file.",
"Awesome! Thanks for the detailed help, that was straightforward with your instruction. However, I think I am being blocked by this issue: https://github.com/huggingface/datasets/issues/4428",
"Feel free to open a PR, and I can fix this manually.",
"Awsome, done in #4451!"
] | 2022-05-31T21:00:44 | 2022-06-07T17:14:44 | 2022-06-07T17:14:44 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
The [Multi-News dataloader points to the original version of the Multi-News dataset](https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/datasets/multi_news/multi_news.py#L47), but this has [known errors in it](https://github.com/Alex-Fabbri/Multi-News/issues/11). There exists a [newer version which fixes some of these issues](https://drive.google.com/open?id=1jwBzXBVv8sfnFrlzPnSUBHEEAbpIUnFq).
Unfortunately I don't think you can just replace this old URL with the new one, otherwise this could lead to issues with reproducibility.
**Describe the solution you'd like**
Add a new version to the Multi-News dataloader that points to the updated dataset which has fixes for some known issues.
**Describe alternatives you've considered**
Replace the current URL to the original version to the dataset with the URL to the version with fixes.
**Additional context**
Would be happy to make a PR for this, could someone maybe point me to another dataloader that has multiple versions so I can see how this is handled in `datasets`?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4430/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4429/comments | https://api.github.com/repos/huggingface/datasets/issues/4429/events | https://github.com/huggingface/datasets/pull/4429 | 1,254,184,358 | PR_kwDODunzps44whxN | 4,429 | Update builder docstring for deprecated/added arguments | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mishig25 is investigating why deprecated/added do not affect the enclosed text format when used in args docstring: no special formatting appears: \r\n- https://moon-ci-docs.huggingface.co/docs/datasets/pr_4429/en/package_reference/builder_classes#datasets.DatasetBuilder",
"@albertvillanova please check now π \r\nhttps://moon-ci-docs.huggingface.co/docs/datasets/pr_4429/en/package_reference/builder_classes#datasets.DatasetBuilder\r\n\r\n<img width=\"500\" alt=\"Screenshot 2022-06-06 at 10 20 34\" src=\"https://user-images.githubusercontent.com/11827707/172123471-fab97138-c903-4a71-ab7f-c90e5e43c58f.png\">\r\n",
"Thanks @mishig25.\r\n\r\nJust one question: is it expected to have the deprecated box right edge not filling all the page width (contrary to the added box)?",
"> Just one question: is it expected to have the deprecated box right edge not filling all the page width (contrary to the added box)?\r\n\r\nYes, that is expected π because the depreacted box is being bounded by its parent box (the box for `name` argument in the screenshot above)"
] | 2022-05-31T17:37:25 | 2022-06-08T11:40:18 | 2022-06-08T11:31:45 | MEMBER | null | This PR updates the builder docstring with deprecated/added directives for arguments name/config_name.
Follow up of:
- #4414
- huggingface/doc-builder#233
First merge:
- #4432 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4429/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4429",
"html_url": "https://github.com/huggingface/datasets/pull/4429",
"diff_url": "https://github.com/huggingface/datasets/pull/4429.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4429.patch",
"merged_at": "2022-06-08T11:31:45"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4428/comments | https://api.github.com/repos/huggingface/datasets/issues/4428/events | https://github.com/huggingface/datasets/issues/4428 | 1,254,092,818 | I_kwDODunzps5Kv_AS | 4,428 | Errors when building dummy data if you use nested _URLS | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 2022-05-31T16:10:57 | 2022-06-07T09:24:09 | 2022-06-07T09:24:09 | CONTRIBUTOR | null | ## Describe the bug
When making dummy data with the `datasets-cli dummy_data` tool,
an error will be raised if you use a nested _URLS in your dataset script.
Traceback (most recent call last):
File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 43, in <module>
main()
File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 311, in run
self._autogenerate_dummy_data(
File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 337, in _autogenerate_dummy_data
dataset_builder._split_generators(dl_manager)
File "/home/name/.cache/huggingface/modules/datasets_modules/datasets/personal_dialog/559332bced5eeafa7f7efc2a7c10ce02cee2a8116bbab4611c35a50ba2715b77/personal_dialog.py", line 108, in _split_generators
data_dir = dl_manager.download_and_extract(urls)
File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 56, in download_and_extract
dummy_output = self.mock_download_manager.download(url_or_urls)
File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 130, in download
return self.download_and_extract(data_url)
File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 122, in download_and_extract
return self.create_dummy_data_dict(dummy_file, data_url)
File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 165, in create_dummy_data_dict
if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
TypeError: unhashable type: 'list'
## Steps to reproduce the bug
You can use my dataset script implemented here:
https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py
```python
datasets_cli dummy_data datasets/personal_dialog --auto_generate
```
You can change https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py#L54
to
```
"train": "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_random.jsonl.gz"
```
before runing the above script to avoid downloading a large training data.
## Expected results
The dummy data should be generated
## Actual results
An error is raised.
It seems that in https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165
We only check if the first item of dummy_data_dict.values() is str.
However, dummy_data_dict.values() may have the type of [str, list, list].
A simple fix would be changing https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 to
```python
if all([isinstance(value, str) for value in dummy_data_dict.values()]) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
```
But I don't know if this kinds of change may bring any side effect since I am not sure about the detail logic here.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux
- Python version: Python 3.9.10
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4428/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4427/comments | https://api.github.com/repos/huggingface/datasets/issues/4427/events | https://github.com/huggingface/datasets/pull/4427 | 1,253,959,313 | PR_kwDODunzps44vyGg | 4,427 | Add HF.co for PRs/Issues for specific datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-31T14:31:21 | 2022-06-01T12:37:42 | 2022-06-01T12:29:02 | MEMBER | null | As in https://github.com/huggingface/transformers/pull/17485, issues and PR for datasets under a namespace have to be on the HF Hub | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4427/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4427",
"html_url": "https://github.com/huggingface/datasets/pull/4427",
"diff_url": "https://github.com/huggingface/datasets/pull/4427.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4427.patch",
"merged_at": "2022-06-01T12:29:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4426/comments | https://api.github.com/repos/huggingface/datasets/issues/4426/events | https://github.com/huggingface/datasets/issues/4426 | 1,253,887,311 | I_kwDODunzps5KvM1P | 4,426 | Add loading variable number of columns for different splits | {
"login": "DrMatters",
"id": 22641583,
"node_id": "MDQ6VXNlcjIyNjQxNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrMatters",
"html_url": "https://github.com/DrMatters",
"followers_url": "https://api.github.com/users/DrMatters/followers",
"following_url": "https://api.github.com/users/DrMatters/following{/other_user}",
"gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions",
"organizations_url": "https://api.github.com/users/DrMatters/orgs",
"repos_url": "https://api.github.com/users/DrMatters/repos",
"events_url": "https://api.github.com/users/DrMatters/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrMatters/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! Indeed the column is missing, but you shouldn't get an error? Have you made some modifications (locally) to the loading script? I've opened a PR to add the missing columns to the script. "
] | 2022-05-31T13:40:16 | 2022-06-03T16:25:25 | 2022-06-03T16:25:25 | NONE | null | **Is your feature request related to a problem? Please describe.**
The original dataset `blended_skill_talk` consists of different sets of columns for the different splits: (test/valid) splits have additional data column `label_candidates` that the (train) doesn't have.
When loading such data, an exception occurs at table.py:cast_table_to_schema, because of mismatched columns. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4426/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4425/comments | https://api.github.com/repos/huggingface/datasets/issues/4425/events | https://github.com/huggingface/datasets/pull/4425 | 1,253,641,604 | PR_kwDODunzps44uuDq | 4,425 | Make extensions case-insensitive in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-31T10:10:04 | 2022-06-01T14:15:30 | 2022-06-01T14:06:51 | MEMBER | null | Related to #4422. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4425/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4425",
"html_url": "https://github.com/huggingface/datasets/pull/4425",
"diff_url": "https://github.com/huggingface/datasets/pull/4425.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4425.patch",
"merged_at": "2022-06-01T14:06:51"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4424/comments | https://api.github.com/repos/huggingface/datasets/issues/4424/events | https://github.com/huggingface/datasets/pull/4424 | 1,253,542,488 | PR_kwDODunzps44uZBD | 4,424 | Fix DuplicatedKeysError in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-31T08:47:45 | 2022-05-31T13:50:50 | 2022-05-31T13:42:31 | MEMBER | null | Fix #4422. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4424/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4424",
"html_url": "https://github.com/huggingface/datasets/pull/4424",
"diff_url": "https://github.com/huggingface/datasets/pull/4424.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4424.patch",
"merged_at": "2022-05-31T13:42:31"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4423 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4423/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4423/comments | https://api.github.com/repos/huggingface/datasets/issues/4423/events | https://github.com/huggingface/datasets/pull/4423 | 1,253,326,023 | PR_kwDODunzps44trdP | 4,423 | Add new dataset MMChat | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks ! As for https://github.com/huggingface/datasets/pull/4431 please also update the licensing section in https://huggingface.co/datasets/silver/mmchat ;)\r\n\r\nThen if it's fine for you feel free to close this PR"
] | 2022-05-31T04:45:07 | 2022-06-11T12:40:52 | 2022-06-11T12:31:42 | CONTRIBUTOR | null | Hi, I am adding a new dataset MMChat.
It seems that all tests are passed | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4423/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4423",
"html_url": "https://github.com/huggingface/datasets/pull/4423",
"diff_url": "https://github.com/huggingface/datasets/pull/4423.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4423.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4422/comments | https://api.github.com/repos/huggingface/datasets/issues/4422/events | https://github.com/huggingface/datasets/issues/4422 | 1,253,146,511 | I_kwDODunzps5KsX-P | 4,422 | Cannot load timit_asr data set | {
"login": "bhaddow",
"id": 992795,
"node_id": "MDQ6VXNlcjk5Mjc5NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/992795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhaddow",
"html_url": "https://github.com/bhaddow",
"followers_url": "https://api.github.com/users/bhaddow/followers",
"following_url": "https://api.github.com/users/bhaddow/following{/other_user}",
"gists_url": "https://api.github.com/users/bhaddow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhaddow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhaddow/subscriptions",
"organizations_url": "https://api.github.com/users/bhaddow/orgs",
"repos_url": "https://api.github.com/users/bhaddow/repos",
"events_url": "https://api.github.com/users/bhaddow/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhaddow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @bhaddow.\r\n\r\nI'm fixing it.",
"Thanks for the quick fix!",
"@bhaddow we have also made a fix so that you don't have to convert to uppercase the file extensions of the LDC data.\r\n\r\nWould you mind checking if it works OK now for you and reporting if there are any issues? Thanks. ",
"Hi @albertvillanova -It loads fine on a copy of the data from deepai - although I have to remove the copies of the .WAV files (with extension .WAV,wav). On a copy of the data that was obtained from the LDC, the glob still fails to find the files. The LDC copy looks like it was copied from CD, in 2004, so the structure may be different to a current download.",
"Ah, if I change the train/ and test/ directories to TRAIN/ and TEST/ then it works!",
"Thanks for your investigation and report, @bhaddow. I'm adding another fix for the TRAIN/train and TEST/test directory names."
] | 2022-05-30T22:00:22 | 2022-06-02T06:34:05 | 2022-05-31T13:42:31 | NONE | null | ## Describe the bug
I am trying to load the timit_asr data set. I have tried with a copy from the LDC, and a copy from deepai. In both cases they fail with a "duplicate key" error. With the LDC version I have to convert the file extensions all to upper-case before I can load it at all.
## Steps to reproduce the bug
```python
timit = datasets.load_dataset("timit_asr", data_dir = "/path/to/dataset")
# Sample code to reproduce the bug
```
## Expected results
The data set should load without error. It worked for me before the LDC url change.
## Actual results
```
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: SA1
Keys should be unique and deterministic in nature
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4422/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4421/comments | https://api.github.com/repos/huggingface/datasets/issues/4421/events | https://github.com/huggingface/datasets/pull/4421 | 1,253,059,467 | PR_kwDODunzps44szxR | 4,421 | Add extractor for bzip2-compressed files | {
"login": "asivokon",
"id": 2910707,
"node_id": "MDQ6VXNlcjI5MTA3MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2910707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asivokon",
"html_url": "https://github.com/asivokon",
"followers_url": "https://api.github.com/users/asivokon/followers",
"following_url": "https://api.github.com/users/asivokon/following{/other_user}",
"gists_url": "https://api.github.com/users/asivokon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asivokon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asivokon/subscriptions",
"organizations_url": "https://api.github.com/users/asivokon/orgs",
"repos_url": "https://api.github.com/users/asivokon/repos",
"events_url": "https://api.github.com/users/asivokon/events{/privacy}",
"received_events_url": "https://api.github.com/users/asivokon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-30T19:19:40 | 2022-06-06T15:22:50 | 2022-06-06T15:22:50 | CONTRIBUTOR | null | This change enables loading bzipped datasets, just like any other compressed dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4421/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4421",
"html_url": "https://github.com/huggingface/datasets/pull/4421",
"diff_url": "https://github.com/huggingface/datasets/pull/4421.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4421.patch",
"merged_at": "2022-06-06T15:22:49"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4420/comments | https://api.github.com/repos/huggingface/datasets/issues/4420/events | https://github.com/huggingface/datasets/issues/4420 | 1,252,739,239 | I_kwDODunzps5Kq0in | 4,420 | Metric evaluation problems in multi-node, shared file system | {
"login": "gullabi",
"id": 40303490,
"node_id": "MDQ6VXNlcjQwMzAzNDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/40303490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gullabi",
"html_url": "https://github.com/gullabi",
"followers_url": "https://api.github.com/users/gullabi/followers",
"following_url": "https://api.github.com/users/gullabi/following{/other_user}",
"gists_url": "https://api.github.com/users/gullabi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gullabi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gullabi/subscriptions",
"organizations_url": "https://api.github.com/users/gullabi/orgs",
"repos_url": "https://api.github.com/users/gullabi/repos",
"events_url": "https://api.github.com/users/gullabi/events{/privacy}",
"received_events_url": "https://api.github.com/users/gullabi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"If you call `metric.compute` in a distributed setup like yours, then `metric.compute` is called in each process. `metric.compute` first calls `metric.add_batch`, and it looks like your error appears at that stage.\r\n\r\nTo make sure that all the processes have started writing their predictions/references at the same time, each process waits for process 0 to lock `slurm-{world_size}-0.arrow.lock`. Process 0 locks this file when `metric.add_batch` is called, so here when `metric.compute` is called.\r\n\r\nTherefore your error can happen when process 0 takes too much time to call `metric.compute` compared to process 3 (>100 seconds by default). I haven't tried running your code but could it be the case ?\r\n\r\nI guess it could also happen if you run multiple times the same distributed job at the same time with the same `experiment_id` because they would collide.\r\n",
"We've finally been able to isolate the problem, it wasn't a timing problem, but rather a file locking one. \r\nThe locks produced by calling `flock` where not visible between nodes (so the master node couldn't check other node's locks nor the other way around). \r\n\r\nWe are now having issues with the pre-processing in our runner script, but are not related with the rendezvous process during the evaluation phase. We will let you know about it once we address it. \r\n\r\nOur solution to the rendezvous is as follows:\r\n- We solved the problem by calling `lockf` instead of `flock`.\r\n- We had to change slightly the `_check_all_processes_locks` method so that the main process (i.e. process 0) didn't check it's own lock (because `lockf` permits recursive locks and thus checking it only replaced the current lock with a new one). \r\n\r\nWe use a shared file system between nodes using GPFS in our cluster setup. Maybe the difference between the behavior we see with respect to your usage in multi-node executions comes from that fact. Which file system scheme do you use for the multi-node executions? \r\n\r\n`lockf` seems to work in more settings than `flock`, so maybe we could write a PR so you could test it in your environment. ",
"Cool, I'm glad you managed to make evaluation work :)\r\n\r\nI'm not completely aware of the differences between lockf and flock, but I've read somewhere that flock is preferable over lockf in multithreading and multiprocessing situations. Here we definitely are in such a situation so unless it is super important I don't think we will switch to lockf",
"> * We had to change slightly the `_check_all_processes_locks` method so that the main process (i.e. process 0) didn't check it's own lock (because `lockf` permits recursive locks and thus checking it only replaced the current lock with a new one).\r\n\r\nHi @panserbjorn , Can you share your `_check_all_processes_locks` function? thanks!",
"```\r\ndef _check_all_processes_locks(self):\r\n expected_lock_file_names = [\r\n os.path.join(self.data_dir, f\"{self.experiment_id}-{self.num_process}-{process_id}.arrow.lock\")\r\n for process_id in range(self.num_process)\r\n ]\r\n #for expected_lock_file_name in expected_lock_file_names: # OUR CHANGE process 0 shouldn't check its own lock\r\n for expected_lock_file_name in expected_lock_file_names[1:]:\r\n nofilelock = FileFreeLock(expected_lock_file_name)\r\n try:\r\n nofilelock.acquire(timeout=self.timeout)\r\n except Timeout:\r\n raise ValueError(\r\n f\"Expected to find locked file {expected_lock_file_name} from process {self.process_id} but it doesn't exist.\"\r\n )\r\n else:\r\n nofilelock.release()\r\n```\r\n\r\n### Changed files:\r\n- metric.py file in the datasets library \r\n- filelock.py file in the datasets/utils library. \r\n\r\n\r\nChanges we made:\r\n\r\n1. We changed the flock for lockf \r\n flock and lockf both perform a lock over a file (like the lock for writing). \r\n The difference is that flock only works in local file systems, but if you have a shared file system (like what we have in the clusters) the flock fails to βseeβ the lock of another node. The only disadvantage we had was that a single process couldnβt detect itβs own lock so we did the second change.\r\n2. We prevented the process 0 (which is the one that coordinates the rendezvous) from checking its own lock on its arrow because it didn't work with lockf (as stated in the previous change). \r\n3. We made a second rendezvous so that all the process had the results of the metrics (other than the loss) and not only the process 0.\r\n What happened was that only process 0 computed the metric and that didnβt present any problem if you are using the loss. However, if you are using another metric, the only process which had the information to choose the best checkpoint at evaluation time was the process 0. But since the evaluation was performed over all processes, every process except the process 0 chose a bad check point (bad meaning it wasnβt the best one) because they didnβt have the information of the metric of the best checkpoint. \r\n The consequence was that the evaluation was different from what would result if using only the best checkpoint, because each process chose a different checkpoint to run the evaluation and thus the numbers were often worse than the numbers that would be obtained if all processes choose the best checkpoint (correct one) to perform the evaluation of their samples. \r\n We performed a second rendezvous so that all processes had the same best_metric and best_model as process 0 after the evaluation cycle. \r\n",
"Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate"
] | 2022-05-30T13:24:05 | 2023-07-11T09:33:18 | 2023-07-11T09:33:17 | NONE | null | ## Describe the bug
Metric evaluation fails in multi-node within a shared file system, because the master process cannot find the lock files from other nodes. (This issue was originally mentioned in the transformers repo https://github.com/huggingface/transformers/issues/17412)
## Steps to reproduce the bug
1. clone [this huggingface model](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm) and replace the `run_speech_recognition_ctc.py` script with the version in the gist [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71#file-run_speech_recognition_ctc-py).
2. Setup the `venv` according to the requirements of the model file plus `datasets==2.0.0`, `transformers==4.18.0` and `torch==1.9.0`
3. Launch the runner in a distributed environment which has a shared file system for two nodes, preferably with SLURM. Example [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71)
Specifically for the datasets, for the distributed setup the `load_metric` is called as:
```
process_id=int(os.environ["RANK"])
num_process=int(os.environ["WORLD_SIZE"])
eval_metrics = {metric: load_metric(metric,
process_id=process_id,
num_process=num_process,
experiment_id="slurm")
for metric in data_args.eval_metrics}
```
## Expected results
The training should not fail, due to the failure of the `Metric.compute()` step.
## Actual results
For the test I am executing the world size is 4, with 2 GPUs in 2 nodes. However the process is not finding the necessary lock files
```
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 841, in <module>
main()
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 792, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1497, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1624, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2291, in evaluate
metric_key_prefix=metric_key_prefix,
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2535, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in compute_metrics
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in <dictcomp>
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 419, in compute
self.add_batch(**inputs)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 465, in add_batch
self._init_writer()
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 552, in _init_writer
self._check_rendez_vous() # wait for master to be ready and to let everyone go
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 342, in _check_rendez_vous
) from None
ValueError: Expected to find locked file /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock from process 3 but it doesn't exist.
```
When I look at the cache directory, I can see all the lock files in principle:
```
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-rdv.lock
```
I see that there was another related issue here https://github.com/huggingface/datasets/issues/1942, but it seems to have resolved via https://github.com/huggingface/datasets/pull/1966. Let me know if there is problem with how I am calling the `load_metric` or whether I need to make changes to the `.compute()` steps.
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-4.18.0-147.8.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core
- Python version: 3.7.4
- PyArrow version: 7.0.0
- Pandas version: 1.3.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4420/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4419/comments | https://api.github.com/repos/huggingface/datasets/issues/4419/events | https://github.com/huggingface/datasets/issues/4419 | 1,252,652,896 | I_kwDODunzps5Kqfdg | 4,419 | Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! If the only goal is to improve readability, it's better to use `assertTupleEqual` than `assertSequenceEqual` for Python tuples. Also, note that this function is called internally by `assertEqual`, but I guess we can accept a PR to be more verbose.",
"Hi @mariosasko, right! I'll update the issue title/desc with `assertTupleEqual` even though as you said it seems to be internally using `assertEqual` so I'm not sure whether it's worth it or not...\r\n\r\nhttps://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual",
"I thought we were supposed to move gradually from `unittest` to `pytest`..."
] | 2022-05-30T12:13:18 | 2022-09-30T16:01:37 | 2022-09-30T16:01:37 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
So this is more a readability improvement rather than a proposal, wouldn't it be better to use `assertTupleEqual` over the tuples rather than `assertEqual`? As `unittest` added that function in `v3.1`, as detailed at https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual, so maybe it's worth updating.
Find an example of an `assertEqual` over a tuple in π€ `datasets` unit tests over an `ArrowDataset` at https://github.com/huggingface/datasets/blob/0bb47271910c8a0b628dba157988372307fca1d2/tests/test_arrow_dataset.py#L570
**Describe the solution you'd like**
Start slowly replacing all the `assertEqual` statements with `assertTupleEqual` if the assertion is done over a Python tuple, as we're doing with the Python lists using `assertListEqual` rather than `assertEqual`.
**Additional context**
If so, please let me know and I'll try to go over the tests and create a PR if applicable, otherwise, if you consider this should stay as `assertEqual` rather than `assertSequenceEqual` feel free to close this issue! Thanks π€
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4419/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4418/comments | https://api.github.com/repos/huggingface/datasets/issues/4418/events | https://github.com/huggingface/datasets/pull/4418 | 1,252,506,268 | PR_kwDODunzps44q9pG | 4,418 | Add dataset MMChat | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-30T10:10:40 | 2022-05-30T14:58:18 | 2022-05-30T14:58:18 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4418/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4418",
"html_url": "https://github.com/huggingface/datasets/pull/4418",
"diff_url": "https://github.com/huggingface/datasets/pull/4418.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4418.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4417/comments | https://api.github.com/repos/huggingface/datasets/issues/4417/events | https://github.com/huggingface/datasets/issues/4417 | 1,251,933,091 | I_kwDODunzps5Knvuj | 4,417 | how to convert a dict generator into a huggingface dataset. | {
"login": "StephennFernandes",
"id": 32235549,
"node_id": "MDQ6VXNlcjMyMjM1NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/32235549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StephennFernandes",
"html_url": "https://github.com/StephennFernandes",
"followers_url": "https://api.github.com/users/StephennFernandes/followers",
"following_url": "https://api.github.com/users/StephennFernandes/following{/other_user}",
"gists_url": "https://api.github.com/users/StephennFernandes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StephennFernandes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StephennFernandes/subscriptions",
"organizations_url": "https://api.github.com/users/StephennFernandes/orgs",
"repos_url": "https://api.github.com/users/StephennFernandes/repos",
"events_url": "https://api.github.com/users/StephennFernandes/events{/privacy}",
"received_events_url": "https://api.github.com/users/StephennFernandes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@albertvillanova @lhoestq , could you please help me on this issue. ",
"Hi ! As mentioned on the [forum](https://discuss.huggingface.co/t/how-to-wrap-a-generator-with-hf-dataset/18464), the simplest for now would be to define a [dataset script](https://huggingface.co/docs/datasets/dataset_script) which can contain your generator. But we can also explore adding something like `ds = Dataset.from_iterable(seqio_dataset)`",
"@lhoestq , hey i did as you instructed, but sadly i cannot get pass through the download_manager, as i dont have anything to download. i was skipping the ` def _split_generators(self, dl_manager):` function. but i cannot get around it. I get a `NotImplementedError: `\r\n\r\nthe following is my code for the same: \r\n\r\n\r\n\r\n```\r\nimport datasets \r\nimport functools\r\nimport glob \r\nfrom datasets import load_from_disk\r\nimport seqio\r\nimport tensorflow as tf\r\nimport t5.data\r\nfrom datasets import load_dataset\r\nfrom t5.data import postprocessors\r\nfrom t5.data import preprocessors\r\nfrom t5.evaluation import metrics\r\nfrom seqio import FunctionDataSource, utils\r\n\r\nTaskRegistry = seqio.TaskRegistry\r\n\r\ndata_path = glob.glob(\"/home/stephen/Desktop/MEGA_CORPUS/COMBINED_CORPUS/*\", recursive=False)\r\n\r\n\r\ndef gen_dataset(split, shuffle=False, seed=None, column=\"text\", dataset_path=None):\r\n dataset = load_from_disk(dataset_path)\r\n if shuffle:\r\n if seed:\r\n dataset = dataset.shuffle(seed=seed)\r\n else:\r\n dataset = dataset.shuffle()\r\n while True:\r\n for item in dataset[str(split)]:\r\n yield item[column]\r\n\r\n\r\ndef dataset_fn(split, shuffle_files, seed=None, dataset_path=None):\r\n return tf.data.Dataset.from_generator(\r\n functools.partial(gen_dataset, split, shuffle_files, seed, dataset_path=dataset_path),\r\n output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_path)\r\n )\r\n\r\[email protected]_over_dataset\r\ndef target_to_key(x, key_map, target_key):\r\n \"\"\"Assign the value from the dataset to target_key in key_map\"\"\"\r\n return {**key_map, target_key: x}\r\n\r\n\r\n_CITATION = \"Not ready yet\"\r\n_DESCRIPTION = \"a custom seqio based mixed samples on a given temperature value, that again returns a dataset in HF dataset format well samples on the Mixture temperature\"\r\n_HOMEPAGE = \"ldcil.org\"\r\n\r\nclass CustomSeqio(datasets.GeneratorBasedBuilder):\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"text\": datasets.Value(\"string\"),\r\n }\r\n ),\r\n homepage=\"https://ldcil.org\",\r\n citation=_CITATION,)\r\n\r\ndef generate_examples(self):\r\n seqio_train_list = []\r\n for lang in data_path:\r\n dataset_name = lang.split(\"/\")[-1]\r\n dataset_shapes = None \r\n\r\n TaskRegistry.add(\r\n str(dataset_name),\r\n source=seqio.FunctionDataSource(\r\n dataset_fn=functools.partial(dataset_fn, dataset_path=lang),\r\n splits=(\"train\", \"test\"),\r\n caching_permitted=False,\r\n num_input_examples=dataset_shapes,\r\n ),\r\n preprocessors=[\r\n functools.partial(\r\n target_to_key, key_map={\r\n \"targets\": None,\r\n }, target_key=\"targets\")],\r\n output_features={\"targets\": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},\r\n metric_fns=[]\r\n )\r\n\r\n seqio_train_dataset = seqio.get_mixture_or_task(dataset_name).get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n seqio_train_list.append(seqio_train_dataset)\r\n \r\n lang_name_list = []\r\n for lang in data_path:\r\n lang_name = lang.split(\"/\")[-1]\r\n lang_name_list.append(lang_name)\r\n\r\n seqio_mixture = seqio.MixtureRegistry.add(\r\n \"seqio_mixture\",\r\n lang_name_list,\r\n default_rate=0.7)\r\n \r\n seqio_mixture_dataset = seqio.get_mixture_or_task(\"seqio_mixture\").get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n\r\n for id, ex in enumerate(seqio_mixture_dataset):\r\n yield id, {\"text\": ex[\"targets\"].numpy().decode()}\r\n```\r\n\r\nand i load it by:\r\n\r\n`seqio_mixture = load_dataset(\"seqio_loader\")`",
"@lhoestq , just to make things clear ... \r\n\r\nthe following is my original code, thats not in the HF dataset loading script: \r\n\r\n```\r\nimport functools\r\nimport seqio\r\nimport tensorflow as tf\r\nimport t5.data\r\nfrom datasets import load_from_disk\r\nfrom t5.data import postprocessors\r\nfrom t5.data import preprocessors\r\nfrom t5.evaluation import metrics\r\nfrom seqio import FunctionDataSource, utils\r\nimport glob \r\n\r\nTaskRegistry = seqio.TaskRegistry\r\n\r\n\r\n\r\ndef gen_dataset(split, shuffle=False, seed=None, column=\"text\", dataset_path=None):\r\n dataset = load_from_disk(dataset_path)\r\n if shuffle:\r\n if seed:\r\n dataset = dataset.shuffle(seed=seed)\r\n else:\r\n dataset = dataset.shuffle()\r\n while True:\r\n for item in dataset[str(split)]:\r\n yield item[column]\r\n\r\n\r\ndef dataset_fn(split, shuffle_files, seed=None, dataset_path=None):\r\n return tf.data.Dataset.from_generator(\r\n functools.partial(gen_dataset, split, shuffle_files, seed, dataset_path=dataset_path),\r\n output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_path)\r\n )\r\n\r\n\r\[email protected]_over_dataset\r\ndef target_to_key(x, key_map, target_key):\r\n \"\"\"Assign the value from the dataset to target_key in key_map\"\"\"\r\n return {**key_map, target_key: x}\r\n\r\ndata_path = glob.glob(\"/home/stephen/Desktop/MEGA_CORPUS/COMBINED_CORPUS/*\", recursive=False)\r\n\r\nseqio_train_list = []\r\n\r\nfor lang in data_path:\r\n dataset_name = lang.split(\"/\")[-1]\r\n dataset_shapes = None \r\n\r\n TaskRegistry.add(\r\n str(dataset_name),\r\n source=seqio.FunctionDataSource(\r\n dataset_fn=functools.partial(dataset_fn, dataset_path=lang),\r\n splits=(\"train\", \"test\"),\r\n caching_permitted=False,\r\n num_input_examples=dataset_shapes,\r\n ),\r\n preprocessors=[\r\n functools.partial(\r\n target_to_key, key_map={\r\n \"targets\": None,\r\n }, target_key=\"targets\")],\r\n output_features={\"targets\": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},\r\n metric_fns=[]\r\n )\r\n\r\n seqio_train_dataset = seqio.get_mixture_or_task(dataset_name).get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n seqio_train_list.append(seqio_train_dataset)\r\n\r\nlang_name_list = []\r\nfor lang in data_path:\r\n lang_name = lang.split(\"/\")[-1]\r\n lang_name_list.append(lang_name)\r\n\r\nseqio_mixture = seqio.MixtureRegistry.add(\r\n \"seqio_mixture\",\r\n lang_name_list,\r\n default_rate=0.7\r\n)\r\n\r\nseqio_mixture_dataset = seqio.get_mixture_or_task(\"seqio_mixture\").get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n\r\nfor _, ex in zip(range(15), seqio_mixture_dataset):\r\n print(ex[\"targets\"].numpy().decode())\r\n```\r\n\r\nwhere the seqio_mixture_dataset is the generator that i wanted to be wrapped in HF dataset. \r\n\r\nalso additionally, could you please tell me how do i set the `default_rate=0.7` args where `seqio_mixture` is defined to be made as a custom option in the HF load_dataset() method,\r\n\r\nmaybe like this: \r\n`seqio_mixture_dataset = datasets.load_dataset(\"seqio_loader\",temperature=0.5)`",
"I like the idea of having `Dataset.from_iterable(iterable)` in the API. The only problem is that we also want to make this part cachable, which is tricky if `iterable` is a generator. \r\n\r\nSome resources on this issue:\r\n* https://github.com/uqfoundation/dill/issues/311\r\n* https://stackoverflow.com/questions/7180212/why-cant-generators-be-pickled\r\n* https://github.com/tonyroberts/generator_tools - python package for pickling generators; pickles bytecode, so it creates version-specific dumps",
"For the caching maybe we can have `Dataset.from_generator` as TF and pickle+hash the generator function (not the generator object itself) ?\r\n\r\nAnd then keep `Dataset.from_iterable` fo pickable objects like lists",
"@lhoestq, @mariosasko do you too have any examples where the dataset is a generator and needs to be wrapped into hf dataset ? ",
"@lhoestq, following to my previous question ... what possibly could be done in this [link1](https://github.com/huggingface/datasets/issues/4417#issuecomment-1146627404) [link2](https://github.com/huggingface/datasets/issues/4417#issuecomment-1146627593) case? do you have any ideas? ",
"@lhoestq +1 for the `Dataset.from_generator` idea.\r\n\r\nHaving thought about it, let's avoid adding `Dataset.from_iterable` to the API since dictionaries are technically iteralbles (\"iterable\" is a broad term in Python), and we already provide `Dataset.from_dict`. And for lists maybe we can add `Dataset.from_list` similar to `pa.Table.from_pylist`. WDYT?\r\n",
"Hi @StephennFernandes!\r\n\r\nTo fix the issues in the copied code, rename `generate_examples` to` _generate_examples` and add one level of indentation as this is a method of `GeneratorBasedBuilder` and define `_split_generators` as follows (again as a method of `GeneratorBasedBuilder):\r\n```python\r\n def _split_generators(self, dl_manager):\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={},\r\n ),\r\n ]\r\n```\r\n\r\nAnd if you are feeling extra adventurous, you can try to use ArrowWriter to directly create a cache file:\r\n```python\r\nfrom datasets import Dataset\r\nfrom datasets.arrow_writer import ArrowWriter\r\n\r\nwriter = ArrowWriter(path=\"path/to/cache_file.arrow\", writer_batch_size=1000)\r\n\r\nwith writer:\r\n for ex in generator:\r\n writer.write(ex) \r\n writer.finalize()\r\n\r\ndset = Dataset.from_file(\"path/to/cache_file.arrow\")\r\n```\r\n\r\n",
"I have a problem which I think is very similar: I would like to \"stream\" data to a HF Array (memory-mapped) Dataset, where the final size of the dataset is unknown, but could be much larger than what fits into memory.\r\nWhat I want to end up with is an Array Dataset which I can open using `Dataset.load_from_disk(dataset_path=\"somename\")` and use e.g. as the training set. \r\n\r\nFor this I would have thought there should be an API which allows me to open/create the dataset (and define the features etc), then write examples to the dataset, but I could not find a way to do this. \r\n\r\nI tried doing this and it looks like it works, but it feels very hacky and I am not sure if this might fail to update some of the fields in the json files which may turn out to be important:\r\n```\r\nfrom datasets import Dataset, Features, ClassLabel, Sequence, Value\r\nfrom datasets.arrow_writer import ArrowWriter \r\n# 1) define the features\r\nfeatures = Features(dict(\r\n id=Value(dtype=\"string\"),\r\n tokens=Sequence(feature=Value(dtype=\"string\")),\r\n ner_tags=Sequence(feature=ClassLabel(names=['O', 'B-corporation', 'I-corporation', 'B-creative-work', 'I-creative-work', 'B-group', 'I-group', 'B-location', 'I-location', 'B-person', 'I-person', 'B-product', 'I-product'])),\r\n))\r\n# 2) create empty dataset for examples with these features and store to disk\r\nempty = dict(\r\n id = [],\r\n tokens = [],\r\n ner_tags = [],\r\n)\r\nds = Dataset.from_dict(empty, features=features)\r\nds.save_to_disk(dataset_path=\"debug_ds1\")\r\n\r\n# 3) directly write all the examples to the arrow dataset \r\nwith ArrowWriter(path=\"debug_ds1/dataset.arrow\") as writer: \r\n writer.write(dict(id=0, tokens=[\"a\", \"b\"], ner_tags=[0, 0])) \r\n writer.write(dict(id=1, tokens=[\"x\", \"y\"], ner_tags=[1, 0])) \r\n writer.finalize() \r\n \r\nds2 = Dataset.load_from_disk(dataset_path=\"debug_ds1\")\r\nlen(ds2)\r\n```\r\nIs there a cleaner/proper way to do this?\r\n\r\nI like the sound of `Dataset.from_iterable` or `Dataset.from_generator` (should not from iterable be able to handle from generator too as all generators are iterables?) but how would I define the features for me examples there? ",
"Hi @johann-petrak! You can pass the features directly to ArrowWriter's initializer like so `ArrowWriter(..., features=features)`.\r\n\r\nAnd the reason why I prefer `Dataset.from_generator` over `Dataset.from_iterable` is mentioned in one of my previous comments.",
"@mariosasko so at the moment we still have to create a fake `Dataset` first and then use `ArrowWriter` to write an actual dataset? I'm using the latest version of `datasets` on pypi but my final file is always empty. Is there anything wrong with the code below?\r\n\r\n```python\r\n total = 0\r\n with ArrowWriter(path=str(final_data_path), features=features) as writer:\r\n for batch in loader:\r\n for traj in batch:\r\n for generator in question_generators:\r\n for xi in generator(traj):\r\n # print(f\"Question: {xi.question}, answer: {xi.answer}\")\r\n total += 1\r\n writer.write(\r\n {\r\n \"id\": f\"qa_{total}\",\r\n \"question\": xi.question,\r\n \"answer\": xi.answer,\r\n }\r\n )\r\n writer.finalize()\r\n print(f\"Total #questions = {total}\") # this prints 402\r\n```",
"This works for me if I then (actually I also close the writer: `writer.close()`) open the Arrow file as a dataset using `ds=Dataset.from_file(final_data_path)` then `ds.save_to_disk(somedir)`. The Dataset created that way contains the expected examples.",
"Oh thanks. That did the trick I believe. Shouldn't ArrowWriter have a context manager that does these operations?",
"You can just use `Dataset.from_file` to get your dataset, no need to do an extra `save_to_disk` somewhere else ;)",
"I was thinking that `save_to_disk` is necessary when one wants to re-use that dataset as a proper HF dataset later, no?\r\nAt least what I wanted to achieve is create a dataset that can be opened like any other local or remote dataset. ",
"`save_to_disk`/`load_from_disk` is indeed more general, e.g. it supports datasets that consist in several files, and saves some extra info in a dataset_info.json file (description, citation, split sizes, etc.)\r\n\r\nIf you have one single file it's fine to simply do `.from_file()`"
] | 2022-05-29T16:28:27 | 2022-09-16T14:44:19 | 2022-09-16T14:44:19 | NONE | null | ### Link
_No response_
### Description
Hey there, I have used seqio to get a well distributed mixture of samples from multiple dataset. However the resultant output from seqio is a python generator dict, which I cannot produce back into huggingface dataset.
The generator contains all the samples needed for training the model but I cannot convert it into a huggingface dataset.
The code looks like this:
```
for ex in seqio_data:
print(ex[βtextβ])
```
I need to convert the seqio_data (generator) into huggingface dataset.
the complete seqio code goes here:
```
import functools
import seqio
import tensorflow as tf
import t5.data
from datasets import load_dataset
from t5.data import postprocessors
from t5.data import preprocessors
from t5.evaluation import metrics
from seqio import FunctionDataSource, utils
TaskRegistry = seqio.TaskRegistry
def gen_dataset(split, shuffle=False, seed=None, column="text", dataset_params=None):
dataset = load_dataset(**dataset_params)
if shuffle:
if seed:
dataset = dataset.shuffle(seed=seed)
else:
dataset = dataset.shuffle()
while True:
for item in dataset[str(split)]:
yield item[column]
def dataset_fn(split, shuffle_files, seed=None, dataset_params=None):
return tf.data.Dataset.from_generator(
functools.partial(gen_dataset, split, shuffle_files, seed, dataset_params=dataset_params),
output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_name)
)
@utils.map_over_dataset
def target_to_key(x, key_map, target_key):
"""Assign the value from the dataset to target_key in key_map"""
return {**key_map, target_key: x}
dataset_name = 'oscar-corpus/OSCAR-2109'
subset= 'mr'
dataset_params = {"path": dataset_name, "language":subset, "use_auth_token":True}
dataset_shapes = None
TaskRegistry.add(
"oscar_marathi_corpus",
source=seqio.FunctionDataSource(
dataset_fn=functools.partial(dataset_fn, dataset_params=dataset_params),
splits=("train", "validation"),
caching_permitted=False,
num_input_examples=dataset_shapes,
),
preprocessors=[
functools.partial(
target_to_key, key_map={
"targets": None,
}, target_key="targets")],
output_features={"targets": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},
metric_fns=[]
)
dataset = seqio.get_mixture_or_task("oscar_marathi_corpus").get_dataset(
sequence_length=None,
split="train",
shuffle=True,
num_epochs=1,
shard_info=seqio.ShardInfo(index=0, num_shards=10),
use_cached=False,
seed=42
)
for _, ex in zip(range(5), dataset):
print(ex['targets'].numpy().decode())
```
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4417/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4416/comments | https://api.github.com/repos/huggingface/datasets/issues/4416/events | https://github.com/huggingface/datasets/pull/4416 | 1,251,875,763 | PR_kwDODunzps44o7sF | 4,416 | Add LCCC dataset | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you very much for your help @albertvillanova .\r\n\r\nI think I have fixed all the comments.\r\n\r\nPlease let me know if this PR need further modification ;)",
"@albertvillanova Thank you very much for your kind help.\r\nThese suggestions make the code looks more pythonic.\r\n\r\nI have commited these changes",
"Hi ! The dataset seems to be a duplicate of https://huggingface.co/datasets/silver/lccc - next time no need to add it on github if it's already available on huggingface.co ;)",
"> Hi ! The dataset seems to be a duplicate of https://huggingface.co/datasets/silver/lccc - next time no need to add it on github if it's already available on huggingface.co ;)\r\n\r\nOK, sorry for the inconvenience. I have closed another two PRs since these datasets are already available on huggingface.co",
"It's fine, thanks @silverriver for adding these datasets !"
] | 2022-05-29T12:27:19 | 2022-06-15T10:28:59 | 2022-06-02T09:13:46 | CONTRIBUTOR | null | Hi, I am trying to add a new dataset lccc.
All tests are passed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4416/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4416",
"html_url": "https://github.com/huggingface/datasets/pull/4416",
"diff_url": "https://github.com/huggingface/datasets/pull/4416.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4416.patch",
"merged_at": "2022-06-02T09:13:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4415/comments | https://api.github.com/repos/huggingface/datasets/issues/4415/events | https://github.com/huggingface/datasets/pull/4415 | 1,251,002,981 | PR_kwDODunzps44mIJk | 4,415 | Update `dataset_infos.json` with new split info in `Dataset.push_to_hub` to avoid verification error | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-27T17:03:42 | 2022-06-07T12:42:25 | 2022-06-07T12:33:52 | CONTRIBUTOR | null | Update `dataset_infos.json` when pushing splits one by one via `Dataset.push_to_hub` to avoid the splits verification error.
TODO:
~~- [ ] handle token + `{Audio, Image}.embed_storage`~~
- [x] tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4415/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4415",
"html_url": "https://github.com/huggingface/datasets/pull/4415",
"diff_url": "https://github.com/huggingface/datasets/pull/4415.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4415.patch",
"merged_at": "2022-06-07T12:33:52"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4414 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4414/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4414/comments | https://api.github.com/repos/huggingface/datasets/issues/4414/events | https://github.com/huggingface/datasets/pull/4414 | 1,250,546,888 | PR_kwDODunzps44klhY | 4,414 | Rename DatasetBuilder config_name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-27T09:28:02 | 2022-05-31T15:07:21 | 2022-05-31T14:58:51 | MEMBER | null | This PR renames the DatasetBuilder keyword argument `name` to `config_name` so that:
- it avoids confusion with the attribute `DatasetBuilder.name`, which is different
- it aligns with the Dataset property name `config_name`, defined in `DatasetInfoMixin.config_name`
Other simpler possibility could be to rename it to just `config` instead.
Please note I have only renamed this argument of DatasetBuilder because I think this refactoring has a low impact on users: we can assume this is not a public facing parameter, but private or related to the inners of our library.
It would have a major impact to rename it also in:
- load_dataset
- load_dataset_builder: although this could also be assumed as inners...
- in our CLI commands
Besides the naming of `name`, I also find really confusing the naming of `path` in `load_dataset`. IMHO, they should have a more simpler and precise meaning (currently, they are too vague). I would propose (maybe for next major release):
```
load_dataset(dataset, config,...
```
instead of
```
load_dataset(path, name,...
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4414/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4414",
"html_url": "https://github.com/huggingface/datasets/pull/4414",
"diff_url": "https://github.com/huggingface/datasets/pull/4414.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4414.patch",
"merged_at": "2022-05-31T14:58:51"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4413/comments | https://api.github.com/repos/huggingface/datasets/issues/4413/events | https://github.com/huggingface/datasets/issues/4413 | 1,250,259,822 | I_kwDODunzps5KhXNu | 4,413 | Dataset Viewer issue for ett | {
"login": "dgcnz",
"id": 24966039,
"node_id": "MDQ6VXNlcjI0OTY2MDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/24966039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dgcnz",
"html_url": "https://github.com/dgcnz",
"followers_url": "https://api.github.com/users/dgcnz/followers",
"following_url": "https://api.github.com/users/dgcnz/following{/other_user}",
"gists_url": "https://api.github.com/users/dgcnz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dgcnz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dgcnz/subscriptions",
"organizations_url": "https://api.github.com/users/dgcnz/orgs",
"repos_url": "https://api.github.com/users/dgcnz/repos",
"events_url": "https://api.github.com/users/dgcnz/events{/privacy}",
"received_events_url": "https://api.github.com/users/dgcnz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting @dgcnz.\r\n\r\nI have checked that the dataset works fine in streaming mode.\r\n\r\nAdditionally, other datasets containing timestamps are properly rendered by the viewer: https://huggingface.co/datasets/blbooks\r\n\r\nI have tried to force the refresh of the preview, but the endpoint is not responsive: Connection timed out\r\n\r\nCC: @severo ",
"I've just resent the refresh of the preview to the new endpoint, without success.\r\n\r\nCC: @severo ",
"Fixed!\r\n\r\nhttps://huggingface.co/datasets/ett/viewer/h1/test\r\n\r\n<img width=\"982\" alt=\"Capture dβeΜcran 2022-06-15 aΜ 09 30 22\" src=\"https://user-images.githubusercontent.com/1676121/173769035-a075d753-ecfc-4a43-b54b-973105d464d3.png\">\r\n"
] | 2022-05-27T02:12:35 | 2022-06-15T07:30:46 | 2022-06-15T07:30:46 | NONE | null | ### Link
https://huggingface.co/datasets/ett
### Description
Timestamp is not JSON serializable.
```
Status code: 500
Exception: Status500Error
Message: Type is not JSON serializable: Timestamp
```
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4413/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4412/comments | https://api.github.com/repos/huggingface/datasets/issues/4412/events | https://github.com/huggingface/datasets/pull/4412 | 1,249,490,179 | PR_kwDODunzps44hFvq | 4,412 | Skip hidden files/directories in data files resolution and `iter_files` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This PR (via new release) broke many transformers tests.\r\n\r\nI will try to post a summary shortly.\r\n\r\ncc: @ydshieh ",
"So now it can't handle a local path via: `--train_file tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/train.json` even though it's there. it works just fine if I change the path to not have `..`\r\n\r\nYou can reproduce the original problem with:\r\n\r\n```\r\n$ cd transformers \r\n$ python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --train_file tests/fixtures/tests_samples/wmt_en_ro/train.json --validation_file tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/val.json --output_dir /tmp/tmp5o5to4k0 --overwrite_output_dir --max_source_length 32 --max_target_length 32 --val_max_target_length 32 --warmup_steps 8 --predict_with_generate --save_steps 0 --eval_steps 1 --group_by_length --label_smoothing_factor 0.1 --source_lang en --target_lang ro --report_to none --source_prefix \"translate English to Romanian: \" --fp16 --do_train --num_train_epochs 1 --max_train_samples 16 --per_device_train_batch_size 2 --learning_rate 3e-3\r\n[...]\r\nTraceback (most recent call last):\r\n File \"examples/pytorch/translation/run_translation.py\", line 656, in <module>\r\n main()\r\n File \"examples/pytorch/translation/run_translation.py\", line 346, in main\r\n raw_datasets = load_dataset(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/load.py\", line 1656, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/load.py\", line 1439, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/load.py\", line 1097, in dataset_module_factory\r\n return PackagedDatasetModuleFactory(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/load.py\", line 743, in get_module\r\n data_files = DataFilesDict.from_local_or_remote(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/data_files.py\", line 588, in from_local_or_remote\r\n DataFilesList.from_local_or_remote(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/data_files.py\", line 556, in from_local_or_remote\r\n data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/data_files.py\", line 194, in resolve_patterns_locally_or_by_urls\r\n for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/data_files.py\", line 144, in _resolve_single_pattern_locally\r\n raise FileNotFoundError(error_msg)\r\nFileNotFoundError: Unable to find '/mnt/nvme0/code/huggingface/transformers-master/tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/val.json' at /mnt/nvme0/code/huggingface/transformers-master\r\n```",
"will apply a workaround to `transformers` tests here https://github.com/huggingface/transformers/pull/17721\r\n",
"This has been fixed with https://github.com/huggingface/datasets/pull/4505, will do a patch release tomorrow for `datasets` ;)",
"Thank you for the quick fix, @lhoestq "
] | 2022-05-26T12:10:28 | 2022-06-15T17:11:25 | 2022-06-01T13:04:16 | CONTRIBUTOR | null | Fix #4115 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4412/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4412",
"html_url": "https://github.com/huggingface/datasets/pull/4412",
"diff_url": "https://github.com/huggingface/datasets/pull/4412.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4412.patch",
"merged_at": "2022-06-01T13:04:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4411/comments | https://api.github.com/repos/huggingface/datasets/issues/4411/events | https://github.com/huggingface/datasets/pull/4411 | 1,249,462,390 | PR_kwDODunzps44g_yL | 4,411 | Update `_format_columns` in `remove_columns` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"π€ This PR closes https://github.com/huggingface/datasets/issues/4398",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi! Thanks for reporting and providing a fix. I made a small change to make the fix easier to understand.",
"Hi, @mariosasko thanks! It makes sense, sorry I'm not that familiar with `datasets` code π© ",
"Sure @albertvillanova I'll do that later today and ping you once done, thanks! :hugs:",
"Hi again @albertvillanova! Let me know if those tests are fine π€ ",
"Hi @alvarobartt,\r\n\r\nI think your tests are failing. I don't know why previously, after your last commit, the CI tests were not triggered. \r\n\r\nIn order to force the re-running of the CI tests, I had to edit your file using the GitHub UI.\r\n\r\nFirst I tried to do it using my terminal, but I don't have push right to your PR branch: I would ask you next time you open a PR, please mark the checkbox \"Allow edits from maintainers\": https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork#enabling-repository-maintainer-permissions-on-existing-pull-requests",
"Hi @albertvillanova, let me check those again! And regarding that checkbox I thought it was already checked so my bad there π© ",
"@albertvillanova again it seems that the tests were not automatically triggered, but I tested those locally and now they work, as previously those were failing as I created an assertion as `self.assertEqual` over an empty list that was compared as `None` while the value was `[]` so I updated it to be `self.assertListEqual` and changed the comparison value to `[]`.",
"@lhoestq any idea why the CI is not triggered?",
"@alvarobartt I have tested locally and the tests continue failing.\r\n\r\nI think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n",
"You're right @albertvillanova I was indeed running the tests with `datasets==2.2.0` rather than with the branch version, I'll check it again! Sorry for the inconvenience...",
"> @alvarobartt I have tested locally and the tests continue failing.\r\n> \r\n> I think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n\r\nIn order to have some regressions tests for the fixed scenario, I've manually updated the value of `_format_columns` in the `ArrowDataset` so as to check whether it's updated or not right after calling `remove_columns`, and it does behave as expected, so with the latest version of this branch the reported issue doesn't occur anymore.",
"Hi again @albertvillanova sorry I was on leave! I'll do that ASAP :hugs:",
"@albertvillanova, does it make sense to add regression tests for `DatasetDict`? As `DatasetDict` doesn't have the attribute `_format_columns`, when we call `remove_columns` over a `DatasetDict` it removes the columns and updates the attributes of each split which is an `ArrowDataset`.\r\n\r\nSo on, we can either:\r\n- Update first the `_format_columns` attribute of each split and then remove the columns over the `DatasetDict`\r\n- Loop over the splits of `DatasetDict` and remove the columns right after updating `_format_columns` of each `ArrowDataset`.\r\n\r\nI assume that the best regression test is the one implemented (mentioned first above), let me know if there's a better way to do that ππ» ",
"I think there's already a decorator to support transmitting the right `_format_columns`: `@transmit_format`, have you tried adding this decorator to `remove_columns` ?",
"> I think there's already a decorator to support transmitting the right `_format_columns`: `@transmit_format`, have you tried adding this decorator to `remove_columns` ?\r\n\r\nHi @lhoestq I can check now!",
"It worked indeed @lhoestq, thanks for the proposal and the review! π€ ",
"Oops, I forgot about `@transmit_format`'s existence. From what I see, we should also use this decorator in `flatten`, `rename_column` and `rename_columns`. \r\n\r\n@alvarobartt Let me know if you'd like to work on this (in a subsequent PR).",
"Sure @mariosasko I can prepare another PR to add those too, thanks! "
] | 2022-05-26T11:40:06 | 2022-06-14T19:05:37 | 2022-06-14T16:01:56 | CONTRIBUTOR | null | As explained at #4398, when calling `dataset.add_faiss_index` under certain conditions when calling a sequence of operations `cast_column`, `map`, and `remove_columns`, it fails as it's trying to look for already removed columns.
So on, after testing some possible fixes, it seems that setting the dataset format right after removing the columns seems to be working fine, so I had to add a call to `.set_format` in the `remove_columns` function.
Hope this helps! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4411/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4411",
"html_url": "https://github.com/huggingface/datasets/pull/4411",
"diff_url": "https://github.com/huggingface/datasets/pull/4411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4411.patch",
"merged_at": "2022-06-14T16:01:55"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4410/comments | https://api.github.com/repos/huggingface/datasets/issues/4410/events | https://github.com/huggingface/datasets/pull/4410 | 1,249,148,457 | PR_kwDODunzps44f_Td | 4,410 | Remove Google Drive URL in spider dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-26T06:17:35 | 2022-05-26T06:48:42 | 2022-05-26T06:40:12 | MEMBER | null | The `spider` dataset is distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.
Fix #4401. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4410/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4410",
"html_url": "https://github.com/huggingface/datasets/pull/4410",
"diff_url": "https://github.com/huggingface/datasets/pull/4410.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4410.patch",
"merged_at": "2022-05-26T06:40:12"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4409/comments | https://api.github.com/repos/huggingface/datasets/issues/4409/events | https://github.com/huggingface/datasets/pull/4409 | 1,249,083,179 | PR_kwDODunzps44fxiH | 4,409 | Update: add using pcm bytes (#4323) | {
"login": "YooSungHyun",
"id": 34292279,
"node_id": "MDQ6VXNlcjM0MjkyMjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/34292279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YooSungHyun",
"html_url": "https://github.com/YooSungHyun",
"followers_url": "https://api.github.com/users/YooSungHyun/followers",
"following_url": "https://api.github.com/users/YooSungHyun/following{/other_user}",
"gists_url": "https://api.github.com/users/YooSungHyun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YooSungHyun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YooSungHyun/subscriptions",
"organizations_url": "https://api.github.com/users/YooSungHyun/orgs",
"repos_url": "https://api.github.com/users/YooSungHyun/repos",
"events_url": "https://api.github.com/users/YooSungHyun/events{/privacy}",
"received_events_url": "https://api.github.com/users/YooSungHyun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Maybe I'm missing something, but what's the reason to read and encode PCM files to WAV in `Audio.encode_example`. Isn't the whole purpose of the decodable types to operate on raw files whenever possible? IMO this PR should only modify `Audio.decode_example` to support PCM files/bytes decoding.",
"Because the PCM file is not enough, we also need the `sampling_rate` associated to it. Therefore the two alternatives are either:\r\n- convert to WAV\r\n- add a `sampling_rate` field to the Audio arrow storage (not sure how it would behave for backward compatibility though)",
"But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.",
"How does it get the sampling rate of a PCM file then ? According to [SO](https://stackoverflow.com/a/57027667/17517845) it's not possible to infer it from the file alone",
"> Awesome thanks ! Could you also add tests in `tests/features/test_audio.py` ?\r\n> \r\n> Maybe add a small pcm file in `tests/features/data` and check that everything works as expected in tests cases like `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` for example.\r\n\r\n@lhoestq how can i test test_audio.py? where is \"__main__\" func?\r\ndo you have some example or guideline?",
"> But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.\r\n\r\n@mariosasko @lhoestq \r\nthanks for comment!\r\n\r\nFirst of all, \"PCM file\" can not read alone to any audio library.\r\n\"PCM file\" has not any audio META information header. (it just purely audio byte data. therefore, we don't have to encoding and decoding)\r\nbut, \"PCM file\" is audio extension, so we can use `datasets.Audio`\r\n\r\nif you want to read \"PCM file\" to audio file likely, it have to needs additional parameter. (channel, sampling_rate, else....)\r\nbut, in many situation, we only know sampling_rate for PCM\r\n\r\nand, if we want to use `datasets.Audio` for \"PCM file\", we must process encode_example.\r\ntherefore, i have to use sampling_rate for encoding for making wav-style byte. (we only know sampling_rate)\r\n\r\nIn my source code, I don't compare sampling rate(`datasets.Audio's self.sampling_rate` and `read pcm sampling_rate(value[\"sampling_rate\"])`) and checking mono\r\n@mariosasko ! do you want to process resampling and making mono? then i can modify my source\r\n",
"There is no \"main\" function in test scripts :) To run a test script you must use the `pytest` command:\r\n```\r\npytest tests/features/test_audio.py\r\n```\r\n\r\nto run only one function you can also do\r\n```\r\npytest tests/features/test_audio.py::test_audio_feature_type_to_arrow\r\n```\r\nfor example",
"@lhoestq\r\nmaybe, if i write test code, i have to commit test_audio.py and send pr?\r\nbecause, we need to keep `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` method after my pr merged?",
"You can add your tests in this PR with the other changes you did",
"@lhoestq \r\ntest complete & commit my test_audio.py\r\n\r\nAND, some change in my code.\r\n\r\naudio.py\r\ni think \"sampling_rate\" is already Audio object initial variable. so, we don`t have to use input parameter.\r\n\r\ntest_audio.py\r\nwe can check \"PCM\" file to path (exactly, extenstion)\r\nso, test case has to know `path`. if only have `bytes`, we don`t know that is \"PCM\" or not",
"@lhoestq\r\nand, why circleci raised exception?\r\nmaybe, [repo](https://huggingface.co/api/datasets/lhoestq/_dummy?full=true) url is not found!\r\nPLZ, CHK!",
"@lhoestq\r\nhello????",
"@lhoestq \r\ntest_audio.py\r\nif we don`t use path in pcm, test-case need to be changed\r\nso, we check path just None",
"i'm merge branch already and `multiprocess` in `setup.py` but circleci error only win version\r\n![image](https://user-images.githubusercontent.com/34292279/175461714-c7d2e741-3b7b-40a3-bba9-13ce2af0055c.png)\r\nhow can i fixed it?",
"@lhoestq thx for comment!\r\ntest_audio.py test complete. it runs sucessfully\r\nand, self.get(\"sampling_rate\") -> value.get(\"sampling_rate\") changed\r\n\r\nand, some comment is not agreed to me, plz check my sub comment!",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-26T04:26:36 | 2022-07-07T13:27:29 | 2022-07-07T13:16:09 | CONTRIBUTOR | null | first of all, please look #4323
why i can not use {"path","array","sampling_rate"}
because sf.write(format="wav") and sf.read(BytesIO) is changed my pcm data value
maybe, i think wav got header but, pcm is not.
and variable naming, pcm data is "byte" type. so, "array" name is not fair i think
so, i use scipy lib and numpy (that is huggingface dependency)
and refer to @lhoestq answered,
1. encode -> using sampling_rate and pcm byte -> wav style byte (scipy.wavfile.write to byte)
2. byte converting using fairseq style pcm audio read [FileAudioDataset](https://github.com/facebookresearch/fairseq/blob/main/fairseq/data/audio/raw_audio_dataset.py)
4. decode -> read wavfile.read
that way is not screw up my pcm byte to float data, and another audio type(wav) safety
please check! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4409/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4409",
"html_url": "https://github.com/huggingface/datasets/pull/4409",
"diff_url": "https://github.com/huggingface/datasets/pull/4409.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4409.patch",
"merged_at": "2022-07-07T13:16:08"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4408/comments | https://api.github.com/repos/huggingface/datasets/issues/4408/events | https://github.com/huggingface/datasets/pull/4408 | 1,248,687,574 | PR_kwDODunzps44ecNI | 4,408 | Update imagenet gate | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-25T20:32:19 | 2022-05-25T20:45:11 | 2022-05-25T20:36:47 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4408/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4408",
"html_url": "https://github.com/huggingface/datasets/pull/4408",
"diff_url": "https://github.com/huggingface/datasets/pull/4408.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4408.patch",
"merged_at": "2022-05-25T20:36:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4407/comments | https://api.github.com/repos/huggingface/datasets/issues/4407/events | https://github.com/huggingface/datasets/issues/4407 | 1,248,671,778 | I_kwDODunzps5KbTgi | 4,407 | Dataset Viewer issue for conll2012_ontonotesv5 | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @jiangwy99.\r\n\r\nI guess this could be addressed only once we fix our issue with irresponsive backend endpoint.\r\n\r\nCC: @severo ",
"I've just sent the forcing of the refresh of the preview to the new endpoint.",
"Fixed, thanks for the patience. The issue was the amount of RAM allowed to extract the first rows of the dataset was not sufficient."
] | 2022-05-25T20:18:33 | 2022-06-07T18:39:16 | 2022-06-07T18:39:16 | NONE | null | ### Link
https://huggingface.co/datasets/conll2012_ontonotesv5
### Description
Dataset viewer outage.
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4407/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4406/comments | https://api.github.com/repos/huggingface/datasets/issues/4406/events | https://github.com/huggingface/datasets/pull/4406 | 1,248,626,622 | PR_kwDODunzps44ePLU | 4,406 | Improve language tag for PIAF dataset | {
"login": "lbourdois",
"id": 58078086,
"node_id": "MDQ6VXNlcjU4MDc4MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/58078086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lbourdois",
"html_url": "https://github.com/lbourdois",
"followers_url": "https://api.github.com/users/lbourdois/followers",
"following_url": "https://api.github.com/users/lbourdois/following{/other_user}",
"gists_url": "https://api.github.com/users/lbourdois/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lbourdois/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lbourdois/subscriptions",
"organizations_url": "https://api.github.com/users/lbourdois/orgs",
"repos_url": "https://api.github.com/users/lbourdois/repos",
"events_url": "https://api.github.com/users/lbourdois/events{/privacy}",
"received_events_url": "https://api.github.com/users/lbourdois/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-25T19:41:55 | 2022-05-27T14:51:23 | 2022-05-27T14:51:23 | NONE | null | Hi,
As pointed out by @lhoestq in this discussion (https://huggingface.co/datasets/asi/wikitext_fr/discussions/1), it is not yet possible to edit datasets outside of a namespace with the Hub PR feature and that you have to go through GitHub.
This modification should allow better referencing since only the xx language tags are currently taken into account and not the xx-xx. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4406/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4406",
"html_url": "https://github.com/huggingface/datasets/pull/4406",
"diff_url": "https://github.com/huggingface/datasets/pull/4406.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4406.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4405/comments | https://api.github.com/repos/huggingface/datasets/issues/4405/events | https://github.com/huggingface/datasets/issues/4405 | 1,248,574,087 | I_kwDODunzps5Ka7qH | 4,405 | [TypeError: Couldn't cast array of type] Cannot process dataset in v2.2.2 | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"And if the problem is that the way I am to construct the {Entity Type: list of spans} makes entity types without any spans hard to handle, is there a better way to meet the demand? Although I have verified that to make entity types without any spans to behave like `entity_chunk[label] = [[\"\"]]` can perform normally, I still wonder if there is a more elegant way?"
] | 2022-05-25T18:56:43 | 2022-06-07T14:27:20 | 2022-06-07T14:27:20 | NONE | null | ## Describe the bug
I am trying to process the [conll2012_ontonotesv5](https://huggingface.co/datasets/conll2012_ontonotesv5) dataset in `datasets` v2.2.2 and am running into a type error when casting the features.
## Steps to reproduce the bug
```python
import os
from typing import (
List,
Dict,
)
from collections import (
defaultdict,
)
from dataclasses import (
dataclass,
)
from datasets import (
load_dataset,
)
@dataclass
class ConllConverter:
path: str
name: str
cache_dir: str
def __post_init__(
self,
):
self.dataset = load_dataset(
path=self.path,
name=self.name,
cache_dir=self.cache_dir,
)
def convert(
self,
):
class_label = self.dataset["train"].features["sentences"][0]["named_entities"].feature
# label_set = list(set([
# label.split("-")[1] if label != "O" else label for label in class_label.names
# ]))
def prepare_chunk(token, entity):
assert len(token) == len(entity)
# Sequence length
length = len(token)
# Variable used
entity_chunk = defaultdict(list)
idx = flag = 0
# While loop
while idx < length:
if entity[idx] == "O":
flag += 1
idx += 1
else:
iob_tp, lab_tp = entity[idx].split("-")
assert iob_tp == "B"
idx += 1
while idx < length and entity[idx].startswith("I-"):
idx += 1
entity_chunk[lab_tp].append(token[flag: idx])
flag = idx
entity_chunk = dict(entity_chunk)
# for label in label_set:
# if label != "O" and label not in entity_chunk.keys():
# entity_chunk[label] = None
return entity_chunk
def prepare_features(
batch: Dict[str, List],
) -> Dict[str, List]:
sentence = [
sent for doc_sent in batch["sentences"] for sent in doc_sent
]
feature = {
"sentence": list(),
}
for sent in sentence:
token = sent["words"]
entity = class_label.int2str(sent["named_entities"])
entity_chunk = prepare_chunk(token, entity)
sent_feat = {
"token": token,
"entity": entity,
"entity_chunk": entity_chunk,
}
feature["sentence"].append(sent_feat)
return feature
column_names = self.dataset.column_names["train"]
dataset = self.dataset.map(
function=prepare_features,
with_indices=False,
batched=True,
batch_size=3,
remove_columns=column_names,
num_proc=1,
)
dataset.save_to_disk(
dataset_dict_path=os.path.join("data", self.path, self.name)
)
if __name__ == "__main__":
converter = ConllConverter(
path="conll2012_ontonotesv5",
name="english_v4",
cache_dir="cache",
)
converter.convert()
```
## Expected results
I want to use the dataset to perform NER task and to change the label list into a {Entity Type: list of spans} format.
## Actual results
<details>
<summary>Traceback</summary>
```python
Traceback (most recent call last): | 0/81 [00:00<?, ?ba/s]
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 532, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 499, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2751, in _map_single
writer.write_batch(batch)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 503, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 198, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in cast_array_to_feature
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in <listcomp>
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1844, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>>
to
{'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)}
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 110, in <module>
converter.convert()
File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 91, in convert
dataset = self.dataset.map(
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 770, in map
{
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp>
k: dataset.map(
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2459, in map
transformed_shards[index] = async_result.get()
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 771, in get
raise self._value
TypeError: Couldn't cast array of type
struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>>
to
{'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)}
```
</details>
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: Ubuntu 18.04
- Python version: 3.9.7
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4405/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4404/comments | https://api.github.com/repos/huggingface/datasets/issues/4404/events | https://github.com/huggingface/datasets/issues/4404 | 1,248,572,899 | I_kwDODunzps5Ka7Xj | 4,404 | Dataset should have a `.name` field | {
"login": "f4hy",
"id": 36440,
"node_id": "MDQ6VXNlcjM2NDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/36440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/f4hy",
"html_url": "https://github.com/f4hy",
"followers_url": "https://api.github.com/users/f4hy/followers",
"following_url": "https://api.github.com/users/f4hy/following{/other_user}",
"gists_url": "https://api.github.com/users/f4hy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/f4hy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/f4hy/subscriptions",
"organizations_url": "https://api.github.com/users/f4hy/orgs",
"repos_url": "https://api.github.com/users/f4hy/repos",
"events_url": "https://api.github.com/users/f4hy/events{/privacy}",
"received_events_url": "https://api.github.com/users/f4hy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! You can already use `dset.builder_name` and `dset.config_name` for that purpose. And when it comes to versioning, it's better to use `dset._fingerprint` than the `version` attribute as the former represents a deterministic hash that encodes all the mutable ops executed on a dataset, and the latter stays the same unless it's manually updated after each op.",
"@mariosasko Can we make ._fingerprint not private? seems a critical component for tracking how a model was generated to ensure reproducibility."
] | 2022-05-25T18:56:08 | 2022-09-13T15:09:30 | 2022-06-16T10:47:53 | NONE | null | **Is your feature request related to a problem? Please describe.**
If building pipelines that can evaluate on more than one dataset, it would be nice to be able to log results of things like `Evaluating on {dataset.name}` or `results for {dataset.name} are: {results}`
Without some way of concisely identifying a dataset from the dataset object, tools which might run on more than one dataset must be passed the dataset object _and_ the name/id of the dataset being used.
**Describe the solution you'd like**
The DatasetInfo class should have a `name` field which is the name of a dataset. then for a given dataset if it evolves in time the `version` can be updated but its different versions of the same dataset with a unique `name`. The name could then all be accessed by `dataset.name`
**Describe alternatives you've considered**
For my own purposes I am considering making `NamedDataset[Dataset]` where the subclass just has a .name field.
**Additional context**
My guess is that most usecases are not working with more than one dataset in a given pipeline so a name is not really needed. This has surprised me though as one of the advantages of a standard dataset interface is to be able to build pipelines which can be passed in a dataset and separate responsibilities of the dataset loading from the train or eval pipeline.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4404/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4403/comments | https://api.github.com/repos/huggingface/datasets/issues/4403/events | https://github.com/huggingface/datasets/pull/4403 | 1,248,390,134 | PR_kwDODunzps44dcpl | 4,403 | Uncomment logging deactivation for ArrowBasedBuilder | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-25T16:46:15 | 2022-05-31T08:33:36 | 2022-05-31T08:25:02 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4403/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4403",
"html_url": "https://github.com/huggingface/datasets/pull/4403",
"diff_url": "https://github.com/huggingface/datasets/pull/4403.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4403.patch",
"merged_at": "2022-05-31T08:25:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4402/comments | https://api.github.com/repos/huggingface/datasets/issues/4402/events | https://github.com/huggingface/datasets/pull/4402 | 1,248,078,067 | PR_kwDODunzps44cdR5 | 4,402 | Skip identical files in `push_to_hub` instead of overwriting | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-25T13:12:51 | 2022-05-25T15:16:36 | 2022-05-25T15:08:03 | CONTRIBUTOR | null | Skip identical files instead of overwriting them to save bandwidth and circumvent (user-side/server-side) errors, which can arise when working with large datasets due to long-lasting HTTP connections, by repeating calls to `push_to_hub` to resume an upload.
To be able to check if an upload can be resumed, this PR modifies the shard naming scheme from:
```
data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].parquet
```
to:
```
data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]-<SHARD_FINGERPRINT>.parquet
```
cc @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4402/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4402/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4402",
"html_url": "https://github.com/huggingface/datasets/pull/4402",
"diff_url": "https://github.com/huggingface/datasets/pull/4402.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4402.patch",
"merged_at": "2022-05-25T15:08:03"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4401/comments | https://api.github.com/repos/huggingface/datasets/issues/4401/events | https://github.com/huggingface/datasets/issues/4401 | 1,247,695,921 | I_kwDODunzps5KXlQx | 4,401 | "NonMatchingChecksumError" when importing 'spider' dataset | {
"login": "OmarAlaaeldein",
"id": 81417777,
"node_id": "MDQ6VXNlcjgxNDE3Nzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/81417777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OmarAlaaeldein",
"html_url": "https://github.com/OmarAlaaeldein",
"followers_url": "https://api.github.com/users/OmarAlaaeldein/followers",
"following_url": "https://api.github.com/users/OmarAlaaeldein/following{/other_user}",
"gists_url": "https://api.github.com/users/OmarAlaaeldein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OmarAlaaeldein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OmarAlaaeldein/subscriptions",
"organizations_url": "https://api.github.com/users/OmarAlaaeldein/orgs",
"repos_url": "https://api.github.com/users/OmarAlaaeldein/repos",
"events_url": "https://api.github.com/users/OmarAlaaeldein/events{/privacy}",
"received_events_url": "https://api.github.com/users/OmarAlaaeldein/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4069435429,
"node_id": "LA_kwDODunzps7yjqgl",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive",
"name": "hosted-on-google-drive",
"color": "8B51EF",
"default": false,
"description": ""
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @OmarAlaaeldein.\r\n\r\nDatasets hosted at Google Drive give problems quite often due to a change in their service:\r\n- #3786 \r\n\r\nRelated to:\r\n- #3906\r\n\r\nI'm having a look.",
"We have made a Pull Request to replace the Google Drive URL. This fix will be accessible in our next `datasets` library release.\r\n\r\nIn the meantime, once the PR merged into master, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```"
] | 2022-05-25T07:45:07 | 2022-05-26T06:40:12 | 2022-05-26T06:40:12 | NONE | null | ## Describe the bug
When importing 'spider' dataset [https://huggingface.co/datasets/spider] an error occurs
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('spider')
```
## Expected results
Dataset object
## Actual results
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']
## Environment info
- `datasets` version: 2.2.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.11
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4401/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4400/comments | https://api.github.com/repos/huggingface/datasets/issues/4400/events | https://github.com/huggingface/datasets/issues/4400 | 1,247,404,237 | I_kwDODunzps5KWeDN | 4,400 | load dataset wikitext-2-raw-v1 failed. Could not reach wikitext-2-raw-v1.py. | {
"login": "cailun01",
"id": 20658907,
"node_id": "MDQ6VXNlcjIwNjU4OTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/20658907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cailun01",
"html_url": "https://github.com/cailun01",
"followers_url": "https://api.github.com/users/cailun01/followers",
"following_url": "https://api.github.com/users/cailun01/following{/other_user}",
"gists_url": "https://api.github.com/users/cailun01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cailun01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cailun01/subscriptions",
"organizations_url": "https://api.github.com/users/cailun01/orgs",
"repos_url": "https://api.github.com/users/cailun01/repos",
"events_url": "https://api.github.com/users/cailun01/events{/privacy}",
"received_events_url": "https://api.github.com/users/cailun01/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I tried in this way.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(path=\"wikitext\", name=\"wikitext-103-v1\", split=\"train\")\r\n```"
] | 2022-05-25T03:10:44 | 2022-10-24T06:10:27 | 2022-05-25T03:26:36 | NONE | null | ## Describe the bug
Could not reach wikitext-2-raw-v1.py
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikitext-2-raw-v1")
```
## Expected results
Download `wikitext-2-raw-v1` dataset successfully.
## Actual results
```
File "load_datasets.py", line 13, in <module>
load_dataset("wikitext-2-raw-v1")
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1715, in load_dataset
**config_kwargs,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1536, in load_dataset_builder
data_files=data_files,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1282, in dataset_module_factory
raise e1 from None
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1224, in dataset_module_factory
dynamic_modules_path=dynamic_modules_path,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 559, in get_module
local_path = self.download_loading_script(revision)
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 539, in download_loading_script
return cached_path(file_path, download_config=download_config)
File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 246, in cached_path
download_desc=download_config.download_desc,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 582, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.2.2/datasets/wikitext-2-raw-v1/wikitext-2-raw-v1.py (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Read timed out. (read timeout=100)",),))
```
I tried to download wikitext-2-raw-v1.py by chrome and got:
![image](https://user-images.githubusercontent.com/20658907/170171595-0ca9f1da-c05a-4b57-861e-9530bfa3bdb9.png)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: CentOS 7
- Python version: 3.6
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4400/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4399/comments | https://api.github.com/repos/huggingface/datasets/issues/4399/events | https://github.com/huggingface/datasets/issues/4399 | 1,246,948,299 | I_kwDODunzps5KUuvL | 4,399 | LocalDatasetModuleFactoryWithoutScript extracts invalid builder name | {
"login": "apohllo",
"id": 40543,
"node_id": "MDQ6VXNlcjQwNTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/40543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apohllo",
"html_url": "https://github.com/apohllo",
"followers_url": "https://api.github.com/users/apohllo/followers",
"following_url": "https://api.github.com/users/apohllo/following{/other_user}",
"gists_url": "https://api.github.com/users/apohllo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apohllo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apohllo/subscriptions",
"organizations_url": "https://api.github.com/users/apohllo/orgs",
"repos_url": "https://api.github.com/users/apohllo/repos",
"events_url": "https://api.github.com/users/apohllo/events{/privacy}",
"received_events_url": "https://api.github.com/users/apohllo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Ok, so\r\n```\r\nos.path.basename(\"/home/user/\")\r\n```\r\ngives `''` while \r\n```\r\nos.path.basename(\"/home/user\")\r\n```\r\ngives `user`. \r\nThe code should check if the last char is a slash.\r\n",
"The fix is:\r\n```\r\n\"name\": os.path.basename(self.path[:-1] if self.path[-1] == \"/\" else self.path)\r\n```",
"I came through the same issue , just removing the last slash in the dataset path fixed it for me, may be this repo moderators could accept this as an accepted answer atleast if this could not be integrated\r\n\r\n> The fix is:\r\n> \r\n> ```\r\n> \"name\": os.path.basename(self.path[:-1] if self.path[-1] == \"/\" else self.path)\r\n> ```\r\n\r\n@apohllo consider making a pull request on this \r\n\r\nThanks for the amazing contributions from huggingface people !!\r\n",
"@apohllo Would you be interested in submitting a PR with the fix?",
"@mariosasko here we go:\r\n\r\nhttps://github.com/huggingface/datasets/pull/4967\r\n\r\nTBH I haven't tested it yet, but should work, since this is a basic change."
] | 2022-05-24T18:03:01 | 2022-09-12T15:30:43 | 2022-09-12T15:30:43 | CONTRIBUTOR | null | ## Describe the bug
Trying to load a local dataset raises an error indicating that the config builder has to have a name.
No error should be reported, since the call is completly valid.
## Steps to reproduce the bug
```python
load_dataset("./data/some-dataset/", name="some-name")
```
## Expected results
The dataset should be loaded.
## Actual results
```
Traceback (most recent call last):
File "train_lquad.py", line 19, in <module>
load(tokenize_target_function, tokenize_target_function, {}, tokenizer)
File "train_lquad.py", line 14, in load
dataset = load_dataset("./data/lquad/", name="lquad")
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1708, in load_dataset
builder_instance = load_dataset_builder(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1560, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 269, in __init__
self.config, self.config_id = self._create_builder_config(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 403, in _create_builder_config
raise ValueError(f"BuilderConfig must have a name, got {builder_config.name}")
ValueError: BuilderConfig must have a name, got
```
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.6
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
The error is probably in line 795 in load.py:
```
builder_kwargs = {
"hash": hash,
"data_files": data_files,
"name": os.path.basename(self.path),
"base_path": self.path,
**builder_kwargs,
}
```
`os.path.basename` for a directory returns an empty string, rather than the name of the directory.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4399/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4398/comments | https://api.github.com/repos/huggingface/datasets/issues/4398/events | https://github.com/huggingface/datasets/issues/4398 | 1,246,666,749 | I_kwDODunzps5KTp_9 | 4,398 | Calling `cast_column`/`remove_columns` and a sequence of `map` operations ends up making `faiss` fail with `ValueError` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"It works if we either remove the `ds = ds.cast_column(\"id\", Value(\"int32\"))` line from the code above, or if instead calling `ds.remove_columns()` we remove the columns inside each mapping as `ds.map(..., remove_columns=[...])` instead of right after the mapping.\r\n\r\nBoth of those solutions seem to fix the issue, so the root cause of it may be around that. Sorry I cannot provide you more insights, in case I get to fix it I'll submit a PR, in the meanwhile the code that I'm using as a workaround is the following:\r\n\r\n```python\r\nfrom transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\nimport torch\r\n\r\ntorch.set_grad_enabled(False)\r\nctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\nctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n\r\nfrom datasets import load_dataset, Value\r\n\r\nds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\nds = ds.cast_column(\"id\", Value(\"int32\"))\r\nds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n\r\ndef generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n\r\nds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\nds.add_faiss_index(column=\"embeddings\")\r\n```",
"FYI the main reason I want to use `dataset.remove_columns` rather than the function inside `dataset.map` is because according to the π€ Datasets documentation, it's faster.\r\n\r\n\"π€ Datasets also has a [Dataset.remove_columns()](https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/main_classes#datasets.Dataset.remove_columns) method that is functionally identical, but faster, because it doesnβt copy the data of the remaining columns.\"\r\n\r\nMore information at https://huggingface.co/docs/datasets/process#map",
"Here I'm presenting all the scenarios so that you can further investigate the issue:\r\n\r\n- β
`cast_column` -> `map` with `remove_columns` -> `map` with `remove_columns` -> `add_faiss_index`\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- β `cast_column` -> `map` -> `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- β `cast_column` -> `map` with `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- β
`cast_column` -> `map` -> `remove_columns` -> `map` with `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- β
`map` -> `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```",
"So on, I've created #4411 so as to fix the bug with `remove_columns` under certain conditions before `add_faiss_index`, which means that the scenarios not working above are already working fine."
] | 2022-05-24T14:41:34 | 2022-06-14T16:01:56 | 2022-06-14T16:01:56 | CONTRIBUTOR | null | First of all, sorry in advance for the unclear title, but this bug is weird to explain (at least for me), so I tried my best to summarize all the information in this issue.
## Describe the bug
Calling a certain combination of operations over a π€ `Dataset` and then trying to calculate the `faiss` index with `.add_faiss_index` ends up throwing an exception while trying to set the format back of a previously removed column. But this just happens over certain conditions... I'll present some scenarios below!
## Steps to reproduce the bug
Assuming the following dataset named `sample.csv` with some IMDb data:
```csv
id,title,summary
1877830,"The Batman","When a sadistic serial killer begins murdering key political figures in Gotham, Batman is forced to investigate the city's hidden corruption and question his family's involvement."
9419884,"Doctor Strange in the Multiverse of Madness","Doctor Strange teams up with a mysterious teenage girl from his dreams who can travel across multiverses, to battle multiple threats, including other-universe versions of himself, which threaten to wipe out millions across the multiverse. They seek help from Wanda the Scarlet Witch, Wong and others."
11138512,"The Northman","From visionary director Robert Eggers comes The Northman, an action-filled epic that follows a young Viking prince on his quest to avenge his father's murder."
1745960,"Top Gun: Maverick","After more than thirty years of service as one of the Navy's top aviators, Pete Mitchell is where he belongs, pushing the envelope as a courageous test pilot and dodging the advancement in rank that would ground him."
```
We'll be able to reproduce the bug using the following piece of code:
```python
# Sample code to reproduce the bug
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
import torch
torch.set_grad_enabled(False)
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
from datasets import load_dataset, Value
ds = load_dataset("csv", data_files=["sample.csv"], split="train")
ds = ds.cast_column("id", Value("int32")) # from `int64` to `int32`
ds = ds.map(lambda x: {"inputs": f"{ctx_tokenizer.sep_token}".join(["title", "summary"])})
ds = ds.remove_columns(["title", "summary"])
def generate_embeddings(x):
return {"embeddings": ctx_encoder(**ctx_tokenizer(x["inputs"], return_tensors="pt"))[0][0].numpy()}
ds = ds.map(generate_embeddings)
ds = ds.remove_columns("inputs")
ds.add_faiss_index(column="embeddings") # It fails here!
```
The code above is an adaptation of https://huggingface.co/docs/datasets/faiss_es, for the sake of presenting the bug with a simple example.
## Expected results
Ideally, the `faiss` index should be calculated over the π€ `Dataset` and no exception should be triggered.
## Actual results
But what happens instead is that a `ValueError: Columns ['inputs'] not in the dataset. Current columns in the dataset: ['id', 'embeddings']`, which makes no sense as that column has been previously dropped.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4398/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4397/comments | https://api.github.com/repos/huggingface/datasets/issues/4397/events | https://github.com/huggingface/datasets/pull/4397 | 1,246,597,632 | PR_kwDODunzps44XcG3 | 4,397 | Fix dependency on dill version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-24T13:54:23 | 2022-10-26T08:45:37 | 2022-05-25T13:54:08 | MEMBER | null | We had to make a hotfix by pinning dill:
- #4380
because from version 0.3.5, our custom `save_function` pickling function was raising an exception:
- #4379
This PR fixes this by implementing our custom `save_function` depending on the version of dill.
CC: @anivegesana
This PR needs first being merged:
- [x] #4384
- so that a circular import is fixed
It is also convenient to merge first:
- [x] #4385 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4397/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4397",
"html_url": "https://github.com/huggingface/datasets/pull/4397",
"diff_url": "https://github.com/huggingface/datasets/pull/4397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4397.patch",
"merged_at": "2022-05-25T13:54:08"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4396/comments | https://api.github.com/repos/huggingface/datasets/issues/4396/events | https://github.com/huggingface/datasets/pull/4396 | 1,245,479,399 | PR_kwDODunzps44T0Di | 4,396 | Fix URL in gem dataset for totto config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T17:16:12 | 2022-05-24T05:49:11 | 2022-05-24T05:41:00 | MEMBER | null | As commented in:
- https://github.com/huggingface/datasets/issues/4386#issuecomment-1134902372
CC: @StevenTang1998 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4396/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4396",
"html_url": "https://github.com/huggingface/datasets/pull/4396",
"diff_url": "https://github.com/huggingface/datasets/pull/4396.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4396.patch",
"merged_at": "2022-05-24T05:40:59"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4395/comments | https://api.github.com/repos/huggingface/datasets/issues/4395/events | https://github.com/huggingface/datasets/pull/4395 | 1,245,436,486 | PR_kwDODunzps44TrBA | 4,395 | Add Pascal VOC dataset | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Some CI fails are unrelated to your PR and fixed on master, feel free to merge master into your branch :)",
"Thanks @nateraw for the addition of this dataset.\r\n\r\nI would suggest to transfer it to the Hugging Face Hub, under a \"pascal\" organization namespace: \"pascal/voc\".\r\n\r\nWhat do you think?",
"FYI I think this dataset is also available at (internal) https://huggingface.co/datasets/HuggingFaceM4/pascal_voc",
"@lhoestq @albertvillanova what do you think best path forward is? No idea when I'll get to looking at this again, but would be nice to know plan so when I find time I can just get it done in one sitting. ",
"My (not strong) opinion on this:\r\n- as we are removing dataset scripts from GitHub, this dataset should be created directly on the Hub\r\n- I proposed doing it under some kind of \"official\" org namespace, like pascal or pascal2; other suggestions are welcome\r\n- the link given by @lhoestq might serve as inspiration for your implementation (I think yours misses data about action classification): their implementation comprises tasks: classification/detection, segmentation, action classification, person layout; it misses other tasks though\r\n\r\nWhat do you think?"
] | 2022-05-23T16:34:05 | 2022-10-03T09:39:08 | 2022-10-03T09:36:56 | CONTRIBUTOR | null | This PR adds the Pascal VOC dataset in the same way TFDS has it added. I believe we can iterate on this dataset and in future versions include more data, such as segmentation masks, but for now I think it is a good idea to just add it the same way as TFDS to get a solid first version out there. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4395/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4395",
"html_url": "https://github.com/huggingface/datasets/pull/4395",
"diff_url": "https://github.com/huggingface/datasets/pull/4395.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4395.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4394/comments | https://api.github.com/repos/huggingface/datasets/issues/4394/events | https://github.com/huggingface/datasets/issues/4394 | 1,245,221,657 | I_kwDODunzps5KOJMZ | 4,394 | trainer became extremely slow after reload dataset by `load_from_disk` | {
"login": "conan1024hao",
"id": 50416856,
"node_id": "MDQ6VXNlcjUwNDE2ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/50416856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conan1024hao",
"html_url": "https://github.com/conan1024hao",
"followers_url": "https://api.github.com/users/conan1024hao/followers",
"following_url": "https://api.github.com/users/conan1024hao/following{/other_user}",
"gists_url": "https://api.github.com/users/conan1024hao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/conan1024hao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conan1024hao/subscriptions",
"organizations_url": "https://api.github.com/users/conan1024hao/orgs",
"repos_url": "https://api.github.com/users/conan1024hao/repos",
"events_url": "https://api.github.com/users/conan1024hao/events{/privacy}",
"received_events_url": "https://api.github.com/users/conan1024hao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"I tried to make the dataset much more smaller (100000 rows) , then the speed became `33.88it/s` from`8.62s/it`. It's nearly 200 times... Do you have any idea? Thank you!",
"Similar issue: https://github.com/huggingface/transformers/issues/8818\r\n\r\nI changed `RandomSampler` to `SequentialSampler` in the `trainer.py`, but the speed didn't become faster.",
"I changed\r\n```\r\ntokenized_datasets = load_from_disk(\r\n \"/pathto/dataset\"\r\n )\r\n```\r\nto\r\n```\r\ntokenized_datasets = load_from_disk(\r\n \"/pathto/dataset\", keep_in_memory=True\r\n )\r\n```\r\nand obtained normal speed. It's seems that the problem is on the os's speed limit.",
"Hi ! Currently `save_to_disk` saves one big Arrow file, which causes some slow downs. This has been discussed in #3735 and we'll implement sharding pretty soon to solve this\r\n\r\nFor now you can try splitting and saving your dataset in several Arrow files. Then you can load them one by one and use `concatenate_datasets` to have your big dataset again and hopefully with a better speed"
] | 2022-05-23T14:04:37 | 2022-06-06T16:08:01 | null | NONE | null | ## Describe the bug
Due to memory problem, I need to save my tokenized datasets locally by CPU and reload it by multi GPU for running training script. However, after I reload it by `load_from_disk` and start training, the speed is extremely slow. It says I need about 1500 hours with 8 A100 cards. Before this, I can run the whole script in one day with a single A100 card.
Since I am try to pre-train a BERT, **my dataset is very large(29058165 rows)**
## Steps to reproduce the bug
```python
tokenized_datasets.save_to_disk(
"/pathto/dataset"
)
tokenized_datasets = load_from_disk(
"/pathto/dataset"
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"] if training_args.do_train else None,
eval_dataset=tokenized_datasets["validation"]
if training_args.do_eval
else None,
tokenizer=tokenizer,
data_collator=data_collator,
)
train_result = trainer.train(resume_from_checkpoint=checkpoint)
```
## Expected results
Without the save and reload process, I only need about one day to run the whole script with one A100 card.
## Actual results
```
[INFO|trainer.py:1290] 2022-05-23 22:49:46,266 >> ***** Running training *****
[INFO|trainer.py:1291] 2022-05-23 22:49:46,266 >> Num examples = 29058165
[INFO|trainer.py:1292] 2022-05-23 22:49:46,266 >> Num Epochs = 5
[INFO|trainer.py:1293] 2022-05-23 22:49:46,266 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1294] 2022-05-23 22:49:46,266 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1295] 2022-05-23 22:49:46,266 >> Gradient Accumulation steps = 2
[INFO|trainer.py:1296] 2022-05-23 22:49:46,266 >> Total optimization steps = 567540
0%| | 1/567540 [00:09<1544:49:04, 9.80s/it]
0%| | 2/567540 [00:17<1320:00:17, 8.37s/it]
0%| | 3/567540 [00:26<1393:10:17, 8.84s/it]
0%| | 4/567540 [00:34<1344:56:33, 8.53s/it]
0%| | 5/567540 [00:43<1359:36:12, 8.62s/it]
```
## Environment info
```
torch 1.11.0+cu113
torchaudio 0.11.0+cu113
torchvision 0.12.0+cu113
transformers 4.18.0
datasets 2.2.2
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4394/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4393/comments | https://api.github.com/repos/huggingface/datasets/issues/4393/events | https://github.com/huggingface/datasets/pull/4393 | 1,244,876,662 | PR_kwDODunzps44RxWN | 4,393 | Update CI deprecated legacy image | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T09:35:42 | 2022-05-23T10:08:28 | 2022-05-23T09:59:55 | MEMBER | null | Now our CI still uses a deprecated legacy image:
> Youβre using a [deprecated Docker convenience image.](https://discuss.circleci.com/t/legacy-convenience-image-deprecation/41034) Upgrade to a next-gen Docker convenience image.
This PR updates to next-generation convenience image.
Related to:
- #2955 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4393/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4393",
"html_url": "https://github.com/huggingface/datasets/pull/4393",
"diff_url": "https://github.com/huggingface/datasets/pull/4393.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4393.patch",
"merged_at": "2022-05-23T09:59:55"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4392/comments | https://api.github.com/repos/huggingface/datasets/issues/4392/events | https://github.com/huggingface/datasets/pull/4392 | 1,244,859,971 | PR_kwDODunzps44RtsX | 4,392 | remove int documentation from logging docs | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T09:24:55 | 2022-05-23T15:16:55 | 2022-05-23T15:08:32 | MEMBER | null | Removes the `int` documentation from the [logging section](https://huggingface.co/docs/datasets/package_reference/logging_methods#levels) of the docs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4392/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4392",
"html_url": "https://github.com/huggingface/datasets/pull/4392",
"diff_url": "https://github.com/huggingface/datasets/pull/4392.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4392.patch",
"merged_at": "2022-05-23T15:08:32"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4391 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4391/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4391/comments | https://api.github.com/repos/huggingface/datasets/issues/4391/events | https://github.com/huggingface/datasets/pull/4391 | 1,244,839,185 | PR_kwDODunzps44RpGv | 4,391 | Refactor column mappings for question answering datasets | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks.\r\n> \r\n> I have no visibility about this, but if you say it is more useful for AutoTrain this way...\r\n\r\nThanks for the review @albertvillanova ! Yes, I need some way to reconstruct the original column names with a period because that's how they appear after we flatten the nested columns. In any case, we can adjust this later if needed :)",
"Does that mean that we need to change the metadata?",
"> Does that mean that we need to change the metadata?\r\n\r\nYes, but this PR takes care of it :)",
"Oh good! thanks for the heads up!"
] | 2022-05-23T09:13:14 | 2022-05-24T12:57:00 | 2022-05-24T12:48:48 | MEMBER | null | This PR tweaks the keys in the metadata that are used to define the column mapping for question answering datasets. This is needed in order to faithfully reconstruct column names like `answers.text` and `answers.answer_start` from the keys in AutoTrain.
As observed in https://github.com/huggingface/datasets/pull/4367 we cannot use periods `.` in the keys of the YAML tags, so a decision was made to use a flat mapping with underscores. For QA datasets, however, it's handy to be able to reconstruct the nesting -- hence this PR.
cc @sashavor | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4391/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4391",
"html_url": "https://github.com/huggingface/datasets/pull/4391",
"diff_url": "https://github.com/huggingface/datasets/pull/4391.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4391.patch",
"merged_at": "2022-05-24T12:48:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4390/comments | https://api.github.com/repos/huggingface/datasets/issues/4390/events | https://github.com/huggingface/datasets/pull/4390 | 1,244,835,877 | PR_kwDODunzps44RoXs | 4,390 | Fix metadata validation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T09:11:20 | 2022-06-01T09:27:52 | 2022-06-01T09:19:25 | MEMBER | null | Since Python 3.8, the typing module:
- raises an AttributeError when trying to access `__args__` on any type, e.g.: `List.__args__`
- provides the `get_args` function instead: `get_args(List)`
This PR implements a fix for Python >=3.8 whereas maintaining backward compatibility. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4390/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4390",
"html_url": "https://github.com/huggingface/datasets/pull/4390",
"diff_url": "https://github.com/huggingface/datasets/pull/4390.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4390.patch",
"merged_at": "2022-06-01T09:19:25"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4389/comments | https://api.github.com/repos/huggingface/datasets/issues/4389/events | https://github.com/huggingface/datasets/pull/4389 | 1,244,693,690 | PR_kwDODunzps44RKMn | 4,389 | Fix bug in gem dataset for wiki_auto_asset_turk config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T07:19:49 | 2022-05-23T10:38:26 | 2022-05-23T10:29:55 | MEMBER | null | This PR fixes some URLs.
Fix #4386. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4389/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4389",
"html_url": "https://github.com/huggingface/datasets/pull/4389",
"diff_url": "https://github.com/huggingface/datasets/pull/4389.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4389.patch",
"merged_at": "2022-05-23T10:29:55"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4388/comments | https://api.github.com/repos/huggingface/datasets/issues/4388/events | https://github.com/huggingface/datasets/pull/4388 | 1,244,645,158 | PR_kwDODunzps44RAG1 | 4,388 | Set builder name from module instead of class | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T06:26:35 | 2022-05-25T05:24:43 | 2022-05-25T05:16:15 | MEMBER | null | Now the builder name attribute is set from from the builder class name.
This PR sets the builder name attribute from the module name instead. Some motivating reasons:
- The dataset ID is relevant and unique among all datasets and this is directly related to the repository name, i.e., the name of the directory containing the dataset
- The name of the module (i.e. the file containing the loading loading script) is already relevant for loading: it must have the same name as its containing directory (related to the dataset ID), as we search for it using its directory name
- On the other hand, the name of the builder class is not relevant for loading: in our code, we just search for a class which is subclass of `DatasetBuilder` (independently of its name). We do not put any constraint on the naming of the builder class and indeed it can have a name completely different from its module/direcotry/dataset_id
IMO it makes more sense to align the caching directory name with the dataset_id/directory/module name instead of the builder class name.
Fix #4381. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4388/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4388/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4388",
"html_url": "https://github.com/huggingface/datasets/pull/4388",
"diff_url": "https://github.com/huggingface/datasets/pull/4388.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4388.patch",
"merged_at": "2022-05-25T05:16:15"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4387/comments | https://api.github.com/repos/huggingface/datasets/issues/4387/events | https://github.com/huggingface/datasets/issues/4387 | 1,244,147,817 | I_kwDODunzps5KKDBp | 4,387 | device/google/accessory/adk2012 - Git at Google | {
"login": "Aeckard45",
"id": 87345839,
"node_id": "MDQ6VXNlcjg3MzQ1ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aeckard45",
"html_url": "https://github.com/Aeckard45",
"followers_url": "https://api.github.com/users/Aeckard45/followers",
"following_url": "https://api.github.com/users/Aeckard45/following{/other_user}",
"gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions",
"organizations_url": "https://api.github.com/users/Aeckard45/orgs",
"repos_url": "https://api.github.com/users/Aeckard45/repos",
"events_url": "https://api.github.com/users/Aeckard45/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aeckard45/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-05-22T04:57:19 | 2022-05-23T06:36:27 | 2022-05-23T06:36:27 | NONE | null | "git clone https://android.googlesource.com/device/google/accessory/adk2012"
https://android.googlesource.com/device/google/accessory/adk2012/#:~:text=git%20clone%20https%3A//android.googlesource.com/device/google/accessory/adk2012 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4387/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4386/comments | https://api.github.com/repos/huggingface/datasets/issues/4386/events | https://github.com/huggingface/datasets/issues/4386 | 1,243,965,532 | I_kwDODunzps5KJWhc | 4,386 | Bug for wiki_auto_asset_turk from GEM | {
"login": "StevenTang1998",
"id": 37647985,
"node_id": "MDQ6VXNlcjM3NjQ3OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StevenTang1998",
"html_url": "https://github.com/StevenTang1998",
"followers_url": "https://api.github.com/users/StevenTang1998/followers",
"following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}",
"gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions",
"organizations_url": "https://api.github.com/users/StevenTang1998/orgs",
"repos_url": "https://api.github.com/users/StevenTang1998/repos",
"events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/StevenTang1998/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @StevenTang1998.\r\n\r\nI'm looking into it. ",
"Hi @StevenTang1998,\r\n\r\nWe have fixed the issue:\r\n- #4389\r\n\r\nThe fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by installing `datasets` from our GitHub repo:\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```",
"Thanks for your reply!!\r\nAnd the totto dataset has the same problem. The url should be change to [https://storage.googleapis.com/totto-public/totto_data.zip](https://storage.googleapis.com/totto-public/totto_data.zip).",
"Hi again @StevenTang1998,\r\n\r\nI don't see any problem when loading `totto` dataset:\r\n```python\r\nIn [4]: import datasets\r\n ...: ds = datasets.load_dataset(\"totto\")\r\nDownloading builder script: 5.58kB [00:00, 5.33MB/s] \r\nDownloading metadata: 2.78kB [00:00, 2.96MB/s] \r\nUsing custom data configuration default\r\nDownloading and preparing dataset totto/default (download: 179.03 MiB, generated: 706.59 MiB, post-processed: Unknown size, total: 885.62 MiB) to .../.cache/huggingface/datasets/totto/default/1.0.0/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2...\r\nDownloading data: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 188M/188M [00:32<00:00, 5.77MB/s]\r\nDataset totto downloaded and prepared to .../.cache/huggingface/datasets/totto/default/1.0.0/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2. Subsequent calls will reuse this data.\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [00:00<00:00, 147.95it/s]\r\n\r\nIn [5]: ds\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 120761\r\n })\r\n validation: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n test: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n})\r\n```",
"Sorry, I didn't express it clearly. It's the totto dataset from gem.\r\ndatasets.load_dataset('gem', 'totto')\r\n",
"@StevenTang1998 fixed in:\r\n- #4396",
"Thanks!!"
] | 2022-05-21T12:31:30 | 2022-05-24T05:55:52 | 2022-05-23T10:29:55 | NONE | null | ## Describe the bug
The script of wiki_auto_asset_turk for GEM may be out of date.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('gem', 'wiki_auto_asset_turk')
```
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1731, in load_dataset
builder_instance.download_and_prepare(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 640, in download_and_prepare
self._download_and_prepare(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 1158, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 707, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/tangtianyi/.cache/huggingface/modules/datasets_modules/datasets/gem/982a54473b12c6a6e40d4356e025fb7172a5bb2065e655e2c1af51f2b3cf4ca1/gem.py", line 538, in _split_generators
dl_dir = dl_manager.download_and_extract(_URLs[self.config.name])
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 416, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 294, in download
downloaded_path_or_paths = map_nested(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 351, in map_nested
mapped = [
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 352, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 288, in _single_map_nested
return function(data_struct)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 320, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 234, in cached_path
output_path = get_from_cache(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 579, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.orig
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4386/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4385/comments | https://api.github.com/repos/huggingface/datasets/issues/4385/events | https://github.com/huggingface/datasets/pull/4385 | 1,243,921,287 | PR_kwDODunzps44OwXF | 4,385 | Test dill | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I should point out that the hash will be the same if computed twice with the same code on the same version of dill (after adding huggingface's code that removes line numbers and file names, and sorts globals.) My changes in dill 0.3.5 and ones that I will make in 0.3.6 will result in different pickles than the ones dill 0.3.4 was making. This should still be fine for caching.",
"Just some comments @lhoestq:\r\n\r\nThe best practice for testing is to have a `test_<filename>.py` for each `<filename>.py`. Therefore in order to have the filenames aligned, I would propose:\r\n- either renaming `fingerprint.py` to `caching.py`\r\n- or renaming `test_caching.py` to `test_fingerprint.py`\r\n\r\nOn the other hand, my idea when implementing this test was not to test all the functionalities of the `Hasher`, but just to have a regression test that fails if dill version is > 0.3.4 and the pin in our `setup.py` is not present. Just recall that we had no failing test in our CI when the issue with dill was found on `transformers`.\r\n\r\nThe objective of this PR is just to have a regression test for that case: I tested and I got `AttributeError: module 'dill._dill' has no attribute 'stack'`\r\n\r\nFor this regression test, I took into account this comment by @gugarosa: https://github.com/huggingface/datasets/issues/4379#issuecomment-1133131825\r\n\r\nThere is no equivalent test in `test_caching.py` because our CI did not fail before pinning dill.",
"Ok I see, renaming it to `test_fingerprint.py` sounds like a good idea :)"
] | 2022-05-21T08:57:43 | 2022-05-25T08:30:13 | 2022-05-25T08:21:48 | MEMBER | null | Regression test for future releases of `dill`.
Related to #4379. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4385/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4385",
"html_url": "https://github.com/huggingface/datasets/pull/4385",
"diff_url": "https://github.com/huggingface/datasets/pull/4385.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4385.patch",
"merged_at": "2022-05-25T08:21:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4384 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4384/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4384/comments | https://api.github.com/repos/huggingface/datasets/issues/4384/events | https://github.com/huggingface/datasets/pull/4384 | 1,243,919,748 | PR_kwDODunzps44OwFr | 4,384 | Refactor download | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This looks like a breaking change no ?\r\nAlso could you explain why it would be better this way ?",
"The might be only there to help type checkers, but I am not too familiar with the code base to know for sure. I think this might be useful:\n\nhttps://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING",
"> This looks like a breaking change no ?\r\n> Also could you explain why it would be better this way ?\r\n\r\nSorry, @lhoestq, I naively thought it was obvious. I have tried to give some arguments in the motivation of this PR (see above). I can give additional arguments if needed. "
] | 2022-05-21T08:49:24 | 2022-05-25T10:52:02 | 2022-05-25T10:43:43 | MEMBER | null | This PR performs a refactoring of the download functionalities, by proposing a modular solution and moving them to their own package "download". Some motivating arguments:
- understandability: from a logical partitioning of the library, it makes sense to have all download functionalities grouped together instead of scattered in a much larger directory containing many more different functionalities
- abstraction: the level of abstraction of "download" (higher) is not the same as "utils" (lower); putting different levels of abstraction together, makes dependencies more intricate (potential circular dependencies) and the system more tightly coupled; when the levels of abstraction are clearly separated, the dependencies flow in a neat direction from higher to lower
- architectural: "download" is a domain-specific functionality of our library/application (a dataset builder performs several actions: download, generate dataset and cache it); these functionalities are at the core of our library; on the other hand, "utils" are always a low-level set of functionalities, not directly related to our domain/business core logic (all libraries have "utils"), thus at the periphery of our lib architecture
Also note that when a library is not architecturally designed following simple, neat, clean principles, this has a negative impact on extensibility, making more and more difficult to make enhancements.
As a concrete example in this case, please see: https://app.circleci.com/pipelines/github/huggingface/datasets/12185/workflows/ff25a790-8e3f-45a1-aadd-9d79dfb73c4d/jobs/72860
- After an extension, a circular import is found
- Diving into the cause of this circular import, see the dependency flow, which should be from higher to lower levels of abstraction:
```
ImportError while loading conftest '/home/circleci/datasets/tests/conftest.py'.
tests/conftest.py:12: in <module>
import datasets
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/__init__.py:37: in <module>
from .arrow_dataset import Dataset, concatenate_datasets
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/arrow_dataset.py:59: in <module>
from . import config
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/config.py:8: in <module>
from .utils.logging import get_logger
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/__init__.py:30: in <module>
from .download_manager import DownloadConfig, DownloadManager, DownloadMode
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/download_manager.py:39: in <module>
from .py_utils import NestedDataStructure, map_nested, size_str
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/py_utils.py:608: in <module>
if config.DILL_VERSION < version.parse("0.3.5"):
E AttributeError: module 'datasets.config' has no attribute 'DILL_VERSION'
```
Imports:
- datasets
- Dataset: lower level than datasets
- config: lower level than Dataset
- logger: lower level than config
- DownloadManager: !!! HIGHER level of abstraction than logger!!
Why when importing logger we require importing DownloadManager?!?
- Logically, it does not make sense
- This is due to an error in the design/architecture of our library:
- To import the logger, we need to import it from `.utils.logging`
- To import `.utils.logging` we need to import `.utils`
- The import of `.utils` require the import of all its submodules defined in `utils.__init__.py`, among them: `.utils.download_manager`!
When putting `logging` and `download_manager` both inside `utils`, in order to import `logging` we need to import `download_manager` first: this is a strong coupling between modules and moreover between modules at different levels of abstraction (to import a lower level module, we require to import a higher level module). Additionally, it is clear that is makes no sense that in order to import `logging` we require to import `download_manager` first. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4384/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4384",
"html_url": "https://github.com/huggingface/datasets/pull/4384",
"diff_url": "https://github.com/huggingface/datasets/pull/4384.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4384.patch",
"merged_at": "2022-05-25T10:43:43"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4383/comments | https://api.github.com/repos/huggingface/datasets/issues/4383/events | https://github.com/huggingface/datasets/issues/4383 | 1,243,856,981 | I_kwDODunzps5KI8BV | 4,383 | L | {
"login": "AronCodes21",
"id": 99847861,
"node_id": "U_kgDOBfOOtQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99847861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AronCodes21",
"html_url": "https://github.com/AronCodes21",
"followers_url": "https://api.github.com/users/AronCodes21/followers",
"following_url": "https://api.github.com/users/AronCodes21/following{/other_user}",
"gists_url": "https://api.github.com/users/AronCodes21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AronCodes21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AronCodes21/subscriptions",
"organizations_url": "https://api.github.com/users/AronCodes21/orgs",
"repos_url": "https://api.github.com/users/AronCodes21/repos",
"events_url": "https://api.github.com/users/AronCodes21/events{/privacy}",
"received_events_url": "https://api.github.com/users/AronCodes21/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 2022-05-21T03:47:58 | 2022-05-21T19:20:13 | 2022-05-21T19:20:13 | NONE | null | ## Describe the L
L
## Expected L
A clear and concise lmll
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4383/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4382/comments | https://api.github.com/repos/huggingface/datasets/issues/4382/events | https://github.com/huggingface/datasets/issues/4382 | 1,243,839,783 | I_kwDODunzps5KI30n | 4,382 | First time trying | {
"login": "Aeckard45",
"id": 87345839,
"node_id": "MDQ6VXNlcjg3MzQ1ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aeckard45",
"html_url": "https://github.com/Aeckard45",
"followers_url": "https://api.github.com/users/Aeckard45/followers",
"following_url": "https://api.github.com/users/Aeckard45/following{/other_user}",
"gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions",
"organizations_url": "https://api.github.com/users/Aeckard45/orgs",
"repos_url": "https://api.github.com/users/Aeckard45/repos",
"events_url": "https://api.github.com/users/Aeckard45/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aeckard45/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 2022-05-21T02:15:18 | 2022-05-21T19:20:44 | 2022-05-21T19:20:44 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4382/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4381/comments | https://api.github.com/repos/huggingface/datasets/issues/4381/events | https://github.com/huggingface/datasets/issues/4381 | 1,243,478,863 | I_kwDODunzps5KHftP | 4,381 | Bug in caching 2 datasets both with the same builder class name | {
"login": "NouamaneTazi",
"id": 29777165,
"node_id": "MDQ6VXNlcjI5Nzc3MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NouamaneTazi",
"html_url": "https://github.com/NouamaneTazi",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions",
"organizations_url": "https://api.github.com/users/NouamaneTazi/orgs",
"repos_url": "https://api.github.com/users/NouamaneTazi/repos",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/NouamaneTazi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @NouamaneTazi, thanks for reporting.\r\n\r\nPlease note that both datasets are cached in the same directory because their loading builder classes have the same name: `class MTOP(datasets.GeneratorBasedBuilder)`.\r\n\r\nYou should name their builder classes differently, e.g.:\r\n- `MtopDomain`\r\n- `MtopIntent`",
"Hi @NouamaneTazi, please note that after our fix:\r\n- #4388\r\n\r\nwe do not consider the class name anymore, but the name of the file where the loading builder class is implemented. "
] | 2022-05-20T18:18:03 | 2022-06-02T08:18:37 | 2022-05-25T05:16:15 | MEMBER | null | ## Describe the bug
The two datasets `mteb/mtop_intent` and `mteb/mtop_domain `use both the same cache folder `.cache/huggingface/datasets/mteb___mtop`. So if you first load `mteb/mtop_intent` then datasets will not load `mteb/mtop_domain`.
If you delete this cache folder and flip the order how you load the two datasets , you will get the opposite datasets loaded (difference is here in terms of the label and label_text).
## Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("mteb/mtop_intent", "en")
print(dataset['train'][0])
dataset = datasets.load_dataset("mteb/mtop_domain", "en")
print(dataset['train'][0])
```
## Expected results
```
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_intent/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [00:00<00:00, 920.14it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_domain/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [00:00<00:00, 1307.59it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 0, 'label_text': 'messaging'}
```
## Actual results
```
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [00:00<00:00, 920.14it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [00:00<00:00, 1307.59it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.1
- Platform: macOS-12.1-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4381/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4380 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4380/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4380/comments | https://api.github.com/repos/huggingface/datasets/issues/4380/events | https://github.com/huggingface/datasets/pull/4380 | 1,243,183,054 | PR_kwDODunzps44MUz0 | 4,380 | Pin dill | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-20T13:54:19 | 2022-06-13T10:03:52 | 2022-05-20T16:33:04 | MEMBER | null | Hotfix #4379.
CC: @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4380/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4380/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4380",
"html_url": "https://github.com/huggingface/datasets/pull/4380",
"diff_url": "https://github.com/huggingface/datasets/pull/4380.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4380.patch",
"merged_at": "2022-05-20T16:33:04"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4379 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4379/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4379/comments | https://api.github.com/repos/huggingface/datasets/issues/4379/events | https://github.com/huggingface/datasets/issues/4379 | 1,243,175,854 | I_kwDODunzps5KGVuu | 4,379 | Latest dill release raises exception | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Fixed by:\r\n- #4380 ",
"Just an additional insight, the latest dill (either 0.3.5 or 0.3.5.1) also broke the hashing/fingerprinting of any mapping function.\r\n\r\nFor example:\r\n```\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"rotten_tomatoes\")\r\nd.map(lambda x: x)\r\n```\r\n\r\nReturns the standard non-dillable error:\r\n```\r\nParameter 'function'=<function <lambda> at 0x7fe7d18c9560> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly....\r\n```",
"@albertvillanova ExamplesTests.test_run_speech_recognition_seq2seq is in which file?",
"Thanks a lot @gugarosa for the insight: we will incorporate it in our CI as regression testing for future dill releases.",
"Hi @anivegesana, that test is in `transformers` library:\r\n- https://github.com/huggingface/transformers/blob/main/examples/pytorch/test_pytorch_examples.py#L449\r\n- https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py ",
"@albertvillanova\n\nI did a deep dive into @gugarosa's problem and found the issue and it might be related to the one @sgugger discovered. In dill 0.3.5(.1), I created a new `save_function` that fixes a bug in dill that prevented the pickling of recursive inner functions. It was a more complete solution to the problem that `dill._dill.stack` tried to solve in the internal API of dill. Since `dill._dill.stack` was no longer needed, I removed it. Since datasets copies the `save_function` directly from the dill API, it stops working with the new dill version since `dill._dill.stack` is no longer present and the `save_function` has been updated with new code.\r\n\r\nhttps://github.com/huggingface/datasets/blob/95193ae61e92aa537d0c65d37a1fd9d2393aae89/src/datasets/utils/py_utils.py#L607-L678\r\n\r\n~If the dill version is below 0.3.5, you should keep this function. If it is after, you would need to update your copy of `save_function` to use the code I introduced, or manually add a `stack` variable to `dill._dill` if it doesn't exist. Fortunately, in any version of Python 3.7+, dictionaries are always in insertion order and dill no longer supports Python 3.6 or older. So, any globals dictionary saved by dill 0.3.5+ will be deterministic given that the version of dill is held constant and this save_function is unnecessary for newer versions of dill.~\r\n\r\nAh. I see what is happening. I guess a different copy of the function code is needed that sorts the global variables by name.\r\n\r\n```py\r\nif dill.__version__.split('.') < ['0', '3', '5']:\r\n # current save_function code inside here\r\nelse:\r\n # new save_function code inside here with the following line inserted after creating the globals\r\n globs = {k: globs[k] for k in sorted(globs.keys())} \r\n```\r\n\r\nWill look into the test case @sgugger pointed out after that and verify if this is causing the problem.\r\n\r\nI am actually looking into rewriting the global variables code in uqfoundation/dill#466 and will keep this in mind and will try to create an easy way to modify the global variables in dill 0.3.6 (for example, sort them by key like datasets does).",
"Thanks a lot for your investigation @anivegesana.\r\n\r\nYes, we copied-pasted the old `save_function` function from `dill`, just adding a line to make deterministic the order of global variables `globs`. \r\n\r\nHowever, this function has changed a lot from version 0.3.5, after your PR (thank you for the fix in recursiveness, indeed):\r\n- uqfoundation/dill#443\r\n\r\nWe have to address this change.\r\n\r\nIf finally your PR to sort global variables is merged into dill 0.3.6, that will make our life easier, as the tweak will no longer be necessary. ;)\r\n\r\nI have included a regression test so that we are sure future releases of dill do not break `datasets`:\r\n- #4385 ",
"I should note that because Python 3.6 and older are now deprecated and Python 3.7 has insertion order dictionaries, the globals in dill will have a deterministic order, just not sorted. I would still keep it sorted like you have it to help with stability (for example, if someone reorders variables in a file, then sorting the globals would not invalidate the cache.)\n\nIt seems that the order is not quite deterministic in IPython. Huggingface datasets seems to do well in Jupyter regardless, so it is not a good idea to remove the sorting. uqfoundation/dill#19"
] | 2022-05-20T13:48:36 | 2022-05-21T15:53:26 | 2022-05-20T17:06:27 | MEMBER | null | ## Describe the bug
As reported by @sgugger, latest dill release is breaking things with Datasets.
```
______________ ExamplesTests.test_run_speech_recognition_seq2seq _______________
self = <multiprocess.pool.ApplyResult object at 0x7fa5981a1cd0>, timeout = None
def get(self, timeout=None):
self.wait(timeout)
if not self.ready():
raise TimeoutError
if self._success:
return self._value
else:
> raise self._value
E TypeError: '>' not supported between instances of 'NoneType' and 'float'
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4379/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4379/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4378/comments | https://api.github.com/repos/huggingface/datasets/issues/4378/events | https://github.com/huggingface/datasets/pull/4378 | 1,242,935,373 | PR_kwDODunzps44Lf2R | 4,378 | Tidy up license metadata for google_wellformed_query, newspop, sick | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"& thank you!"
] | 2022-05-20T10:16:12 | 2022-05-24T13:50:23 | 2022-05-24T13:10:27 | CONTRIBUTOR | null | Amend three licenses on datasets to fit naming convention (lower case, cc licenses include sub-version number). I think that's it - everything else on datasets looks great & super-searchable now! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4378/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4378",
"html_url": "https://github.com/huggingface/datasets/pull/4378",
"diff_url": "https://github.com/huggingface/datasets/pull/4378.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4378.patch",
"merged_at": "2022-05-24T13:10:27"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4377/comments | https://api.github.com/repos/huggingface/datasets/issues/4377/events | https://github.com/huggingface/datasets/pull/4377 | 1,242,746,186 | PR_kwDODunzps44K4OY | 4,377 | Fix checksum and bug in irc_disentangle dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-20T07:29:28 | 2022-05-20T09:34:36 | 2022-05-20T09:26:32 | MEMBER | null | There was a bug in filepath segment:
- wrong: `jkkummerfeld-irc-disentanglement-fd379e9`
- right: `jkkummerfeld-irc-disentanglement-35f0a40`
Also there was a bug in the checksum of the downloaded file.
This PR fixes these issues.
Fix partially #4376.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4377/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4377",
"html_url": "https://github.com/huggingface/datasets/pull/4377",
"diff_url": "https://github.com/huggingface/datasets/pull/4377.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4377.patch",
"merged_at": "2022-05-20T09:26:32"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4376/comments | https://api.github.com/repos/huggingface/datasets/issues/4376/events | https://github.com/huggingface/datasets/issues/4376 | 1,242,218,144 | I_kwDODunzps5KCr6g | 4,376 | irc_disentagle viewer error | {
"login": "labouz",
"id": 25671683,
"node_id": "MDQ6VXNlcjI1NjcxNjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/25671683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/labouz",
"html_url": "https://github.com/labouz",
"followers_url": "https://api.github.com/users/labouz/followers",
"following_url": "https://api.github.com/users/labouz/following{/other_user}",
"gists_url": "https://api.github.com/users/labouz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/labouz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/labouz/subscriptions",
"organizations_url": "https://api.github.com/users/labouz/orgs",
"repos_url": "https://api.github.com/users/labouz/repos",
"events_url": "https://api.github.com/users/labouz/events{/privacy}",
"received_events_url": "https://api.github.com/users/labouz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"DUPLICATED comment from https://github.com/huggingface/datasets/issues/3807:\r\n\r\nmy code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"irc_disentangle\", download_mode=\"force_redownload\")\r\n```\r\nhowever, it produces the same error\r\n```\r\n[38](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=37) if len(bad_urls) > 0:\r\n [39](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=38) error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> [40](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=39) raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n [41](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=40) logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/jkkummerfeld/irc-disentanglement/tarball/master']\r\n```\r\nI attempted to use the `ignore_verifications' as such:\r\n\r\n```\r\nds = datasets.load_dataset('irc_disentangle', download_mode=\"force_redownload\", ignore_verifications=True)\r\n\r\nDownloading builder script: 12.0kB [00:00, 5.92MB/s] \r\nDownloading metadata: 7.58kB [00:00, 3.48MB/s] \r\nNo config specified, defaulting to: irc_disentangle/ubuntu\r\nDownloading and preparing dataset irc_disentangle/ubuntu (download: 112.98 MiB, generated: 60.05 MiB, post-processed: Unknown size, total: 173.03 MiB) to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5...\r\nDownloading data: 118MB [00:09, 12.1MB/s] \r\n \r\nDataset irc_disentangle downloaded and prepared to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5. Subsequent calls will reuse this data.\r\n100%|ββββββββββ| 3/3 [00:00<00:00, 675.38it/s]\r\n```\r\nbut, this returns an empty set?\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\nnot sure what else to try at this point?\r\nThanks in advancedπ€",
"Thanks for reporting, @labouz. I'm addressing it. ",
"The issue with checksum and empty dataset has been fixed by:\r\n- #4377\r\n\r\nTo load the dataset, you should force the re-generation of the dataset from the downloaded file by passing `download_mode=\"reuse_cache_if_exists\"` to `load_dataset`.\r\n\r\nIn relation with the issue with the dataset viewer, first the dataset should be refactored to support streaming.",
"parfait!\r\nit works now, thank you π ",
"Hi there, \r\nI see this issue is closed, but I am wondering if there is any chance the source files have been moved since this fix? I am stumbling into the same NonMatchingChecksumError noted by lebouz's second post once 118MB of data has been downloaded, and have tried the solutions noted in the various fix checksum posts linked here and in other posts regarding passing in \"reuse_cache_if_exists\" to download_mode. Any suggestions? Thank you!\r\n\r\n"
] | 2022-05-19T19:15:16 | 2023-01-12T16:56:13 | 2022-06-02T08:20:00 | NONE | null | the dataviewer shows this message for "ubuntu" - "train", "test", and "validation" splits:
```
Server error
Status code: 400
Exception: ValueError
Message: Cannot seek streaming HTTP file
```
it appears to give the same message for the "channel_two" data as well.
I get a Checksums error when using `load_data()` with this dataset. Even with the `download_mode` and `ignore_verifications` options set. i referenced the issue here: https://github.com/huggingface/datasets/issues/3807 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4376/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4375/comments | https://api.github.com/repos/huggingface/datasets/issues/4375/events | https://github.com/huggingface/datasets/pull/4375 | 1,241,921,147 | PR_kwDODunzps44IMCS | 4,375 | Support DataLoader with num_workers > 0 in streaming mode | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright this is finally ready for review ! It's quite long I'm sorry, but it's not easy to disentangle everything ^^'\r\n\r\nThe main additions are in\r\n- src/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py\r\n- src/datasets/iterable_dataset.py\r\n- src/datasets/utils/patching.py",
"Added some comments and an error when lists have different lengths for sharding :)",
"Let's resolve the merge conflict and the CI error (if it's related to the changes), and I can review the PR again.",
"Feel free to review again :) The CI fail is unrelated to this PR and will be fixed by https://github.com/huggingface/datasets/pull/4472 (the hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos)",
"CI failures are unrelated to this PR - merging :)\r\n\r\n(CI fails are a mix of pip install fails and Hub fails)",
"@lhoestq you're our hero :)"
] | 2022-05-19T15:00:31 | 2022-07-04T16:05:14 | 2022-06-10T20:47:27 | MEMBER | null | ### Issue
It's currently not possible to properly stream a dataset using multiple `torch.utils.data.DataLoader` workers:
- the `TorchIterableDataset` can't be pickled and passed to the subprocesses: https://github.com/huggingface/datasets/issues/3950
- streaming extension is failing: https://github.com/huggingface/datasets/issues/3951
- `fsspec` doesn't work out of the box in subprocesses
### Solution in this PR
I fixed these to enable passing an `IterableDataset` to a `torch.utils.data.DataLoader` with `num_workers > 0`.
I also had to shard the `IterableDataset` to give each worker a shard, otherwise data would be duplicated. This is implemented in `TorchIterableDataset.__iter__` and uses the new `IterableDataset._iter_shard(shard_idx)` method
I also had to do a few changes the patching that enable streaming in dataset scripts:
- the patches are now always applied - not just for streaming mode. They're applied when a builder is instantiated
- I improved it to also check for renamed modules or attributes (ex: pandas vs pd)
- I grouped all the patches of pathlib.Path into a class `xPath`, so that `Path` outside of dataset scripts stay unchanged - otherwise I didn't change the content of the extended Path methods for streaming
- I fixed a bug with the `pd.read_csv` patch, opening the file in "rb" mode was missing and causing some datasets to not work in streaming mode, and compression inference was missing
### A few details regarding `fsspec` in multiprocessing
From https://github.com/fsspec/filesystem_spec/pull/963#issuecomment-1131709948 :
> Non-async instances might be safe in the forked child, if they hold no open files/sockets etc.; I'm not sure any implementations pass this test!
> If any async instance has been created, the newly forked processes must:
> 1. discard references to locks, threads and event loops and make new ones
> 2. not use any async fsspec instances from the parent process
> 3. clear all class instance caches
Therefore in a DataLoader's worker, I clear the reference to the loop and thread (1). We should be fine for 2 and 3 already since we don't use fsspec class instances from the parent process.
Fix https://github.com/huggingface/datasets/issues/3950
Fix https://github.com/huggingface/datasets/issues/3951
TODO:
- [x] fix tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4375/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4375",
"html_url": "https://github.com/huggingface/datasets/pull/4375",
"diff_url": "https://github.com/huggingface/datasets/pull/4375.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4375.patch",
"merged_at": "2022-06-10T20:47:26"
} | true |