| Column | Type | Lengths / values |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 46–51 |
| id | int64 | 600M–2.05B |
| node_id | stringlengths | 18–32 |
| number | int64 | 2–6.51k |
| title | stringlengths | 1–290 |
| user | dict | |
| labels | listlengths | 0–4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0–4 |
| milestone | dict | |
| comments | sequencelengths | 0–30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 3 values |
| active_lock_reason | float64 | |
| draft | float64 | 0–1 |
| pull_request | dict | |
| body | stringlengths | 0–228k |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | float64 | |
| state_reason | stringclasses | 3 values |
| is_pull_request | bool | 2 classes |
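The column summary above corresponds to what `datasets` exposes through `Dataset.features` once the dump is loaded. Below is a minimal sketch of how the summary could be reproduced; the repository id is hypothetical (substitute the actual Hub id under which this issues dump is published).

```python
from datasets import load_dataset

# Hypothetical identifier: replace with the actual Hub repo id of this dump.
ds = load_dataset("user/datasets-github-issues", split="train")

# Column types as summarized in the table above.
print(ds.features)

# Reproduce a range such as "number: int64, 2–6.51k" or "title: stringlengths 1–290".
numbers = ds["number"]
titles = ds["title"]
print(min(numbers), max(numbers))
print(min(len(t) for t in titles), max(len(t) for t in titles))
```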
https://api.github.com/repos/huggingface/datasets/issues/4829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4829/comments
https://api.github.com/repos/huggingface/datasets/issues/4829/events
https://github.com/huggingface/datasets/issues/4829
1,336,068,068
I_kwDODunzps5Posfk
4,829
Misalignment between card tag validation and docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "(Note that the doc is aligned with the hub validation rules, and the \"ground truth\" is the hub validation rules given that they apply to all datasets, not just the canonical ones)", "Instead of our own implementation, we now use `huggingface_hub`'s `DatasetCardData`, which has the correct type hint, so I think we can close this issue." ]
"2022-08-11T14:44:45Z"
"2023-07-21T15:38:02Z"
null
MEMBER
null
null
null
## Describe the bug As pointed out in another issue: https://github.com/huggingface/datasets/pull/4827#discussion_r943536284 the validation of the dataset card tags is not aligned with its documentation, e.g.: - implementation: `license: List[str]` - docs: `license: Union[str, List[str]]` They should be aligned. CC: @julien-c
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4829/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4829/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/41
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/41/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/41/comments
https://api.github.com/repos/huggingface/datasets/issues/41/events
https://github.com/huggingface/datasets/pull/41
611,739,219
MDExOlB1bGxSZXF1ZXN0NDEyODQzNDQy
41
[Load module] allow kwargs into load module
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[]
"2020-05-04T09:42:11Z"
"2020-05-04T19:39:07Z"
"2020-05-04T19:39:06Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/41.diff", "html_url": "https://github.com/huggingface/datasets/pull/41", "merged_at": "2020-05-04T19:39:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/41.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/41" }
Currently it is not possible to force a re-download of the dataset script. This simple change allows passing ``force_reload=True`` as ``builder_kwargs`` in the ``load.py`` function.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/41/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/41/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3577
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3577/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3577/comments
https://api.github.com/repos/huggingface/datasets/issues/3577/events
https://github.com/huggingface/datasets/issues/3577
1,102,598,241
I_kwDODunzps5BuFBh
3,577
Add The Mexican Emotional Speech Database (MESD)
{ "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/omarespejel", "id": 4755430, "login": "omarespejel", "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "repos_url": "https://api.github.com/users/omarespejel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "type": "User", "url": "https://api.github.com/users/omarespejel" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
open
false
null
[]
null
[]
"2022-01-13T23:49:36Z"
"2022-01-27T14:14:38Z"
null
NONE
null
null
null
## Adding a Dataset - **Name:** *The Mexican Emotional Speech Database (MESD)* - **Description:** *Contains 864 voice recordings with six different prosodies: anger, disgust, fear, happiness, neutral, and sadness. Furthermore, three voice categories are included: female adult, male adult, and child. * - **Paper:** *[Paper](https://ieeexplore.ieee.org/abstract/document/9629934/authors#authors)* - **Data:** *[link to the Github repository or current dataset location](https://data.mendeley.com/datasets/cy34mh68j9/3)* - **Motivation:** *Would add Spanish speech data to the HF datasets :) * Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3577/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3577/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/642/comments
https://api.github.com/repos/huggingface/datasets/issues/642/events
https://github.com/huggingface/datasets/pull/642
704,397,499
MDExOlB1bGxSZXF1ZXN0NDg5MzMwMDAx
642
Rename wnut fields
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-09-18T13:51:31Z"
"2020-09-18T17:18:31Z"
"2020-09-18T17:18:30Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/642.diff", "html_url": "https://github.com/huggingface/datasets/pull/642", "merged_at": "2020-09-18T17:18:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/642.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/642" }
As mentioned in #641, it would be cool to have it follow the naming of the other NER datasets.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/642/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/642/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4778
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4778/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4778/comments
https://api.github.com/repos/huggingface/datasets/issues/4778/events
https://github.com/huggingface/datasets/pull/4778
1,324,928,750
PR_kwDODunzps48dRPh
4,778
Update local loading script docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4778). All of your documentation changes will be reflected on that endpoint.", "I would rather have a section in the docs that explains how to modify the script of an existing dataset (`inspect_dataset` + modification + `load_dataset`) instead of focusing on the GH datasets bundled with the source (only applicable for devs).", "Good idea! I went with @mariosasko's suggestion to use `inspect_dataset` instead of cloning a dataset repository since it's a good opportunity to show off more of the library's lesser-known functions if that's ok with everyone :)", "One advantage of cloning the repo is that it fetches potential data files referenced inside a script using relative paths, so if we decide to use `inspect_dataset`, we should at least add a tip to explain this limitation and how to circumvent it.", "Oh you're right. Calling `load_dataset` on the modified script without having the files that come with it is not ideal. I agree it should be `git clone` instead - and inspect is for inspection only ^^'" ]
"2022-08-01T20:21:07Z"
"2022-08-23T16:32:26Z"
"2022-08-23T16:32:22Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4778.diff", "html_url": "https://github.com/huggingface/datasets/pull/4778", "merged_at": "2022-08-23T16:32:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/4778.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4778" }
This PR clarifies the local loading script section to include how to load a dataset after you've modified the local loading script (closes #4732).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4778/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4778/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5969
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5969/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5969/comments
https://api.github.com/repos/huggingface/datasets/issues/5969/events
https://github.com/huggingface/datasets/pull/5969
1,765,529,905
PR_kwDODunzps5Tcgq4
5,969
Add `encoding` and `errors` params to JSON loader
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006770 / 0.011353 (-0.004583) | 0.004143 / 0.011008 (-0.006865) | 0.098928 / 0.038508 (0.060420) | 0.044893 / 0.023109 (0.021783) | 0.302630 / 0.275898 (0.026732) | 0.368173 / 0.323480 (0.044693) | 0.005631 / 0.007986 (-0.002354) | 0.003397 / 0.004328 (-0.000931) | 0.075748 / 0.004250 (0.071497) | 0.062582 / 0.037052 (0.025530) | 0.329586 / 0.258489 (0.071097) | 0.362625 / 0.293841 (0.068784) | 0.033250 / 0.128546 (-0.095296) | 0.008880 / 0.075646 (-0.066766) | 0.329683 / 0.419271 (-0.089588) | 0.054426 / 0.043533 (0.010893) | 0.297940 / 0.255139 (0.042801) | 0.319796 / 0.283200 (0.036597) | 0.023296 / 0.141683 (-0.118387) | 1.462142 / 1.452155 (0.009987) | 1.495796 / 1.492716 (0.003079) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201771 / 0.018006 (0.183765) | 0.454514 / 0.000490 (0.454024) | 0.003333 / 0.000200 (0.003133) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028084 / 0.037411 (-0.009327) | 0.109452 / 0.014526 (0.094926) | 0.119200 / 0.176557 (-0.057357) | 0.180302 / 0.737135 (-0.556834) | 0.125653 / 0.296338 (-0.170686) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409819 / 0.215209 (0.194610) | 4.055117 / 2.077655 (1.977462) | 
1.855279 / 1.504120 (0.351159) | 1.655281 / 1.541195 (0.114086) | 1.687938 / 1.468490 (0.219448) | 0.528352 / 4.584777 (-4.056425) | 3.750250 / 3.745712 (0.004538) | 3.386741 / 5.269862 (-1.883121) | 1.572036 / 4.565676 (-2.993640) | 0.065125 / 0.424275 (-0.359150) | 0.011259 / 0.007607 (0.003652) | 0.513449 / 0.226044 (0.287405) | 5.139421 / 2.268929 (2.870492) | 2.316973 / 55.444624 (-53.127651) | 1.984109 / 6.876477 (-4.892368) | 2.127915 / 2.142072 (-0.014158) | 0.653238 / 4.805227 (-4.151989) | 0.142686 / 6.500664 (-6.357978) | 0.063666 / 0.075469 (-0.011803) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.185174 / 1.841788 (-0.656614) | 14.790282 / 8.074308 (6.715974) | 13.089222 / 10.191392 (2.897830) | 0.146055 / 0.680424 (-0.534369) | 0.017835 / 0.534201 (-0.516366) | 0.399598 / 0.579283 (-0.179685) | 0.425296 / 0.434364 (-0.009068) | 0.478552 / 0.540337 (-0.061786) | 0.579702 / 1.386936 (-0.807234) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006750 / 0.011353 (-0.004603) | 0.004156 / 0.011008 (-0.006853) | 0.074948 / 0.038508 (0.036440) | 0.043368 / 0.023109 (0.020259) | 0.355389 / 0.275898 (0.079491) | 0.429167 / 0.323480 (0.105687) | 0.003911 / 0.007986 (-0.004075) | 0.004340 / 0.004328 (0.000012) | 0.075940 / 0.004250 (0.071689) | 0.054293 / 0.037052 (0.017241) | 0.400317 / 0.258489 (0.141827) | 0.432001 / 0.293841 (0.138160) | 0.032340 / 0.128546 (-0.096206) | 0.008876 / 0.075646 (-0.066770) | 0.082284 / 0.419271 (-0.336987) | 0.050819 / 0.043533 (0.007286) | 0.351994 / 0.255139 (0.096855) | 0.375917 / 0.283200 (0.092717) | 0.022466 / 0.141683 (-0.119217) | 1.538824 / 1.452155 (0.086669) | 1.563995 / 1.492716 (0.071279) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227330 / 0.018006 (0.209323) | 0.446380 / 0.000490 (0.445890) | 0.000408 / 0.000200 (0.000208) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028534 / 0.037411 (-0.008878) | 0.113467 / 0.014526 (0.098941) | 0.123590 / 0.176557 (-0.052966) | 0.174309 / 0.737135 (-0.562827) | 0.130631 / 0.296338 (-0.165707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441020 / 0.215209 (0.225811) | 4.386564 / 2.077655 (2.308909) | 2.100704 / 1.504120 (0.596584) | 1.901484 / 1.541195 (0.360289) | 1.963494 / 1.468490 (0.495004) | 0.536838 / 4.584777 (-4.047939) | 3.739071 / 3.745712 (-0.006642) | 3.278981 / 5.269862 (-1.990881) | 1.515476 / 4.565676 (-3.050201) | 0.066388 / 0.424275 (-0.357887) | 0.011857 / 0.007607 (0.004250) | 0.545507 / 0.226044 (0.319463) | 5.441479 / 2.268929 (3.172550) | 2.602144 / 55.444624 (-52.842480) | 2.235583 / 6.876477 (-4.640894) | 2.293458 / 2.142072 (0.151385) | 0.658535 / 4.805227 (-4.146692) | 0.141327 / 6.500664 (-6.359337) | 0.063726 / 0.075469 (-0.011743) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247819 / 1.841788 (-0.593968) | 15.234524 / 8.074308 (7.160216) | 14.592700 / 10.191392 (4.401308) | 0.141952 / 0.680424 (-0.538472) | 0.017747 / 0.534201 (-0.516454) | 0.396819 / 0.579283 (-0.182465) | 0.415902 / 0.434364 (-0.018462) | 0.464619 / 0.540337 (-0.075718) | 0.560866 / 1.386936 (-0.826070) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4b7f6c59deb868e21f295917548fa2df10dd0158 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008278 / 0.011353 (-0.003075) | 0.005044 / 0.011008 (-0.005964) | 0.123382 / 0.038508 (0.084874) | 0.054039 / 0.023109 (0.030929) | 0.382338 / 0.275898 (0.106440) | 0.453287 / 0.323480 (0.129807) | 0.006342 / 0.007986 (-0.001644) | 0.003930 / 0.004328 (-0.000398) | 0.094039 / 0.004250 (0.089789) | 0.076525 / 0.037052 (0.039472) | 0.394066 / 0.258489 (0.135577) | 0.445600 / 0.293841 (0.151759) | 0.039348 / 0.128546 (-0.089199) | 0.010485 / 0.075646 (-0.065161) | 0.433730 / 0.419271 (0.014459) | 0.082671 / 0.043533 (0.039138) | 0.375250 / 0.255139 (0.120111) | 0.416269 / 0.283200 (0.133070) | 0.038397 / 0.141683 (-0.103286) | 1.864834 / 1.452155 (0.412680) | 2.010453 / 1.492716 (0.517737) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.240008 / 0.018006 (0.222002) | 0.470975 / 0.000490 (0.470485) | 0.004001 / 0.000200 (0.003801) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031107 / 0.037411 (-0.006304) | 0.129371 / 0.014526 (0.114846) | 0.141559 / 0.176557 (-0.034997) | 0.205571 / 0.737135 (-0.531564) | 0.144611 / 0.296338 (-0.151728) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506972 / 0.215209 (0.291763) | 5.055951 / 2.077655 (2.978296) | 2.397438 / 1.504120 (0.893318) | 2.170435 / 1.541195 (0.629240) | 2.240296 / 1.468490 (0.771806) | 0.641559 / 4.584777 (-3.943218) | 4.644772 / 3.745712 (0.899060) | 4.064200 / 5.269862 (-1.205662) | 1.946991 / 4.565676 (-2.618685) | 0.086413 / 0.424275 (-0.337862) | 0.015082 / 0.007607 (0.007475) | 0.670413 / 0.226044 (0.444369) | 6.331346 / 2.268929 (4.062418) | 2.965813 / 55.444624 (-52.478812) | 2.547952 / 6.876477 (-4.328524) | 2.718390 / 2.142072 (0.576318) | 0.796657 / 4.805227 (-4.008571) | 0.173229 / 6.500664 (-6.327435) | 0.079606 / 0.075469 (0.004137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.568761 / 1.841788 (-0.273026) | 18.485432 / 8.074308 (10.411124) | 15.758513 / 10.191392 (5.567121) | 0.170427 / 0.680424 (-0.509997) | 0.021421 / 0.534201 (-0.512780) | 0.518623 / 0.579283 (-0.060660) | 0.525887 / 0.434364 (0.091523) | 0.640331 / 0.540337 
(0.099993) | 0.766748 / 1.386936 (-0.620188) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007680 / 0.011353 (-0.003673) | 0.005289 / 0.011008 (-0.005719) | 0.093773 / 0.038508 (0.055265) | 0.054997 / 0.023109 (0.031888) | 0.456277 / 0.275898 (0.180379) | 0.500642 / 0.323480 (0.177162) | 0.005935 / 0.007986 (-0.002050) | 0.004375 / 0.004328 (0.000047) | 0.094131 / 0.004250 (0.089881) | 0.063399 / 0.037052 (0.026347) | 0.470546 / 0.258489 (0.212057) | 0.504989 / 0.293841 (0.211148) | 0.038541 / 0.128546 (-0.090006) | 0.010403 / 0.075646 (-0.065244) | 0.102469 / 0.419271 (-0.316802) | 0.063105 / 0.043533 (0.019572) | 0.466005 / 0.255139 (0.210866) | 0.458677 / 0.283200 (0.175477) | 0.028407 / 0.141683 (-0.113276) | 1.893829 / 1.452155 (0.441675) | 1.917954 / 1.492716 (0.425238) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272760 / 0.018006 (0.254754) | 0.476159 / 0.000490 (0.475669) | 0.008467 / 0.000200 (0.008267) | 0.000146 / 0.000054 (0.000091) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035755 / 0.037411 (-0.001656) | 0.145038 / 0.014526 (0.130512) | 0.148322 / 0.176557 (-0.028235) | 0.210193 / 0.737135 (-0.526943) | 0.156547 / 0.296338 (-0.139792) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.541204 / 0.215209 (0.325995) | 5.382746 / 2.077655 (3.305091) | 2.704229 / 1.504120 (1.200109) | 2.468422 / 1.541195 (0.927227) | 2.522672 / 
1.468490 (1.054182) | 0.644899 / 4.584777 (-3.939878) | 4.654401 / 3.745712 (0.908689) | 2.159223 / 5.269862 (-3.110638) | 1.280098 / 4.565676 (-3.285578) | 0.080053 / 0.424275 (-0.344222) | 0.014383 / 0.007607 (0.006776) | 0.662770 / 0.226044 (0.436725) | 6.617651 / 2.268929 (4.348722) | 3.234347 / 55.444624 (-52.210277) | 2.861417 / 6.876477 (-4.015059) | 2.888928 / 2.142072 (0.746856) | 0.792854 / 4.805227 (-4.012374) | 0.172553 / 6.500664 (-6.328111) | 0.078402 / 0.075469 (0.002933) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.565351 / 1.841788 (-0.276436) | 18.681916 / 8.074308 (10.607608) | 17.264473 / 10.191392 (7.073081) | 0.168461 / 0.680424 (-0.511963) | 0.021353 / 0.534201 (-0.512848) | 0.517843 / 0.579283 (-0.061440) | 0.519907 / 0.434364 (0.085543) | 0.623687 / 0.540337 (0.083350) | 0.761796 / 1.386936 (-0.625140) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bbf58747f734a46e75937bdbcbc05b06ade0224a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006750 / 0.011353 (-0.004603) | 0.004268 / 0.011008 (-0.006741) | 0.098644 / 0.038508 (0.060136) | 0.044643 / 0.023109 (0.021534) | 0.309420 / 0.275898 (0.033522) | 0.379294 / 0.323480 (0.055815) | 0.005729 / 0.007986 (-0.002256) | 0.003615 / 0.004328 (-0.000714) | 0.076086 / 0.004250 (0.071835) | 0.068994 / 0.037052 (0.031942) | 0.325653 / 0.258489 (0.067164) | 0.375187 / 0.293841 (0.081347) | 0.032546 / 0.128546 (-0.096000) | 0.009089 / 0.075646 (-0.066557) | 0.329905 / 0.419271 (-0.089366) | 0.066832 / 0.043533 (0.023300) | 0.299247 / 0.255139 (0.044108) | 0.323460 / 0.283200 (0.040260) | 0.034226 / 0.141683 (-0.107457) | 1.475659 / 1.452155 (0.023505) | 1.556234 / 1.492716 (0.063518) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292305 / 0.018006 (0.274299) | 0.542584 / 0.000490 (0.542094) | 0.003047 / 0.000200 
(0.002847) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030096 / 0.037411 (-0.007315) | 0.112341 / 0.014526 (0.097815) | 0.124965 / 0.176557 (-0.051591) | 0.183159 / 0.737135 (-0.553976) | 0.131885 / 0.296338 (-0.164453) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426437 / 0.215209 (0.211228) | 4.260984 / 2.077655 (2.183330) | 2.078358 / 1.504120 (0.574238) | 1.877644 / 1.541195 (0.336449) | 2.044036 / 1.468490 (0.575546) | 0.532980 / 4.584777 (-4.051797) | 3.749573 / 3.745712 (0.003860) | 1.944155 / 5.269862 (-3.325706) | 1.090307 / 4.565676 (-3.475370) | 0.065445 / 0.424275 (-0.358830) | 0.011237 / 0.007607 (0.003630) | 0.521448 / 0.226044 (0.295403) | 5.213118 / 2.268929 (2.944189) | 2.507829 / 55.444624 (-52.936795) | 2.177179 / 6.876477 (-4.699297) | 2.351161 / 2.142072 (0.209088) | 0.656775 / 4.805227 (-4.148452) | 0.141207 / 6.500664 (-6.359457) | 0.063286 / 0.075469 (-0.012183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190281 / 1.841788 (-0.651506) | 15.327424 / 8.074308 (7.253116) | 13.300695 / 10.191392 (3.109303) | 0.190484 / 0.680424 (-0.489939) | 0.017984 / 0.534201 (-0.516217) | 0.405714 / 0.579283 (-0.173569) | 0.435915 / 0.434364 (0.001551) | 0.494083 / 0.540337 (-0.046254) | 0.600616 / 1.386936 (-0.786320) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006740 / 0.011353 (-0.004613) | 0.004289 / 0.011008 (-0.006719) | 0.076532 / 0.038508 (0.038024) | 0.043305 / 0.023109 (0.020196) | 0.356111 / 0.275898 (0.080213) | 0.434121 / 0.323480 (0.110641) | 0.005599 / 0.007986 (-0.002387) | 0.003461 / 0.004328 (-0.000868) | 0.077097 / 0.004250 (0.072847) | 0.055369 / 0.037052 (0.018317) | 0.367093 / 0.258489 (0.108604) | 0.418801 / 0.293841 (0.124960) | 0.032057 / 0.128546 (-0.096489) | 0.009048 / 0.075646 (-0.066599) | 0.082897 / 0.419271 (-0.336374) | 0.050287 / 0.043533 (0.006754) | 0.352060 / 0.255139 (0.096921) | 0.376278 / 0.283200 (0.093078) | 0.023924 / 0.141683 (-0.117759) | 1.522780 / 1.452155 (0.070626) | 1.578938 / 1.492716 (0.086222) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287317 / 0.018006 (0.269311) | 0.508490 / 0.000490 (0.508000) | 0.000431 / 0.000200 (0.000231) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031139 / 0.037411 (-0.006272) | 0.113927 / 0.014526 (0.099401) | 0.128147 / 0.176557 (-0.048409) | 0.179712 / 0.737135 (-0.557424) | 0.134364 / 0.296338 (-0.161975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452834 / 0.215209 (0.237625) | 4.507944 / 2.077655 (2.430289) | 2.287758 / 1.504120 (0.783638) | 2.091145 / 1.541195 (0.549951) | 2.196228 / 1.468490 (0.727738) | 0.539306 / 4.584777 (-4.045471) | 3.838941 / 3.745712 (0.093228) | 1.908801 / 5.269862 (-3.361060) | 1.139235 / 4.565676 (-3.426442) | 0.066677 / 0.424275 (-0.357599) | 0.011422 / 0.007607 (0.003815) | 0.562966 / 0.226044 (0.336921) | 5.633712 / 2.268929 (3.364784) | 2.788622 / 55.444624 (-52.656002) | 2.438465 / 6.876477 (-4.438012) | 2.523479 / 2.142072 (0.381407) | 0.668730 / 4.805227 (-4.136498) | 0.143977 / 6.500664 (-6.356687) | 0.064661 / 0.075469 (-0.010808) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291708 / 1.841788 (-0.550080) | 15.573316 / 8.074308 (7.499008) | 14.435099 / 10.191392 (4.243707) | 0.147745 / 0.680424 (-0.532679) | 0.017602 / 0.534201 (-0.516599) | 0.401560 / 0.579283 (-0.177723) | 0.429861 / 0.434364 (-0.004502) | 0.469800 / 0.540337 (-0.070538) | 0.567515 / 1.386936 (-0.819421) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#79c340f5dcfd06340f180f6c6ea2d5ef81f49d98 \"CML watermark\")\n" ]
"2023-06-20T14:28:35Z"
"2023-06-21T13:39:50Z"
"2023-06-21T13:32:22Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5969.diff", "html_url": "https://github.com/huggingface/datasets/pull/5969", "merged_at": "2023-06-21T13:32:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/5969.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5969" }
"Requested" in https://discuss.huggingface.co/t/utf-16-for-datasets/43828/3. `pd.read_json` also has these parameters, so it makes sense to be consistent.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5969/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5969/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3604/comments
https://api.github.com/repos/huggingface/datasets/issues/3604/events
https://github.com/huggingface/datasets/issues/3604
1,108,477,316
I_kwDODunzps5CEgWE
3,604
Dataset Viewer not showing Previews for Private Datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4", "events_url": "https://api.github.com/users/abidlabs/events{/privacy}", "followers_url": "https://api.github.com/users/abidlabs/followers", "following_url": "https://api.github.com/users/abidlabs/following{/other_user}", "gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abidlabs", "id": 1778297, "login": "abidlabs", "node_id": "MDQ6VXNlcjE3NzgyOTc=", "organizations_url": "https://api.github.com/users/abidlabs/orgs", "received_events_url": "https://api.github.com/users/abidlabs/received_events", "repos_url": "https://api.github.com/users/abidlabs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions", "type": "User", "url": "https://api.github.com/users/abidlabs" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
[ "Sure, it's on the roadmap.", "Closing in favor of https://github.com/huggingface/datasets-server/issues/39." ]
"2022-01-19T19:29:26Z"
"2022-09-26T08:04:43Z"
"2022-09-26T08:04:43Z"
MEMBER
null
null
null
## Dataset viewer issue for 'abidlabs/test-audio-13' It seems that the dataset viewer does not show previews for `private` datasets, even for the user whose private dataset it is. See [1] for example. If I change the visibility to public, then it does show, but it would be useful to have the viewer even for private datasets. ![image](https://user-images.githubusercontent.com/1778297/150200515-93ff1545-11fd-4793-be64-6bed3cd895e2.png) **Link:** [1] https://huggingface.co/datasets/abidlabs/test-audio-13 **Am I the one who added this dataset?** Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3604/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3604/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6117/comments
https://api.github.com/repos/huggingface/datasets/issues/6117/events
https://github.com/huggingface/datasets/pull/6117
1,835,213,848
PR_kwDODunzps5XHktw
6,117
Set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6117). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012516 / 0.011353 (0.001163) | 0.004725 / 0.011008 (-0.006283) | 0.112245 / 0.038508 (0.073736) | 0.079146 / 0.023109 (0.056037) | 0.386415 / 0.275898 (0.110517) | 0.420441 / 0.323480 (0.096961) | 0.005682 / 0.007986 (-0.002304) | 0.004169 / 0.004328 (-0.000160) | 0.077847 / 0.004250 (0.073597) | 0.055763 / 0.037052 (0.018711) | 0.385529 / 0.258489 (0.127040) | 0.422711 / 0.293841 (0.128870) | 0.047212 / 0.128546 (-0.081334) | 0.013711 / 0.075646 (-0.061935) | 0.342856 / 0.419271 (-0.076416) | 0.066788 / 0.043533 (0.023255) | 0.380728 / 0.255139 (0.125589) | 0.416241 / 0.283200 (0.133041) | 0.034676 / 0.141683 (-0.107007) | 1.679661 / 1.452155 (0.227506) | 1.838014 / 1.492716 (0.345297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219556 / 0.018006 (0.201550) | 0.524728 / 0.000490 (0.524238) | 0.005045 / 0.000200 (0.004845) | 0.000124 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025475 / 0.037411 (-0.011936) | 0.085937 / 0.014526 (0.071412) | 0.099245 / 0.176557 (-0.077311) | 0.158995 / 0.737135 (-0.578141) | 0.101504 / 0.296338 (-0.194835) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.582200 / 0.215209 (0.366991) | 5.794340 / 2.077655 (3.716685) | 2.473635 / 1.504120 (0.969515) | 2.168135 / 1.541195 (0.626941) | 2.215886 / 1.468490 (0.747396) | 0.855599 / 4.584777 (-3.729178) | 5.003067 / 3.745712 (1.257354) | 4.503566 / 5.269862 (-0.766295) | 2.912248 / 4.565676 (-1.653428) | 0.103267 / 0.424275 (-0.321008) | 0.012114 / 0.007607 (0.004507) | 0.712240 / 0.226044 (0.486196) | 7.131946 / 2.268929 (4.863017) | 3.280052 / 55.444624 (-52.164573) | 2.583472 / 6.876477 (-4.293004) | 2.820758 / 2.142072 (0.678686) | 1.132097 / 4.805227 (-3.673131) | 0.232191 / 6.500664 (-6.268473) | 0.082966 / 0.075469 (0.007497) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.581125 / 1.841788 (-0.260662) | 22.723878 / 8.074308 (14.649570) | 19.969347 / 10.191392 (9.777955) | 0.234365 / 0.680424 (-0.446059) | 0.030245 / 0.534201 (-0.503956) | 0.470843 / 0.579283 (-0.108440) | 0.558069 / 0.434364 (0.123705) | 0.534878 / 0.540337 (-0.005460) | 0.801025 / 1.386936 (-0.585911) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008524 / 0.011353 (-0.002829) | 0.005083 / 0.011008 (-0.005925) | 0.078054 / 0.038508 (0.039546) | 0.082025 / 0.023109 (0.058915) | 0.458027 / 0.275898 (0.182129) | 0.498232 / 0.323480 (0.174752) | 0.005938 / 0.007986 (-0.002048) | 0.003776 / 0.004328 (-0.000553) | 0.080413 / 0.004250 (0.076163) | 0.060485 / 0.037052 (0.023433) | 0.462816 / 0.258489 (0.204327) | 0.513970 / 0.293841 (0.220129) | 0.047574 / 0.128546 (-0.080973) | 0.013424 / 0.075646 (-0.062222) | 0.087707 / 0.419271 (-0.331565) | 0.065007 / 0.043533 (0.021474) | 0.465844 / 0.255139 (0.210705) | 0.498474 / 0.283200 (0.215274) | 0.033518 / 0.141683 (-0.108164) | 1.737507 / 1.452155 (0.285352) | 1.848291 / 1.492716 (0.355574) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.316710 / 0.018006 (0.298703) | 0.504415 / 0.000490 (0.503925) | 0.042128 / 0.000200 
(0.041928) | 0.000171 / 0.000054 (0.000117) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032097 / 0.037411 (-0.005314) | 0.099371 / 0.014526 (0.084845) | 0.109311 / 0.176557 (-0.067246) | 0.177373 / 0.737135 (-0.559762) | 0.110753 / 0.296338 (-0.185585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.688060 / 0.215209 (0.472851) | 6.255219 / 2.077655 (4.177564) | 2.696845 / 1.504120 (1.192725) | 2.395424 / 1.541195 (0.854230) | 2.414870 / 1.468490 (0.946380) | 0.865704 / 4.584777 (-3.719073) | 5.086828 / 3.745712 (1.341116) | 4.648107 / 5.269862 (-0.621754) | 3.091119 / 4.565676 (-1.474558) | 0.101787 / 0.424275 (-0.322489) | 0.008829 / 0.007607 (0.001222) | 0.772398 / 0.226044 (0.546354) | 7.700366 / 2.268929 (5.431438) | 3.608632 / 55.444624 (-51.835992) | 2.923309 / 6.876477 (-3.953168) | 2.952141 / 2.142072 (0.810069) | 1.093006 / 4.805227 (-3.712221) | 0.224363 / 6.500664 (-6.276301) | 0.074927 / 0.075469 (-0.000542) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.638414 / 1.841788 (-0.203374) | 23.486781 / 8.074308 (15.412473) | 21.129104 / 10.191392 (10.937712) | 0.259955 / 0.680424 (-0.420469) | 0.027305 / 0.534201 (-0.506895) | 0.464448 / 0.579283 (-0.114835) | 0.553737 / 0.434364 (0.119373) | 0.571318 / 0.540337 (0.030981) | 0.772917 / 1.386936 (-0.614019) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3ec5ee9e78b464364796651d995823c7ecb0f951 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009093 / 0.011353 (-0.002260) | 0.005283 / 0.011008 (-0.005725) | 0.112299 / 0.038508 (0.073791) | 0.081341 / 0.023109 (0.058232) | 0.363799 / 0.275898 (0.087901) | 0.409261 / 0.323480 (0.085781) | 0.006400 / 0.007986 (-0.001586) | 0.003965 / 0.004328 (-0.000363) | 0.074389 / 0.004250 (0.070139) | 0.060654 / 0.037052 (0.023602) | 0.391046 / 0.258489 (0.132557) | 0.430514 / 0.293841 (0.136673) | 0.054900 / 0.128546 (-0.073646) | 0.017972 / 0.075646 (-0.057675) | 0.410875 / 0.419271 (-0.008396) | 0.067405 / 0.043533 (0.023873) | 0.371468 / 0.255139 (0.116329) | 0.435061 / 0.283200 (0.151861) | 0.038063 / 0.141683 (-0.103620) | 1.733509 / 1.452155 (0.281354) | 1.833899 / 1.492716 (0.341182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243230 / 0.018006 (0.225224) | 0.605636 / 0.000490 (0.605146) | 0.004890 / 0.000200 (0.004690) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027624 / 0.037411 (-0.009787) | 0.084799 / 0.014526 (0.070273) | 0.104405 / 0.176557 (-0.072152) | 0.165383 / 0.737135 (-0.571752) | 0.102083 / 0.296338 (-0.194255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.578334 / 0.215209 (0.363125) | 5.369520 / 2.077655 (3.291866) | 2.294174 / 1.504120 (0.790055) | 2.054195 / 1.541195 (0.513000) | 2.007304 / 1.468490 (0.538814) | 0.839283 / 4.584777 (-3.745494) | 5.262288 / 3.745712 (1.516576) | 4.363346 / 5.269862 (-0.906516) | 2.854903 / 4.565676 (-1.710773) | 0.096975 / 0.424275 (-0.327300) | 0.008237 / 0.007607 (0.000630) | 0.646746 / 0.226044 (0.420702) | 6.250621 / 2.268929 (3.981693) | 2.900377 / 55.444624 (-52.544247) | 2.283238 / 6.876477 (-4.593239) | 2.443785 / 2.142072 (0.301713) | 0.991719 / 4.805227 (-3.813508) | 0.189755 / 6.500664 (-6.310909) | 0.067906 / 0.075469 (-0.007563) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.515563 / 1.841788 (-0.326225) | 21.956499 / 8.074308 (13.882191) | 19.161750 / 10.191392 (8.970358) | 0.238199 / 0.680424 (-0.442225) | 0.026771 / 0.534201 (-0.507430) | 0.450195 / 0.579283 (-0.129088) | 0.585168 / 
0.434364 (0.150804) | 0.522945 / 0.540337 (-0.017393) | 0.776244 / 1.386936 (-0.610693) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007997 / 0.011353 (-0.003356) | 0.005021 / 0.011008 (-0.005988) | 0.087308 / 0.038508 (0.048800) | 0.077760 / 0.023109 (0.054650) | 0.425313 / 0.275898 (0.149415) | 0.451470 / 0.323480 (0.127990) | 0.006848 / 0.007986 (-0.001137) | 0.004812 / 0.004328 (0.000484) | 0.071198 / 0.004250 (0.066947) | 0.058325 / 0.037052 (0.021273) | 0.427411 / 0.258489 (0.168922) | 0.466069 / 0.293841 (0.172228) | 0.048686 / 0.128546 (-0.079861) | 0.011841 / 0.075646 (-0.063806) | 0.086225 / 0.419271 (-0.333047) | 0.060500 / 0.043533 (0.016967) | 0.435580 / 0.255139 (0.180441) | 0.456919 / 0.283200 (0.173719) | 0.035094 / 0.141683 (-0.106588) | 1.582805 / 1.452155 (0.130650) | 1.717838 / 1.492716 (0.225122) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283967 / 0.018006 (0.265960) | 0.517496 / 0.000490 (0.517006) | 0.014747 / 0.000200 (0.014547) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027870 / 0.037411 (-0.009541) | 0.083835 / 0.014526 (0.069309) | 0.099157 / 0.176557 (-0.077400) | 0.173210 / 0.737135 (-0.563925) | 0.094212 / 0.296338 (-0.202127) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.535720 / 0.215209 (0.320511) | 5.273730 / 2.077655 (3.196075) | 2.422560 / 1.504120 (0.918440) | 
2.131416 / 1.541195 (0.590222) | 2.192000 / 1.468490 (0.723510) | 0.708469 / 4.584777 (-3.876308) | 4.758092 / 3.745712 (1.012380) | 3.940729 / 5.269862 (-1.329133) | 2.553093 / 4.565676 (-2.012583) | 0.084895 / 0.424275 (-0.339380) | 0.008730 / 0.007607 (0.001123) | 0.646975 / 0.226044 (0.420930) | 6.294811 / 2.268929 (4.025883) | 3.293964 / 55.444624 (-52.150660) | 2.568985 / 6.876477 (-4.307492) | 2.743786 / 2.142072 (0.601713) | 0.899733 / 4.805227 (-3.905494) | 0.193484 / 6.500664 (-6.307181) | 0.070012 / 0.075469 (-0.005457) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.502255 / 1.841788 (-0.339532) | 20.690234 / 8.074308 (12.615926) | 18.375791 / 10.191392 (8.184399) | 0.200135 / 0.680424 (-0.480289) | 0.029434 / 0.534201 (-0.504767) | 0.477267 / 0.579283 (-0.102016) | 0.566869 / 0.434364 (0.132505) | 0.543756 / 0.540337 (0.003418) | 0.700476 / 1.386936 (-0.686460) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef17d9fd6c648bb41d43ba301c3de4d7b6f833d8 \"CML watermark\")\n" ]
"2023-08-03T14:46:04Z"
"2023-08-03T14:56:59Z"
"2023-08-03T14:46:18Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6117.diff", "html_url": "https://github.com/huggingface/datasets/pull/6117", "merged_at": "2023-08-03T14:46:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/6117.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6117" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6117/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6117/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5451
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5451/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5451/comments
https://api.github.com/repos/huggingface/datasets/issues/5451/events
https://github.com/huggingface/datasets/issues/5451
1,552,336,300
I_kwDODunzps5chsWs
5,451
ImageFolder BadZipFile: Bad offset for central directory
{ "avatar_url": "https://avatars.githubusercontent.com/u/1524208?v=4", "events_url": "https://api.github.com/users/hmartiro/events{/privacy}", "followers_url": "https://api.github.com/users/hmartiro/followers", "following_url": "https://api.github.com/users/hmartiro/following{/other_user}", "gists_url": "https://api.github.com/users/hmartiro/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hmartiro", "id": 1524208, "login": "hmartiro", "node_id": "MDQ6VXNlcjE1MjQyMDg=", "organizations_url": "https://api.github.com/users/hmartiro/orgs", "received_events_url": "https://api.github.com/users/hmartiro/received_events", "repos_url": "https://api.github.com/users/hmartiro/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hmartiro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hmartiro/subscriptions", "type": "User", "url": "https://api.github.com/users/hmartiro" }
[]
closed
false
null
[]
null
[ "Hi ! Could you share the full stack trace ? Which dataset did you try to load ?\r\n\r\nit may be related to https://github.com/huggingface/datasets/pull/5640", "The `BadZipFile` error means the ZIP file is corrupted, so I'm closing this issue as it's not directly related to `datasets`.", "For others that find this issue following a `BadZipFile` error, I had the same problem because I had a file in a folder dataset `my-image.target` and the datasets library was incorrectly determining that the (PNG) file was a zip archive. When it tried to extract the file, this error occurred. \r\n\r\nUpdating to `datasets==2.12.0` fixed the problem for me." ]
"2023-01-22T23:50:12Z"
"2023-05-23T10:35:48Z"
"2023-02-10T16:31:36Z"
NONE
null
null
null
### Describe the bug I'm getting the following exception: ``` lib/python3.10/zipfile.py:1353 in _RealGetContents │ │ │ │ 1350 │ │ # self.start_dir: Position of start of central directory │ │ 1351 │ │ self.start_dir = offset_cd + concat │ │ 1352 │ │ if self.start_dir < 0: │ │ ❱ 1353 │ │ │ raise BadZipFile("Bad offset for central directory") │ │ 1354 │ │ fp.seek(self.start_dir, 0) │ │ 1355 │ │ data = fp.read(size_cd) │ │ 1356 │ │ fp = io.BytesIO(data) │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ BadZipFile: Bad offset for central directory Extracting data files: 35%|█████████████████▊ | 38572/110812 [00:10<00:20, 3576.26it/s] ``` ### Steps to reproduce the bug ``` load_dataset( args.dataset_name, args.dataset_config_name, cache_dir=args.cache_dir, ), ``` ### Expected behavior loads the dataset ### Environment info datasets==2.8.0 Python 3.10.8 Linux 129-146-3-202 5.15.0-52-generic #58~20.04.1-Ubuntu SMP Thu Oct 13 13:09:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5451/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5451/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4810
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4810/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4810/comments
https://api.github.com/repos/huggingface/datasets/issues/4810/events
https://github.com/huggingface/datasets/pull/4810
1,333,038,702
PR_kwDODunzps484C9l
4,810
Add description to hellaswag dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Are the `metadata JSON file` not on their way to deprecation? 😆😇\r\n\r\nIMO, more generally than this particular PR, the contribution process should be simplified now that many validation checks happen on the hub side.\r\n\r\nKeeping this open in the meantime to get more potential feedback!" ]
"2022-08-09T10:21:14Z"
"2022-09-23T11:35:38Z"
"2022-09-23T11:33:44Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4810.diff", "html_url": "https://github.com/huggingface/datasets/pull/4810", "merged_at": "2022-09-23T11:33:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/4810.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4810" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4810/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4810/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1250
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1250/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1250/comments
https://api.github.com/repos/huggingface/datasets/issues/1250/events
https://github.com/huggingface/datasets/pull/1250
758,491,704
MDExOlB1bGxSZXF1ZXN0NTMzNjU2NTI4
1,250
added Nergrit dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cahya-wirawan", "id": 7669893, "login": "cahya-wirawan", "node_id": "MDQ6VXNlcjc2Njk4OTM=", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "type": "User", "url": "https://api.github.com/users/cahya-wirawan" }
[]
closed
false
null
[]
null
[]
"2020-12-07T13:06:12Z"
"2020-12-08T14:33:29Z"
"2020-12-08T14:33:29Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1250.diff", "html_url": "https://github.com/huggingface/datasets/pull/1250", "merged_at": "2020-12-08T14:33:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/1250.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1250" }
Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis. This PR is only for the Named Entity Recognition.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1250/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1250/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3653/comments
https://api.github.com/repos/huggingface/datasets/issues/3653/events
https://github.com/huggingface/datasets/issues/3653
1,119,186,952
I_kwDODunzps5CtXAI
3,653
`to_json` in multiprocessing fashion sometimes deadlock
{ "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomasw21", "id": 24695242, "login": "thomasw21", "node_id": "MDQ6VXNlcjI0Njk1MjQy", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "repos_url": "https://api.github.com/users/thomasw21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "type": "User", "url": "https://api.github.com/users/thomasw21" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[]
"2022-01-31T09:35:07Z"
"2022-01-31T09:35:07Z"
null
CONTRIBUTOR
null
null
null
## Describe the bug `to_json` in multiprocessing fashion sometimes deadlock, instead of raising exceptions. Temporary solution is to see that it deadlocks, and then reduce the number of processes or batch size in order to reduce the memory footprint. As @lhoestq pointed out, this might be related to https://bugs.python.org/issue22393#msg315684 where `multiprocessing` fails to raise the OOM exception. One suggested alternative is not use `concurrent.futures` instead. ## Steps to reproduce the bug ## Expected results Script fails when one worker hits OOM, and raise appropriate error. ## Actual results Deadlock ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.1 - Platform: Linux - Python version: 3.8 - PyArrow version: 6.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3653/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3653/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2613
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2613/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2613/comments
https://api.github.com/repos/huggingface/datasets/issues/2613/events
https://github.com/huggingface/datasets/pull/2613
940,759,852
MDExOlB1bGxSZXF1ZXN0Njg2Nzg0MzY0
2,613
Use ndarray.item instead of ndarray.tolist
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
{ "closed_at": "2021-07-21T15:36:49Z", "closed_issues": 29, "created_at": "2021-06-08T18:48:33Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-05T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/6", "id": 6836458, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "open_issues": 0, "state": "closed", "title": "1.10", "updated_at": "2021-07-21T15:36:49Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/6" }
[]
"2021-07-09T13:19:35Z"
"2021-07-12T14:12:57Z"
"2021-07-09T13:50:05Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2613.diff", "html_url": "https://github.com/huggingface/datasets/pull/2613", "merged_at": "2021-07-09T13:50:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/2613.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2613" }
This PR follows up on #2612 to use `numpy.ndarray.item` instead of `numpy.ndarray.tolist` as the latter is somewhat confusing to the developer (even though it works). Judging from the `numpy` docs, `ndarray.item` is closer to what we want: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.item.html#numpy-ndarray-item PS. Sorry for the duplicate work here. I should have read the numpy docs more carefully in #2612
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2613/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2613/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5864/comments
https://api.github.com/repos/huggingface/datasets/issues/5864/events
https://github.com/huggingface/datasets/issues/5864
1,710,450,047
I_kwDODunzps5l82V_
5,864
Slow iteration over Torch tensors
{ "avatar_url": "https://avatars.githubusercontent.com/u/51738205?v=4", "events_url": "https://api.github.com/users/crisostomi/events{/privacy}", "followers_url": "https://api.github.com/users/crisostomi/followers", "following_url": "https://api.github.com/users/crisostomi/following{/other_user}", "gists_url": "https://api.github.com/users/crisostomi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/crisostomi", "id": 51738205, "login": "crisostomi", "node_id": "MDQ6VXNlcjUxNzM4MjA1", "organizations_url": "https://api.github.com/users/crisostomi/orgs", "received_events_url": "https://api.github.com/users/crisostomi/received_events", "repos_url": "https://api.github.com/users/crisostomi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/crisostomi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/crisostomi/subscriptions", "type": "User", "url": "https://api.github.com/users/crisostomi" }
[]
open
false
null
[]
null
[ "I am highly interested performance of dataset so I ran your example as a curious user.\r\n```python\r\ntrain_dataset.cast_column(\"x\", Array3D(shape=img_shape, dtype=\"float32\"))\r\n```\r\nhave return values and \"x\" is a new column, it shoulde be\r\n```python\r\nds=train_dataset.cast_column(\"img\", Array3D(shape=(3,32,32), dtype=\"float32\"))\r\n```\r\nI rewrite your example as\r\n```python\r\ntrain_dataset = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\ntransform_func = torchvision.transforms.Compose([\r\n ToTensor(), \r\n Normalize(mean=[0.485, 0.456, 0.406], std= [0.229, 0.224, 0.225]),] \r\n)\r\n \r\ntrain_dataset = train_dataset.map(\r\n desc=f\"Preprocessing samples\",\r\n function=lambda x: {\"img\": transform_func(x[\"img\"])},\r\n)\r\nds=train_dataset.cast_column(\"img\", Array3D(shape=(3,32,32), dtype=\"float32\"))\r\nfor i in tqdm(ds):\r\n pass\r\n```\r\nthat require ~11s in my environment. While\r\n```python\r\nds = load_dataset(\r\n 'cifar100',\r\n split='train',\r\n use_auth_token=True,\r\n)\r\n\r\nfor i in tqdm(ds):\r\n pass\r\n```\r\nonly need ~6s. (So I guess it's still undesirable)" ]
"2023-05-15T16:43:58Z"
"2023-05-16T03:27:38Z"
null
NONE
null
null
null
### Describe the bug I have a problem related to this [issue](https://github.com/huggingface/datasets/issues/5841): I get a way slower iteration when using a Torch dataloader if I use vanilla Numpy tensors or if I first apply a ToTensor transform to the input. In particular, it takes 5 seconds to iterate over the vanilla input and ~30s after the transformation. ### Steps to reproduce the bug Here is the minimum code to reproduce the problem ```python import numpy as np from datasets import Dataset, DatasetDict, load_dataset, Array3D, Image, Features from torch.utils.data import DataLoader from tqdm import tqdm import torchvision from torchvision.transforms import ToTensor, Normalize ################################# # Without transform ################################# train_dataset = load_dataset( 'cifar100', split='train', use_auth_token=True, ) train_dataset.set_format(type="numpy", columns=["img", "fine_label"]) train_loader= DataLoader( train_dataset, batch_size=100, pin_memory=False, shuffle=True, num_workers=8, ) for batch in tqdm(train_loader, desc="Loading data, no transform"): pass ################################# # With transform ################################# transform_func = torchvision.transforms.Compose([ ToTensor(), Normalize(mean=[0.485, 0.456, 0.406], std= [0.229, 0.224, 0.225]),] ) train_dataset = train_dataset.map( desc=f"Preprocessing samples", function=lambda x: {"img": transform_func(x["img"])}, ) train_dataset.set_format(type="numpy", columns=["img", "fine_label"]) train_loader= DataLoader( train_dataset, batch_size=100, pin_memory=False, shuffle=True, num_workers=8, ) for batch in tqdm(train_loader, desc="Loading data after transform"): pass ``` I have also tried converting the Image column to an Array3D ```python img_shape = train_dataset[0]["img"].shape features = train_dataset.features.copy() features["x"] = Array3D(shape=img_shape, dtype="float32") train_dataset = train_dataset.map( desc=f"Preprocessing samples", function=lambda x: {"x": np.array(x["img"], dtype=np.uint8)}, features=features, ) train_dataset.cast_column("x", Array3D(shape=img_shape, dtype="float32")) train_dataset.set_format(type="numpy", columns=["x", "fine_label"]) ``` but to no avail. Any clue? ### Expected behavior The iteration should take approximately the same time with or without the transformation, as it doesn't change the shape of the input. What may be the issue here? ### Environment info ``` - `datasets` version: 2.12.0 - Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.1 ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5864/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5864/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2230/comments
https://api.github.com/repos/huggingface/datasets/issues/2230/events
https://github.com/huggingface/datasets/issues/2230
859,817,159
MDU6SXNzdWU4NTk4MTcxNTk=
2,230
Keys yielded while generating dataset are not being checked
{ "avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4", "events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}", "followers_url": "https://api.github.com/users/NikhilBartwal/followers", "following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}", "gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NikhilBartwal", "id": 42388668, "login": "NikhilBartwal", "node_id": "MDQ6VXNlcjQyMzg4NjY4", "organizations_url": "https://api.github.com/users/NikhilBartwal/orgs", "received_events_url": "https://api.github.com/users/NikhilBartwal/received_events", "repos_url": "https://api.github.com/users/NikhilBartwal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions", "type": "User", "url": "https://api.github.com/users/NikhilBartwal" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi ! Indeed there's no verification on the uniqueness nor the types of the keys.\r\nDo you already have some ideas of what you would like to implement and how ?", "Hey @lhoestq, thank you so much for the opportunity.\r\nAlthough I haven't had much experience with the HF Datasets code, after a careful look at how the `ArrowWriter` functions, I think we can implement this as follows:\r\n\r\n1. First, we would have to update the `ArrowWriter.write()` function here:\r\nhttps://github.com/huggingface/datasets/blob/fcd3c3c8e3b1d9a2f3686a496082e21f06591380/src/datasets/arrow_writer.py#L296\r\nso that it accepts an additional argument `key` which would be appended along with the example here after hashing.\r\n\r\n2. Then, we would need to create a `Hasher` class which will take the key as its input and return a hash for it (We might need to use some hash salt which can be passed to the ArrowWriter.writer() with value equal to the `split_name` for differentiating between same keys of different splits)\r\n\r\n We can use the `hashlib.md5` function for hashing which will conert each key to its byte code before hashing (depending on the data type of the key) **Thus, the `key` type will be verified here**.\r\n\r\n3. Now, we would have to edit this\r\nhttps://github.com/huggingface/datasets/blob/fcd3c3c8e3b1d9a2f3686a496082e21f06591380/src/datasets/arrow_writer.py#L257\r\n so that it iterates over each `(hash, example)` pair (sorted according to hash). We can then simply **check whether each hash is different from the previous hash** (since they will be sorted)\r\n\r\nHowever, since I'm not very familiar with how the data is being written on disk in the form of a table, I might need some guidance for Step 3. \r\nPlease let me know your thought on this. Thanks!", "Interesting !\r\nWe keep the dataset sorted in the order examples are generated by the builder (we expect the dataset builders to generate examples in deterministic order). Therefore I don't think we should shuffle the examples with the hashing. Let me know what you think.\r\nOther that that, I really like the idea of checking for keys duplicates in `write_examples_on_file` :)\r\n\r\nThis looks like a great plan ! Feel free to open a PR and ping me if you have questions or if I can help\r\n", "@lhoestq I'm glad you liked the idea!\r\nI think that since the keys will be unique and deterministic in the nature themselves, so even if we shuffle the examples according to the hash, a deterministic order would still be maintained (as the keys will always have the same hash, whenever the dataset is generated). \r\nAnd since, we are not dealing with time series data (which would require the data to be in original order), I don't think the order of examples would matter much, as long as the order is deterministic and constant for all users.\r\n\r\nI think that this is also what was originally envisioned as mentioned in the documentation here:\r\nhttps://github.com/huggingface/datasets/blob/6775661b19d2ec339784f3d84553a3996a1d86c3/src/datasets/builder.py#L973\r\n\r\nAlso, if we avoid this, we would need to keep track of all the hashed keys in some place and compare each individual key with all others. This can cause some major overhead as each dataset consists of tens of thousands of examples.\r\nLet me know your thoughts in it! I would be opening a PR soon :)", "When users load their own data, they expect the order to stay the same. 
I think that shuffling the data can make things inconvenient.\r\n\r\n> I think that this is also what was originally envisioned as mentioned in the documentation here:\r\n\r\nThis part was originally developed by tensorflow datasets, and tensorflow datasets indeed does the shuffling. However in this library this is probably not what we want in the general case. But if @albertvillanova and @thomwolf you have opinions on this please let us know.\r\n\r\n> Also, if we avoid this, we would need to keep track of all the hashed keys in some place and compare each individual key with all others. This can cause some major overhead as each dataset consists of tens of thousands of examples.\r\n\r\nMaybe we cam simply keep track of the hashes of of each batch being written ? The size of the batch when the data are save in arrow is 10 000 examples. This would only ensure that we don't have duplicates in each batch, but there might still be duplicates across batches. For 10 000 examples the hashes can just be stored as a python `set`.\r\n\r\nOtherwise if we want full deduplication, we need an extra tool that allows to temporarily save and query hashes that may need to use disk space rather than memory.", "Yes I think we want to keep the original order by default and only shuffle when the user ask for it (for instance by calling `dataset.shuffle()`). That’s how I had it in mind originally.", "Hey @lhoestq, I just had a more in-depth look at the original TFDS code about why the keys and hash were used in the first place.\r\n\r\nIn my opinion, the only use that the `hash(key)` serves is that it allows us to shuffle the examples in a deterministic order (as each example will always yield the same key and thus, the same hash on every system) so that the same dataset is generated for each user, irrespective of the order the examples are yielded by the dataset builder on different user systems.\r\n\r\nOtherwise, if we are not shuffling, then while yielding and writing the data, after getting the key and hashing it for an example, I can't quite see the use of the hash or the key. The hash will simply be generated for each example but not actually used anywhere?\r\n\r\n@lhoestq @thomwolf It would be great if you could explain a bit more about the usage of keys. Thanks!\r\n", "In `datasets` the keys are currently ignored.\r\nFor shuffling we don't use the keys. Instead we shuffle an array of indices. Since both the original order of the dataset and the indices shuffling are deterministic, then `dataset.shuffle` is deterministic as well.\r\nWe can use it to:\r\n1. detect duplicates\r\n2. verify that the generation order is indeed deterministic\r\n3. maybe more ?", "Thanks a lot @lhoestq. I think I understand what we need to do now. The keys can indeed be used for detecting duplicates in generated examples as well as ensuring the order.\r\n\r\n> Maybe we cam simply keep track of the hashes of of each batch being written ? The size of the batch when the data are save in arrow is 10 000 examples. This would only ensure that we don't have duplicates in each batch,\r\n\r\nI think that checking for duplicates in every batch independently would be sufficient as the probability of collisions using something like `MD5` is very low. I would be opening a draft PR soon. It would be great to have your guidance. Thanks!" ]
"2021-04-16T13:29:47Z"
"2021-05-10T17:31:21Z"
"2021-05-10T17:31:21Z"
CONTRIBUTOR
null
null
null
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not. Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196 Even after having a tuple as key, the dataset is generated without any warning. Also, as tested in the case of `anli` dataset (I tweeked the dataset script to use `1` as a key for every example): ``` >>> import datasets >>> nik = datasets.load_dataset('anli') Downloading and preparing dataset anli/plain_text (download: 17.76 MiB, generated: 73.55 MiB, post-processed: Unknown size, total: 91.31 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\anli\plain_text\0.1.0\43fa2c99c10bf8478f1fa0860f7b122c6b277c4c41306255b7641257cf4e3299... 0 examples [00:00, ? examples/s]1 {'uid': '0fd0abfb-659e-4453-b196-c3a64d2d8267', 'premise': 'The Parma trolleybus system (Italian: "Rete filoviaria di Parma" ) forms part of the public transport network of the city and "comune" of Parma, in the region of Emilia-Romagna, northern Italy. In operation since 1953, the system presently comprises four urban routes.', 'hypothesis': 'The trolleybus system has over 2 urban routes', 'label': 'entailment', 'reason': ''} 2021-04-16 12:38:14.483968: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll 1 examples [00:01, 1.87s/ examples]1 {'uid': '7ed72ff4-40b7-4f8a-b1b9-6c612aa62c84', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Sharron Macready was a popular character through the 1980's.", 'label': 'neutral', 'reason': ''} 1 {'uid': '5d2930a3-62ac-485d-94d7-4e36cbbcd7b5', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Bastedo didn't keep any pets because of her views on animal rights.", 'label': 'neutral', 'reason': ''} 1 {'uid': '324db753-ddc9-4a85-a825-f09e2e5aebdd', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Alexandra Bastedo was named by her mother.', 'label': 'neutral', 'reason': ''} 1 {'uid': '4874f429-da0e-406a-90c7-22240ff3ddf8', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. 
Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Bastedo cared for all the animals that inhabit the earth.', 'label': 'neutral', 'reason': ''} ``` Here also, the dataset was generated successfuly even hough it had same keys without any warning. The reason appears to stem from here: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L988 Here, although it has access to every key, but it is not being checked and the example is written directly: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L992 I would like to take this issue if you allow me. Thank You!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2230/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2230/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5670/comments
https://api.github.com/repos/huggingface/datasets/issues/5670/events
https://github.com/huggingface/datasets/issues/5670
1,640,607,045
I_kwDODunzps5hya1F
5,670
Unable to load multi class classification datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/19690506?v=4", "events_url": "https://api.github.com/users/ysahil97/events{/privacy}", "followers_url": "https://api.github.com/users/ysahil97/followers", "following_url": "https://api.github.com/users/ysahil97/following{/other_user}", "gists_url": "https://api.github.com/users/ysahil97/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ysahil97", "id": 19690506, "login": "ysahil97", "node_id": "MDQ6VXNlcjE5NjkwNTA2", "organizations_url": "https://api.github.com/users/ysahil97/orgs", "received_events_url": "https://api.github.com/users/ysahil97/received_events", "repos_url": "https://api.github.com/users/ysahil97/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ysahil97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ysahil97/subscriptions", "type": "User", "url": "https://api.github.com/users/ysahil97" }
[]
closed
false
null
[]
null
[ "Hi ! This sounds related to https://github.com/huggingface/datasets/issues/5406\r\n\r\nUpdating `datasets` fixes the issue ;)", "Thanks @lhoestq!\r\n\r\nI'll close this issue now." ]
"2023-03-25T18:06:15Z"
"2023-03-27T22:54:56Z"
"2023-03-27T22:54:56Z"
NONE
null
null
null
### Describe the bug I've been playing around with huggingface library, mostly with `datasets` and wanted to download the multi class classification datasets to fine tune BERT on this task. ([link](https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer)). While loading the dataset, I'm getting the following error snippet. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[44], line 3 1 from datasets import load_dataset ----> 3 imdb_dataset = load_dataset("yelp_review_full") 4 imdb_dataset File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1719, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1716 ignore_verifications = ignore_verifications or save_infos 1718 # Create a dataset builder -> 1719 builder_instance = load_dataset_builder( 1720 path=path, 1721 name=name, 1722 data_dir=data_dir, 1723 data_files=data_files, 1724 cache_dir=cache_dir, 1725 features=features, 1726 download_config=download_config, 1727 download_mode=download_mode, 1728 revision=revision, 1729 use_auth_token=use_auth_token, 1730 **config_kwargs, 1731 ) 1733 # Return iterable dataset in case of streaming 1734 if streaming: File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1523, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1520 raise ValueError(error_msg) 1522 # Instantiate the dataset builder -> 1523 builder_instance: DatasetBuilder = builder_cls( 1524 cache_dir=cache_dir, 1525 config_name=config_name, 1526 data_dir=data_dir, 1527 data_files=data_files, 1528 hash=hash, 1529 features=features, 1530 use_auth_token=use_auth_token, 1531 **builder_kwargs, 1532 **config_kwargs, 1533 ) 1535 return builder_instance File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:1292, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs) 1291 def __init__(self, *args, writer_batch_size=None, **kwargs): -> 1292 super().__init__(*args, **kwargs) 1293 # Batch size used by the ArrowWriter 1294 # It defines the number of samples that are kept in memory before writing them 1295 # and also the length of the arrow chunks 1296 # None means that the ArrowWriter will use its default value 1297 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:312, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs) 309 # prepare info: DatasetInfo are a standardized dataclass across all datasets 310 # Prefill datasetinfo 311 if info is None: --> 312 info = self.get_exported_dataset_info() 313 info.update(self._info()) 314 info.builder_name = self.name File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:412, in DatasetBuilder.get_exported_dataset_info(self) 400 def get_exported_dataset_info(self) -> DatasetInfo: 401 """Empty DatasetInfo if doesn't exist 402 403 Example: (...) 
410 ``` 411 """ --> 412 return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo()) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:398, in DatasetBuilder.get_all_exported_dataset_infos(cls) 385 @classmethod 386 def get_all_exported_dataset_infos(cls) -> DatasetInfosDict: 387 """Empty dict if doesn't exist 388 389 Example: (...) 396 ``` 397 """ --> 398 return DatasetInfosDict.from_directory(cls.get_imported_module_dir()) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:370, in DatasetInfosDict.from_directory(cls, dataset_infos_dir) 368 dataset_metadata = DatasetMetadata.from_readme(Path(dataset_infos_dir) / "README.md") 369 if "dataset_info" in dataset_metadata: --> 370 return cls.from_metadata(dataset_metadata) 371 if os.path.exists(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME)): 372 # this is just to have backward compatibility with dataset_infos.json files 373 with open(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME), encoding="utf-8") as f: File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:396, in DatasetInfosDict.from_metadata(cls, dataset_metadata) 387 return cls( 388 { 389 dataset_info_yaml_dict.get("config_name", "default"): DatasetInfo._from_yaml_dict( (...) 393 } 394 ) 395 else: --> 396 dataset_info = DatasetInfo._from_yaml_dict(dataset_metadata["dataset_info"]) 397 dataset_info.config_name = dataset_metadata["dataset_info"].get("config_name", "default") 398 return cls({dataset_info.config_name: dataset_info}) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:332, in DatasetInfo._from_yaml_dict(cls, yaml_data) 330 yaml_data = copy.deepcopy(yaml_data) 331 if yaml_data.get("features") is not None: --> 332 yaml_data["features"] = Features._from_yaml_list(yaml_data["features"]) 333 if yaml_data.get("splits") is not None: 334 yaml_data["splits"] = SplitDict._from_yaml_list(yaml_data["splits"]) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1745, in Features._from_yaml_list(cls, yaml_data) 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") -> 1745 return cls.from_dict(from_yaml_inner(yaml_data)) File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in <dictcomp>(.0) 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] -> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)} 1742 else: 1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}") File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1736, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1734 return {"_type": 
snakecase_to_camelcase(obj["dtype"])} 1735 else: -> 1736 return from_yaml_inner(obj["dtype"]) 1737 else: 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1738, in Features._from_yaml_list.<locals>.from_yaml_inner(obj) 1736 return from_yaml_inner(obj["dtype"]) 1737 else: -> 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]} 1739 elif isinstance(obj, list): 1740 names = [_feature.pop("name") for _feature in obj] File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1706, in Features._from_yaml_list.<locals>.unsimplify(feature) 1704 if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict): 1705 label_ids = sorted(feature["class_label"]["names"]) -> 1706 if label_ids and label_ids != list(range(label_ids[-1] + 1)): 1707 raise ValueError( 1708 f"ClassLabel expected a value for all label ids [0:{label_ids[-1] + 1}] but some ids are missing." 1709 ) 1710 feature["class_label"]["names"] = [feature["class_label"]["names"][label_id] for label_id in label_ids] TypeError: can only concatenate str (not "int") to str ``` The same issue happens when I try to load `go-emotions` multi class classification dataset. Could somebody guide me on how to fix this issue? ### Steps to reproduce the bug Run the following code snippet in a python script/ notebook cell: ``` from datasets import load_dataset yelp_dataset = load_dataset("yelp_review_full") yelp_dataset ``` ### Expected behavior The dataset should be loaded perfectly, which showing the train, test and unsupervised splits with the basic data statistics ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31 - Python version: 3.10.9 - PyArrow version: 8.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5670/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5670/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1256/comments
https://api.github.com/repos/huggingface/datasets/issues/1256/events
https://github.com/huggingface/datasets/pull/1256
758,531,980
MDExOlB1bGxSZXF1ZXN0NTMzNjkwMTQ2
1,256
adding LiMiT dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patil-suraj", "id": 27137566, "login": "patil-suraj", "node_id": "MDQ6VXNlcjI3MTM3NTY2", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "repos_url": "https://api.github.com/users/patil-suraj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "type": "User", "url": "https://api.github.com/users/patil-suraj" }
[]
closed
false
null
[]
null
[]
"2020-12-07T14:00:41Z"
"2020-12-08T14:58:28Z"
"2020-12-08T14:42:51Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1256.diff", "html_url": "https://github.com/huggingface/datasets/pull/1256", "merged_at": "2020-12-08T14:42:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/1256.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1256" }
Adding LiMiT: The Literal Motion in Text Dataset https://github.com/ilmgut/limit_dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1256/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1256/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5212
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5212/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5212/comments
https://api.github.com/repos/huggingface/datasets/issues/5212/events
https://github.com/huggingface/datasets/pull/5212
1,439,642,483
PR_kwDODunzps5CZPI2
5,212
Fix CI require_beam maximum compatible dill version
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5212). All of your documentation changes will be reflected on that endpoint." ]
"2022-11-08T07:30:01Z"
"2022-11-15T06:32:27Z"
"2022-11-15T06:32:26Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5212.diff", "html_url": "https://github.com/huggingface/datasets/pull/5212", "merged_at": "2022-11-15T06:32:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/5212.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5212" }
A previous commit to the main branch introduced an additional requirement on the maximum `dill` version compatible with `apache-beam` in our CI `require_beam`: - d7c942228b8dcf4de64b00a3053dce59b335f618 - ec222b220b79f10c8d7b015769f0999b15959feb This PR fixes the maximum `dill` version compatible with `apache-beam`, which is <0.3.2 (and not 0.3.6): https://github.com/apache/beam/blob/v2.42.0/sdks/python/setup.py#L219
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5212/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5212/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/897/comments
https://api.github.com/repos/huggingface/datasets/issues/897/events
https://github.com/huggingface/datasets/issues/897
752,100,256
MDU6SXNzdWU3NTIxMDAyNTY=
897
Dataset viewer issues
{ "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BramVanroy", "id": 2779410, "login": "BramVanroy", "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "repos_url": "https://api.github.com/users/BramVanroy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "type": "User", "url": "https://api.github.com/users/BramVanroy" }
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
[]
null
[ "Thanks for reporting !\r\ncc @srush for the empty feature list issue and the encoding issue\r\ncc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ?", "Ok, I redirected on our side to a new url. ⚠️ @srush: if you update the Streamlit config too to `/datasets/viewer`, let me know because I'll need to change our nginx config at the same time", "9", "‏⠀‏‏‏⠀‏‏‏⠀ ‏⠀ ", "‏⠀‏‏‏⠀‏‏‏⠀ ‏⠀ " ]
"2020-11-27T09:14:34Z"
"2021-10-31T09:12:01Z"
"2021-10-31T09:12:01Z"
CONTRIBUTOR
null
null
null
I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though: - the URL is still under `nlp`, perhaps an alias for `datasets` can be made - when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user ```bash IndexError: list index out of range Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 316, in <module> st.table(style) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta rv = marshall_element(msg.delta.new_element) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element return method(dg, element, *args, **kwargs) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table data_frame_proto.marshall_data_frame(data, element.table) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame _marshall_styles(proto_df.style, df, styler) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles translated_style = styler._translate() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate * (len(clabels[0]) - len(hidden_columns)) ``` - there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co/nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then some syntax highlighter is used, and the special characters are encoded correctly.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/897/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/897/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5551
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5551/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5551/comments
https://api.github.com/repos/huggingface/datasets/issues/5551/events
https://github.com/huggingface/datasets/pull/5551
1,592,140,836
PR_kwDODunzps5KXCof
5,551
Suggest scikit-learn instead of sklearn
{ "avatar_url": "https://avatars.githubusercontent.com/u/74963545?v=4", "events_url": "https://api.github.com/users/osbm/events{/privacy}", "followers_url": "https://api.github.com/users/osbm/followers", "following_url": "https://api.github.com/users/osbm/following{/other_user}", "gists_url": "https://api.github.com/users/osbm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osbm", "id": 74963545, "login": "osbm", "node_id": "MDQ6VXNlcjc0OTYzNTQ1", "organizations_url": "https://api.github.com/users/osbm/orgs", "received_events_url": "https://api.github.com/users/osbm/received_events", "repos_url": "https://api.github.com/users/osbm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osbm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osbm/subscriptions", "type": "User", "url": "https://api.github.com/users/osbm" }
[]
closed
false
null
[]
null
[ "good catch!", "_The documentation is not available anymore as the PR was closed or merged._", "The test fail is unrelated to this PR and fixed on `main` - merging :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008942 / 0.011353 (-0.002411) | 0.004617 / 0.011008 (-0.006391) | 0.101310 / 0.038508 (0.062802) | 0.030997 / 0.023109 (0.007888) | 0.306292 / 0.275898 (0.030394) | 0.370533 / 0.323480 (0.047053) | 0.007318 / 0.007986 (-0.000667) | 0.003473 / 0.004328 (-0.000856) | 0.078557 / 0.004250 (0.074307) | 0.036312 / 0.037052 (-0.000740) | 0.308993 / 0.258489 (0.050504) | 0.344411 / 0.293841 (0.050570) | 0.034384 / 0.128546 (-0.094162) | 0.011631 / 0.075646 (-0.064016) | 0.323948 / 0.419271 (-0.095324) | 0.041176 / 0.043533 (-0.002357) | 0.302512 / 0.255139 (0.047373) | 0.322439 / 0.283200 (0.039239) | 0.088955 / 0.141683 (-0.052728) | 1.534918 / 1.452155 (0.082763) | 1.555803 / 1.492716 (0.063087) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195639 / 0.018006 (0.177633) | 0.423068 / 0.000490 (0.422579) | 0.004101 / 0.000200 (0.003901) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023691 / 0.037411 (-0.013721) | 0.100536 / 0.014526 (0.086011) | 0.108399 / 0.176557 (-0.068157) | 0.143515 / 0.737135 (-0.593620) | 0.111886 / 0.296338 (-0.184452) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| 
new / old (diff) | 0.417519 / 0.215209 (0.202310) | 4.180463 / 2.077655 (2.102808) | 1.862511 / 1.504120 (0.358391) | 1.658724 / 1.541195 (0.117529) | 1.735847 / 1.468490 (0.267357) | 0.688257 / 4.584777 (-3.896520) | 3.447976 / 3.745712 (-0.297737) | 1.877939 / 5.269862 (-3.391922) | 1.157385 / 4.565676 (-3.408292) | 0.081418 / 0.424275 (-0.342857) | 0.012395 / 0.007607 (0.004788) | 0.518935 / 0.226044 (0.292891) | 5.220355 / 2.268929 (2.951427) | 2.308355 / 55.444624 (-53.136269) | 1.960026 / 6.876477 (-4.916450) | 2.013179 / 2.142072 (-0.128893) | 0.802850 / 4.805227 (-4.002377) | 0.146941 / 6.500664 (-6.353723) | 0.064080 / 0.075469 (-0.011389) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284443 / 1.841788 (-0.557344) | 13.903755 / 8.074308 (5.829447) | 14.467101 / 10.191392 (4.275709) | 0.156813 / 0.680424 (-0.523611) | 0.028583 / 0.534201 (-0.505618) | 0.406349 / 0.579283 (-0.172934) | 0.413178 / 0.434364 (-0.021186) | 0.491283 / 0.540337 (-0.049055) | 0.571171 / 1.386936 (-0.815765) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006868 / 0.011353 (-0.004484) | 0.004593 / 0.011008 (-0.006416) | 0.077574 / 0.038508 (0.039066) | 0.027703 / 0.023109 (0.004593) | 0.342096 / 0.275898 (0.066198) | 0.378500 / 0.323480 (0.055020) | 0.005785 / 0.007986 (-0.002201) | 0.003342 / 0.004328 (-0.000986) | 0.076105 / 0.004250 (0.071855) | 0.040369 / 0.037052 (0.003317) | 0.343611 / 0.258489 (0.085122) | 0.391859 / 0.293841 (0.098018) | 0.032675 / 0.128546 (-0.095871) | 0.011623 / 0.075646 (-0.064023) | 0.086623 / 0.419271 (-0.332648) | 0.051955 / 0.043533 (0.008423) | 0.343425 / 0.255139 (0.088286) | 0.368887 / 0.283200 (0.085688) | 0.097117 / 0.141683 (-0.044566) | 1.499546 / 1.452155 (0.047391) | 1.593100 / 1.492716 (0.100383) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193568 / 0.018006 (0.175562) | 0.409211 / 0.000490 (0.408722) | 0.003797 / 
0.000200 (0.003597) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024982 / 0.037411 (-0.012430) | 0.101367 / 0.014526 (0.086841) | 0.108546 / 0.176557 (-0.068010) | 0.144402 / 0.737135 (-0.592733) | 0.112233 / 0.296338 (-0.184105) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432820 / 0.215209 (0.217611) | 4.341045 / 2.077655 (2.263391) | 2.058326 / 1.504120 (0.554207) | 1.853913 / 1.541195 (0.312718) | 1.942436 / 1.468490 (0.473946) | 0.699130 / 4.584777 (-3.885647) | 3.392879 / 3.745712 (-0.352833) | 1.908277 / 5.269862 (-3.361585) | 1.177998 / 4.565676 (-3.387678) | 0.082700 / 0.424275 (-0.341576) | 0.012505 / 0.007607 (0.004898) | 0.526286 / 0.226044 (0.300242) | 5.279599 / 2.268929 (3.010670) | 2.505771 / 55.444624 (-52.938854) | 2.158460 / 6.876477 (-4.718016) | 2.211437 / 2.142072 (0.069365) | 0.802065 / 4.805227 (-4.003163) | 0.150766 / 6.500664 (-6.349898) | 0.067639 / 0.075469 (-0.007830) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286595 / 1.841788 (-0.555192) | 13.961894 / 8.074308 (5.887586) | 14.021865 / 10.191392 (3.830473) | 0.164590 / 0.680424 (-0.515834) | 0.016909 / 0.534201 (-0.517292) | 0.392215 / 0.579283 (-0.187069) | 0.408080 / 0.434364 (-0.026284) | 0.488247 / 0.540337 (-0.052090) | 0.575524 / 1.386936 (-0.811412) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#699b0293876015457bfce40f7245d346c34c7717 \"CML watermark\")\n" ]
"2023-02-20T16:16:57Z"
"2023-02-21T13:27:57Z"
"2023-02-21T13:21:07Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5551.diff", "html_url": "https://github.com/huggingface/datasets/pull/5551", "merged_at": "2023-02-21T13:21:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/5551.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5551" }
This is a kinda unimportant fix, but the suggested `pip install sklearn` does not work. The current error message if sklearn is not installed: ``` ImportError: To be able to use [dataset name], you need to install the following dependency: sklearn. Please install it using 'pip install sklearn' for instance. ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5551/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5551/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4274/comments
https://api.github.com/repos/huggingface/datasets/issues/4274/events
https://github.com/huggingface/datasets/pull/4274
1,224,740,303
PR_kwDODunzps43Qm2w
4,274
Add API code examples for IterableDataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-05-03T22:44:17Z"
"2022-05-04T16:29:32Z"
"2022-05-04T16:22:04Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4274.diff", "html_url": "https://github.com/huggingface/datasets/pull/4274", "merged_at": "2022-05-04T16:22:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/4274.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4274" }
This PR adds API code examples for `IterableDataset` and `IterableDatasetDicts`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4274/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4274/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4477/comments
https://api.github.com/repos/huggingface/datasets/issues/4477/events
https://github.com/huggingface/datasets/issues/4477
1,268,308,986
I_kwDODunzps5LmNv6
4,477
Dataset Viewer issue for fgrezes/WIESP2022-NER
{ "avatar_url": "https://avatars.githubusercontent.com/u/42551754?v=4", "events_url": "https://api.github.com/users/AshTayade/events{/privacy}", "followers_url": "https://api.github.com/users/AshTayade/followers", "following_url": "https://api.github.com/users/AshTayade/following{/other_user}", "gists_url": "https://api.github.com/users/AshTayade/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AshTayade", "id": 42551754, "login": "AshTayade", "node_id": "MDQ6VXNlcjQyNTUxNzU0", "organizations_url": "https://api.github.com/users/AshTayade/orgs", "received_events_url": "https://api.github.com/users/AshTayade/received_events", "repos_url": "https://api.github.com/users/AshTayade/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AshTayade/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AshTayade/subscriptions", "type": "User", "url": "https://api.github.com/users/AshTayade" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "https://huggingface.co/datasets/fgrezes/WIESP2022-NER\r\n\r\nThe error:\r\n\r\n```\r\nMessage: Couldn't find a dataset script at /src/services/worker/fgrezes/WIESP2022-NER/WIESP2022-NER.py or any data file in the same directory. Couldn't find 'fgrezes/WIESP2022-NER' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**test*', '**eval*'] in dataset repository fgrezes/WIESP2022-NER with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']\r\n```\r\n\r\nI understand the issue is not related to the dataset viewer in itself, but with the autodetection of the data files without a loading script in the datasets library. cc @lhoestq @albertvillanova @mariosasko ", "Apparently it finds `scoring-scripts/compute_seqeval.py` which matches `**eval*`, a regex that detects a test split. We should probably improve the regex because it's not supposed to catch this kind of files. It must also only check for files with supported extensions: txt, csv, png etc." ]
"2022-06-11T15:49:17Z"
"2022-07-18T13:07:33Z"
"2022-07-18T13:07:33Z"
NONE
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4477/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4477/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2765
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2765/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2765/comments
https://api.github.com/repos/huggingface/datasets/issues/2765/events
https://github.com/huggingface/datasets/issues/2765
962,861,395
MDU6SXNzdWU5NjI4NjEzOTU=
2,765
BERTScore Error
{ "avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4", "events_url": "https://api.github.com/users/gagan3012/events{/privacy}", "followers_url": "https://api.github.com/users/gagan3012/followers", "following_url": "https://api.github.com/users/gagan3012/following{/other_user}", "gists_url": "https://api.github.com/users/gagan3012/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gagan3012", "id": 49101362, "login": "gagan3012", "node_id": "MDQ6VXNlcjQ5MTAxMzYy", "organizations_url": "https://api.github.com/users/gagan3012/orgs", "received_events_url": "https://api.github.com/users/gagan3012/received_events", "repos_url": "https://api.github.com/users/gagan3012/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gagan3012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gagan3012/subscriptions", "type": "User", "url": "https://api.github.com/users/gagan3012" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nThe `use_fast_tokenizer` argument has been recently added to the bert-score lib. I've opened a PR with the fix. In the meantime, you can try to downgrade the version of bert-score with the following command to make the code work:\r\n```\r\npip uninstall bert-score\r\npip install \"bert-score<0.3.10\"\r\n```" ]
"2021-08-06T15:58:57Z"
"2021-08-09T11:16:25Z"
"2021-08-09T11:16:25Z"
NONE
null
null
null
## Describe the bug Computing BERTScore with `load_metric('bertscore')` fails with a `TypeError` about a missing `use_fast_tokenizer` argument. ## Steps to reproduce the bug ```python from datasets import load_metric predictions = ["hello there", "general kenobi"] references = ["hello there", "general kenobi"] bert = load_metric('bertscore') bert.compute(predictions=predictions, references=references, lang='en') ``` # Bug `TypeError: get_hash() missing 1 required positional argument: 'use_fast_tokenizer'` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Colab - Python version: - PyArrow version:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2765/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2765/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2901/comments
https://api.github.com/repos/huggingface/datasets/issues/2901/events
https://github.com/huggingface/datasets/issues/2901
995,232,844
MDU6SXNzdWU5OTUyMzI4NDQ=
2,901
Incompatibility with pytest
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... I'm fixing it!" ]
"2021-09-13T19:12:17Z"
"2021-09-14T08:40:47Z"
"2021-09-14T08:40:47Z"
CONTRIBUTOR
null
null
null
## Describe the bug pytest complains about xpathopen / path.open("w") ## Steps to reproduce the bug Create a test file, `test.py`: ```python import datasets as ds def load_dataset(): ds.load_dataset("counter", split="train", streaming=True) ``` And launch it with pytest: ```bash python -m pytest test.py ``` ## Expected results It should give something like: ``` collected 1 item test.py . [100%] ======= 1 passed in 3.15s ======= ``` ## Actual results ``` ============================================================================================================================= test session starts ============================================================================================================================== platform linux -- Python 3.8.11, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 rootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml plugins: anyio-3.3.1 collected 1 item tests/queries/test_rows.py . [100%]Traceback (most recent call last): File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pytest/__main__.py", line 5, in <module> raise SystemExit(pytest.console_main()) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185, in console_main code = main() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162, in main ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall return outcome.get_result() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result raise ex[1].with_traceback(ex[2]) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall res = hook_impl.function(*args) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main return wrap_session(config, _main) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 304, in wrap_session config.hook.pytest_sessionfinish( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall gen.send(outcome) File 
"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/terminal.py", line 803, in pytest_sessionfinish outcome.get_result() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result raise ex[1].with_traceback(ex[2]) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall res = hook_impl.function(*args) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 428, in pytest_sessionfinish config.cache.set("cache/nodeids", sorted(self.cached_nodeids)) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 188, in set f = path.open("w") TypeError: xpathopen() takes 1 positional argument but 2 were given ``` ## Environment info - `datasets` version: 1.12.0 - Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29 - Python version: 3.8.11 - PyArrow version: 4.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2901/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2901/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1686/comments
https://api.github.com/repos/huggingface/datasets/issues/1686/events
https://github.com/huggingface/datasets/issues/1686
778,921,684
MDU6SXNzdWU3Nzg5MjE2ODQ=
1,686
Dataset Error: DaNE contains empty samples at the end
{ "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}", "followers_url": "https://api.github.com/users/KennethEnevoldsen/followers", "following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}", "gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KennethEnevoldsen", "id": 23721977, "login": "KennethEnevoldsen", "node_id": "MDQ6VXNlcjIzNzIxOTc3", "organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs", "received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events", "repos_url": "https://api.github.com/users/KennethEnevoldsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions", "type": "User", "url": "https://api.github.com/users/KennethEnevoldsen" }
[]
closed
false
null
[]
null
[ "Thanks for reporting, I opened a PR to fix that", "One the PR is merged the fix will be available in the next release of `datasets`.\r\n\r\nIf you don't want to wait the next release you can still load the script from the master branch with\r\n\r\n```python\r\nload_dataset(\"dane\", script_version=\"master\")\r\n```", "If you have other questions feel free to reopen :) " ]
"2021-01-05T11:54:26Z"
"2021-01-05T14:01:09Z"
"2021-01-05T14:00:13Z"
CONTRIBUTOR
null
null
null
The DaNE dataset contains empty samples at the end. They are easy to remove using a filter, but they should probably not be there to begin with, as they can cause errors. ```python >>> import datasets [...] >>> dataset = datasets.load_dataset("dane") [...] >>> dataset["test"][-1] {'dep_ids': [], 'dep_labels': [], 'lemmas': [], 'morph_tags': [], 'ner_tags': [], 'pos_tags': [], 'sent_id': '', 'text': '', 'tok_ids': [], 'tokens': []} >>> dataset["train"][-1] {'dep_ids': [], 'dep_labels': [], 'lemmas': [], 'morph_tags': [], 'ner_tags': [], 'pos_tags': [], 'sent_id': '', 'text': '', 'tok_ids': [], 'tokens': []} ``` Best, Kenneth
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1686/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1686/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6053/comments
https://api.github.com/repos/huggingface/datasets/issues/6053/events
https://github.com/huggingface/datasets/issues/6053
1,812,635,902
I_kwDODunzps5sCqD-
6,053
Change package name from "datasets" to something less generic
{ "avatar_url": "https://avatars.githubusercontent.com/u/2124157?v=4", "events_url": "https://api.github.com/users/geajack/events{/privacy}", "followers_url": "https://api.github.com/users/geajack/followers", "following_url": "https://api.github.com/users/geajack/following{/other_user}", "gists_url": "https://api.github.com/users/geajack/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/geajack", "id": 2124157, "login": "geajack", "node_id": "MDQ6VXNlcjIxMjQxNTc=", "organizations_url": "https://api.github.com/users/geajack/orgs", "received_events_url": "https://api.github.com/users/geajack/received_events", "repos_url": "https://api.github.com/users/geajack/repos", "site_admin": false, "starred_url": "https://api.github.com/users/geajack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/geajack/subscriptions", "type": "User", "url": "https://api.github.com/users/geajack" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "This would break a lot of existing code, so we can't really do this." ]
"2023-07-19T19:53:28Z"
"2023-10-03T16:04:09Z"
"2023-10-03T16:04:09Z"
NONE
null
null
null
### Feature request I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and at my most irritable, frankly rude. My preference would be a pattern like what you get with all the other big libraries like numpy or pandas: ``` import huggingface as hf # hf.transformers, hf.datasets, hf.evaluate ``` or things like ``` import huggingface.transformers as tf # tf.load_model(), etc ``` If this isn't possible for some technical reason, at least just call the packages something like `hf_transformers` and so on. I realize this is a very big change that's probably been discussed internally already, but I'm making this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this. Sorry if this has been requested already on this issue tracker, I couldn't find anything looking for terms like "package name". Sister issues: - [transformers](https://github.com/huggingface/transformers/issues/24934) - **datasets** - [evaluate](https://github.com/huggingface/evaluate/issues/476) ### Motivation Not taking up package names the user is likely to want to use. ### Your contribution No - more a matter of internal discussion among core library authors.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6053/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6053/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/4414
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4414/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4414/comments
https://api.github.com/repos/huggingface/datasets/issues/4414/events
https://github.com/huggingface/datasets/pull/4414
1,250,546,888
PR_kwDODunzps44klhY
4,414
Rename DatasetBuilder config_name
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-05-27T09:28:02Z"
"2022-05-31T15:07:21Z"
"2022-05-31T14:58:51Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4414.diff", "html_url": "https://github.com/huggingface/datasets/pull/4414", "merged_at": "2022-05-31T14:58:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/4414.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4414" }
This PR renames the DatasetBuilder keyword argument `name` to `config_name` so that: - it avoids confusion with the attribute `DatasetBuilder.name`, which is different - it aligns with the Dataset property name `config_name`, defined in `DatasetInfoMixin.config_name` Another, simpler possibility would be to rename it to just `config` instead. Please note I have only renamed this argument of DatasetBuilder because I think this refactoring has a low impact on users: we can assume this is not a public-facing parameter, but private or related to the internals of our library. It would have a major impact to rename it also in: - load_dataset - load_dataset_builder: although this could also be considered internal... - in our CLI commands Besides the naming of `name`, I also find the naming of `path` in `load_dataset` really confusing. IMHO, they should have a simpler and more precise meaning (currently, they are too vague). I would propose (maybe for the next major release): ``` load_dataset(dataset, config,... ``` instead of ``` load_dataset(path, name,... ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4414/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4414/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5113/comments
https://api.github.com/repos/huggingface/datasets/issues/5113/events
https://github.com/huggingface/datasets/pull/5113
1,409,207,607
PR_kwDODunzps5Az0Ei
5,113
Fix filter indices when batched
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think a patch release will be necessary.", "I'm also fixing https://github.com/huggingface/datasets/issues/5111 which will lalso require a patch release" ]
"2022-10-14T11:30:03Z"
"2022-10-24T06:21:09Z"
"2022-10-14T12:11:44Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5113.diff", "html_url": "https://github.com/huggingface/datasets/pull/5113", "merged_at": "2022-10-14T12:11:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/5113.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5113" }
This PR fixes a bug introduced by: - #5030 Fix #5112.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5113/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5113/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/325/comments
https://api.github.com/repos/huggingface/datasets/issues/325/events
https://github.com/huggingface/datasets/pull/325
647,601,592
MDExOlB1bGxSZXF1ZXN0NDQxNTk3NTgw
325
Add SQuADShifts dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8953195?v=4", "events_url": "https://api.github.com/users/millerjohnp/events{/privacy}", "followers_url": "https://api.github.com/users/millerjohnp/followers", "following_url": "https://api.github.com/users/millerjohnp/following{/other_user}", "gists_url": "https://api.github.com/users/millerjohnp/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/millerjohnp", "id": 8953195, "login": "millerjohnp", "node_id": "MDQ6VXNlcjg5NTMxOTU=", "organizations_url": "https://api.github.com/users/millerjohnp/orgs", "received_events_url": "https://api.github.com/users/millerjohnp/received_events", "repos_url": "https://api.github.com/users/millerjohnp/repos", "site_admin": false, "starred_url": "https://api.github.com/users/millerjohnp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/millerjohnp/subscriptions", "type": "User", "url": "https://api.github.com/users/millerjohnp" }
[]
closed
false
null
[]
null
[ "Very cool to have this dataset, thank you for adding it :)" ]
"2020-06-29T19:11:16Z"
"2020-06-30T17:07:31Z"
"2020-06-30T17:07:31Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/325.diff", "html_url": "https://github.com/huggingface/datasets/pull/325", "merged_at": "2020-06-30T17:07:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/325.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/325" }
This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https://arxiv.org/abs/2004.14444) to facilitate evaluating model robustness to distribution shift.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/325/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/325/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4360
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4360/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4360/comments
https://api.github.com/repos/huggingface/datasets/issues/4360/events
https://github.com/huggingface/datasets/pull/4360
1,237,239,096
PR_kwDODunzps434izs
4,360
Fix example in opus_ubuntu, Add license info
{ "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leondz", "id": 121934, "login": "leondz", "node_id": "MDQ6VXNlcjEyMTkzNA==", "organizations_url": "https://api.github.com/users/leondz/orgs", "received_events_url": "https://api.github.com/users/leondz/received_events", "repos_url": "https://api.github.com/users/leondz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "type": "User", "url": "https://api.github.com/users/leondz" }
[]
closed
false
null
[]
null
[ "CI seems to fail due to languages incorrectly being flagged as invalid, I guess that's related to the currently-broken bcp47 validation (see #4304)", "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-05-16T14:22:28Z"
"2022-06-01T13:06:07Z"
"2022-06-01T12:57:09Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4360.diff", "html_url": "https://github.com/huggingface/datasets/pull/4360", "merged_at": "2022-06-01T12:57:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/4360.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4360" }
This PR * fixes a typo in the example for the `opus_ubuntu` dataset where it's mistakenly referred to as `ubuntu` * adds the declared license info for this corpus' origin * adds an example instance * updates the data origin type
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4360/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4360/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5156
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5156/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5156/comments
https://api.github.com/repos/huggingface/datasets/issues/5156/events
https://github.com/huggingface/datasets/issues/5156
1,421,667,125
I_kwDODunzps5UvOs1
5,156
Unable to download dataset using Azure Data Lake Gen 2
{ "avatar_url": "https://avatars.githubusercontent.com/u/87379512?v=4", "events_url": "https://api.github.com/users/clarissesimoes/events{/privacy}", "followers_url": "https://api.github.com/users/clarissesimoes/followers", "following_url": "https://api.github.com/users/clarissesimoes/following{/other_user}", "gists_url": "https://api.github.com/users/clarissesimoes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/clarissesimoes", "id": 87379512, "login": "clarissesimoes", "node_id": "MDQ6VXNlcjg3Mzc5NTEy", "organizations_url": "https://api.github.com/users/clarissesimoes/orgs", "received_events_url": "https://api.github.com/users/clarissesimoes/received_events", "repos_url": "https://api.github.com/users/clarissesimoes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/clarissesimoes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clarissesimoes/subscriptions", "type": "User", "url": "https://api.github.com/users/clarissesimoes" }
[]
closed
false
null
[]
null
[ "Hi ! From the `adlfs` docs, there are two filesystems you can use:\r\n> To use the Gen1 filesystem:\r\n> - known_implementations[‘adl’] = {‘class’: ‘adlfs.AzureDatalakeFileSystem’}\r\n> \r\n> To use the Gen2 filesystem:\r\n> - known_implementations[‘abfs’] = {‘class’: ‘adlfs.AzureBlobFileSystem’}\r\n\r\nIf I'm not mistaken you're using the second one - so you should use `abfs://` instead of `adl://`, and also run this at the beginning of your script:\r\n```python\r\nfrom fsspec.registry import known_implementations\r\nknown_implementations['abfs'] = {'class': 'adlfs.AzureDatalakeFileSystem'}\r\n```\r\n\r\n", "Thank you @lhoestq . Great call.\r\nUsing the default class from `known_implementations` dict solved my problem\r\n```\r\nknown_implementations[‘abfs’] = {‘class’: ‘adlfs.AzureBlobFileSystem’}\r\n```\r\nI'm closing this issue." ]
"2022-10-25T00:43:18Z"
"2022-11-17T23:37:09Z"
"2022-11-17T23:37:08Z"
NONE
null
null
null
### Describe the bug When using the DatasetBuilder method with the credentials for the cloud storage Azure Data Lake (adl) Gen2, the following error is shown: ``` Traceback (most recent call last): File "download_hf_dataset.py", line 143, in <module> main() File "download_hf_dataset.py", line 102, in main builder.download_and_prepare(save_dir, storage_options=storage_options, max_shard_size="250MB", file_format="parquet") File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/datasets/builder.py", line 671, in download_and_prepare fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options) File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/fsspec/core.py", line 639, in get_fs_token_paths fs = cls(**options) File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/fsspec/spec.py", line 76, in __call__ obj = super().__call__(*args, **kwargs) TypeError: __init__() got an unexpected keyword argument 'account_name' ``` If I don't pass the storage_options argument (leave it as None), it requires the credentials used in ADL Gen 1: `TypeError: __init__() missing 3 required positional arguments: 'tenant_id', 'client_id', and 'client_secret'` Thus, it is not possible to download a dataset from the cloud using Azure Data Lake (adl) Gen2. ### Steps to reproduce the bug Assuming that you have an account on Azure and a Storage Account that can be used to reproduce: 1. Create a dict with the format to connect to Azure Data Lake Gen 2 ``` storage_options = {"account_name": ACCOUNT_NAME, "account_key": ACCOUNT_KEY} # gen 2 filesystem ``` 2. Create a dataset builder for any HF hosted dataset ``` builder = load_dataset_builder(dataset_name) ``` 3. Try to download the dataset passing the storage_options as an argument ``` save_dir = 'adl://my_save_dir' builder.download_and_prepare(save_dir, storage_options=storage_options, max_shard_size="250MB", file_format="parquet") ``` ### Expected behavior Not seeing the error mentioned above and being able to download the dataset to the provided path on ADL ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5156/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5156/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1360
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1360/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1360/comments
https://api.github.com/repos/huggingface/datasets/issues/1360/events
https://github.com/huggingface/datasets/pull/1360
760,088,419
MDExOlB1bGxSZXF1ZXN0NTM0OTc4NzM0
1,360
add wisesight1000
{ "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cstorm125", "id": 15519308, "login": "cstorm125", "node_id": "MDQ6VXNlcjE1NTE5MzA4", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "repos_url": "https://api.github.com/users/cstorm125/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "type": "User", "url": "https://api.github.com/users/cstorm125" }
[]
closed
false
null
[]
null
[]
"2020-12-09T07:41:30Z"
"2020-12-10T14:28:41Z"
"2020-12-10T14:28:41Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1360.diff", "html_url": "https://github.com/huggingface/datasets/pull/1360", "merged_at": "2020-12-10T14:28:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/1360.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1360" }
`wisesight1000` contains Thai social media texts randomly drawn from the full `wisesight-sentiment`, tokenized by human annotators. Out of the labels `neg` (negative), `neu` (neutral), `pos` (positive), `q` (question), there are 250 samples each. Some texts are removed because they look like spam. Because these samples are representative of real-world content, we believe having these annotated samples will allow the community to robustly evaluate tokenization algorithms.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1360/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1360/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4937/comments
https://api.github.com/repos/huggingface/datasets/issues/4937/events
https://github.com/huggingface/datasets/pull/4937
1,363,426,946
PR_kwDODunzps4-cn6W
4,937
Remove deprecated identical_ok
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-09-06T15:01:24Z"
"2022-09-06T22:24:09Z"
"2022-09-06T22:21:57Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4937.diff", "html_url": "https://github.com/huggingface/datasets/pull/4937", "merged_at": "2022-09-06T22:21:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/4937.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4937" }
`huggingface-hub` says that the `identical_ok` argument of `HfApi.upload_file` is now deprecated, and will be removed soon. It even has no effect at the moment when it's passed: ```python Args: ... identical_ok (`bool`, *optional*, defaults to `True`): Deprecated: will be removed in 0.11.0. Changing this value has no effect. ... ``` There was only one occurrence of `identical_ok=False` but it's maybe not worth adding a check to verify if the files were the same. cc @mariosasko
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4937/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4937/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6459
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6459/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6459/comments
https://api.github.com/repos/huggingface/datasets/issues/6459/events
https://github.com/huggingface/datasets/pull/6459
2,017,029,380
PR_kwDODunzps5gsWlz
6,459
Retrieve cached datasets that were pushed to hub when offline
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
open
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005292 / 0.011353 (-0.006061) | 0.003811 / 0.011008 (-0.007197) | 0.064912 / 0.038508 (0.026404) | 0.061199 / 0.023109 (0.038090) | 0.242953 / 0.275898 (-0.032945) | 0.271789 / 0.323480 (-0.051691) | 0.003994 / 0.007986 (-0.003991) | 0.002723 / 0.004328 (-0.001606) | 0.049952 / 0.004250 (0.045701) | 0.039489 / 0.037052 (0.002437) | 0.261143 / 0.258489 (0.002654) | 0.288800 / 0.293841 (-0.005041) | 0.028130 / 0.128546 (-0.100416) | 0.010724 / 0.075646 (-0.064922) | 0.208218 / 0.419271 (-0.211054) | 0.036224 / 0.043533 (-0.007309) | 0.247189 / 0.255139 (-0.007950) | 0.274702 / 0.283200 (-0.008498) | 0.019714 / 0.141683 (-0.121969) | 1.134853 / 1.452155 (-0.317301) | 1.192655 / 1.492716 (-0.300062) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096391 / 0.018006 (0.078385) | 0.303802 / 0.000490 (0.303312) | 0.000219 / 0.000200 (0.000019) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019530 / 0.037411 (-0.017881) | 0.061588 / 0.014526 (0.047062) | 0.075122 / 0.176557 (-0.101434) | 0.120980 / 0.737135 (-0.616155) | 0.075807 / 0.296338 (-0.220532) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281672 / 0.215209 (0.066463) | 2.779884 / 2.077655 (0.702229) | 1.502026 / 1.504120 (-0.002094) | 1.369474 / 1.541195 (-0.171721) | 1.402694 / 
1.468490 (-0.065796) | 0.559120 / 4.584777 (-4.025657) | 2.355320 / 3.745712 (-1.390393) | 2.823987 / 5.269862 (-2.445875) | 1.763888 / 4.565676 (-2.801788) | 0.061715 / 0.424275 (-0.362560) | 0.005015 / 0.007607 (-0.002592) | 0.342669 / 0.226044 (0.116625) | 3.360651 / 2.268929 (1.091722) | 1.887277 / 55.444624 (-53.557348) | 1.555613 / 6.876477 (-5.320864) | 1.614126 / 2.142072 (-0.527946) | 0.643797 / 4.805227 (-4.161430) | 0.118365 / 6.500664 (-6.382299) | 0.042596 / 0.075469 (-0.032873) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.951383 / 1.841788 (-0.890405) | 13.169812 / 8.074308 (5.095504) | 10.772460 / 10.191392 (0.581068) | 0.133248 / 0.680424 (-0.547176) | 0.014597 / 0.534201 (-0.519604) | 0.289758 / 0.579283 (-0.289525) | 0.266324 / 0.434364 (-0.168040) | 0.334811 / 0.540337 (-0.205526) | 0.445566 / 1.386936 (-0.941370) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005668 / 0.011353 (-0.005684) | 0.003583 / 0.011008 (-0.007425) | 0.050681 / 0.038508 (0.012173) | 0.063244 / 0.023109 (0.040135) | 0.279624 / 0.275898 (0.003726) | 0.308030 / 0.323480 (-0.015450) | 0.004160 / 0.007986 (-0.003826) | 0.002633 / 0.004328 (-0.001696) | 0.048475 / 0.004250 (0.044225) | 0.043106 / 0.037052 (0.006054) | 0.283678 / 0.258489 (0.025189) | 0.309730 / 0.293841 (0.015889) | 0.030290 / 0.128546 (-0.098256) | 0.011112 / 0.075646 (-0.064534) | 0.058234 / 0.419271 (-0.361038) | 0.033553 / 0.043533 (-0.009979) | 0.279902 / 0.255139 (0.024763) | 0.298041 / 0.283200 (0.014841) | 0.019367 / 0.141683 (-0.122316) | 1.142438 / 1.452155 (-0.309717) | 1.197305 / 1.492716 (-0.295411) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090875 / 0.018006 (0.072869) | 0.301174 / 0.000490 (0.300685) | 0.000216 / 0.000200 (0.000016) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021544 / 0.037411 (-0.015867) | 0.071371 / 0.014526 (0.056846) | 0.080821 / 0.176557 (-0.095736) | 0.120054 / 0.737135 (-0.617082) | 0.082611 / 0.296338 (-0.213728) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293787 / 0.215209 (0.078578) | 2.862610 / 2.077655 (0.784955) | 1.597282 / 1.504120 (0.093162) | 1.485094 / 1.541195 (-0.056101) | 1.507384 / 1.468490 (0.038893) | 0.558470 / 4.584777 (-4.026307) | 2.414137 / 3.745712 (-1.331575) | 2.863342 / 5.269862 (-2.406520) | 1.776973 / 4.565676 (-2.788704) | 0.062296 / 0.424275 (-0.361979) | 0.004954 / 0.007607 (-0.002653) | 0.346037 / 0.226044 (0.119993) | 3.441864 / 2.268929 (1.172935) | 1.969842 / 55.444624 (-53.474783) | 1.714878 / 6.876477 (-5.161599) | 1.738141 / 2.142072 (-0.403931) | 0.645929 / 4.805227 (-4.159298) | 0.117332 / 6.500664 (-6.383332) | 0.041963 / 0.075469 (-0.033507) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983229 / 1.841788 (-0.858559) | 13.186932 / 8.074308 (5.112624) | 11.220549 / 10.191392 (1.029157) | 0.142105 / 0.680424 (-0.538319) | 0.015210 / 0.534201 (-0.518991) | 0.290055 / 0.579283 (-0.289228) | 0.274513 / 0.434364 (-0.159851) | 0.346834 / 0.540337 (-0.193504) | 0.575897 / 1.386936 (-0.811039) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d3c0694d0c47a64a3cab5d468b4d9575ad7b1d96 \"CML watermark\")\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6459). 
All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005308 / 0.011353 (-0.006045) | 0.003135 / 0.011008 (-0.007873) | 0.061820 / 0.038508 (0.023312) | 0.052005 / 0.023109 (0.028895) | 0.233507 / 0.275898 (-0.042391) | 0.257790 / 0.323480 (-0.065690) | 0.002848 / 0.007986 (-0.005138) | 0.002645 / 0.004328 (-0.001683) | 0.048379 / 0.004250 (0.044128) | 0.038320 / 0.037052 (0.001268) | 0.245470 / 0.258489 (-0.013019) | 0.274854 / 0.293841 (-0.018987) | 0.027335 / 0.128546 (-0.101211) | 0.010349 / 0.075646 (-0.065297) | 0.205872 / 0.419271 (-0.213400) | 0.035896 / 0.043533 (-0.007637) | 0.241645 / 0.255139 (-0.013494) | 0.260033 / 0.283200 (-0.023167) | 0.020325 / 0.141683 (-0.121358) | 1.116768 / 1.452155 (-0.335387) | 1.188067 / 1.492716 (-0.304649) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092622 / 0.018006 (0.074616) | 0.302663 / 0.000490 (0.302173) | 0.000227 / 0.000200 (0.000027) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018633 / 0.037411 (-0.018778) | 0.060117 / 0.014526 (0.045592) | 0.072713 / 0.176557 (-0.103844) | 0.119955 / 0.737135 (-0.617180) | 0.074698 / 0.296338 (-0.221640) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277157 / 0.215209 (0.061948) | 2.699650 / 2.077655 (0.621995) | 
1.413625 / 1.504120 (-0.090494) | 1.295900 / 1.541195 (-0.245295) | 1.306280 / 1.468490 (-0.162210) | 0.555354 / 4.584777 (-4.029423) | 2.386866 / 3.745712 (-1.358847) | 2.794069 / 5.269862 (-2.475793) | 1.736275 / 4.565676 (-2.829401) | 0.061812 / 0.424275 (-0.362464) | 0.004957 / 0.007607 (-0.002650) | 0.334533 / 0.226044 (0.108488) | 3.251096 / 2.268929 (0.982168) | 1.768193 / 55.444624 (-53.676431) | 1.473752 / 6.876477 (-5.402724) | 1.476320 / 2.142072 (-0.665753) | 0.642485 / 4.805227 (-4.162742) | 0.116986 / 6.500664 (-6.383678) | 0.042083 / 0.075469 (-0.033386) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941364 / 1.841788 (-0.900424) | 11.587408 / 8.074308 (3.513100) | 10.500198 / 10.191392 (0.308806) | 0.129126 / 0.680424 (-0.551298) | 0.015206 / 0.534201 (-0.518995) | 0.286580 / 0.579283 (-0.292703) | 0.263566 / 0.434364 (-0.170798) | 0.331662 / 0.540337 (-0.208676) | 0.431423 / 1.386936 (-0.955513) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005151 / 0.011353 (-0.006202) | 0.003425 / 0.011008 (-0.007583) | 0.049301 / 0.038508 (0.010793) | 0.052005 / 0.023109 (0.028895) | 0.289594 / 0.275898 (0.013696) | 0.312630 / 0.323480 (-0.010849) | 0.003988 / 0.007986 (-0.003998) | 0.002705 / 0.004328 (-0.001624) | 0.048529 / 0.004250 (0.044279) | 0.039645 / 0.037052 (0.002592) | 0.293430 / 0.258489 (0.034941) | 0.311697 / 0.293841 (0.017856) | 0.029044 / 0.128546 (-0.099502) | 0.010282 / 0.075646 (-0.065364) | 0.057641 / 0.419271 (-0.361630) | 0.032733 / 0.043533 (-0.010800) | 0.293553 / 0.255139 (0.038414) | 0.308850 / 0.283200 (0.025651) | 0.018452 / 0.141683 (-0.123231) | 1.147931 / 1.452155 (-0.304224) | 1.173093 / 1.492716 (-0.319623) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100862 / 0.018006 (0.082856) | 0.309286 / 0.000490 (0.308796) | 0.000223 / 0.000200 (0.000023) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021365 / 0.037411 (-0.016046) | 0.068987 / 0.014526 (0.054461) | 0.081092 / 0.176557 (-0.095465) | 0.119852 / 0.737135 (-0.617283) | 0.082850 / 0.296338 (-0.213489) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288477 / 0.215209 (0.073268) | 2.833766 / 2.077655 (0.756111) | 1.576670 / 1.504120 (0.072550) | 1.431643 / 1.541195 (-0.109552) | 1.442132 / 1.468490 (-0.026358) | 0.556079 / 4.584777 (-4.028698) | 2.465042 / 3.745712 (-1.280670) | 2.786329 / 5.269862 (-2.483532) | 1.779428 / 4.565676 (-2.786249) | 0.062278 / 0.424275 (-0.361997) | 0.004867 / 0.007607 (-0.002740) | 0.348444 / 0.226044 (0.122399) | 3.389824 / 2.268929 (1.120896) | 1.919141 / 55.444624 (-53.525484) | 1.635411 / 6.876477 (-5.241066) | 1.654869 / 2.142072 (-0.487204) | 0.634467 / 4.805227 (-4.170761) | 0.114330 / 6.500664 (-6.386334) | 0.039900 / 0.075469 (-0.035569) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970851 / 1.841788 (-0.870937) | 11.951660 / 8.074308 (3.877352) | 10.571115 / 10.191392 (0.379723) | 0.131040 / 0.680424 (-0.549384) | 0.015299 / 0.534201 (-0.518902) | 0.287851 / 0.579283 (-0.291432) | 0.278366 / 0.434364 (-0.155998) | 0.326468 / 0.540337 (-0.213870) | 0.552288 / 1.386936 (-0.834648) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8214ff2a9f706427669a6c2a01ccabffa5bf0d2b \"CML watermark\")\n" ]
"2023-11-29T16:56:15Z"
"2023-12-13T13:54:48Z"
null
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/6459.diff", "html_url": "https://github.com/huggingface/datasets/pull/6459", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6459.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6459" }
I drafted the logic to retrieve a no-script dataset from the cache. For example, it can reload datasets that were pushed to the Hub if they exist in the cache. Example: ```python >>> Dataset.from_dict({"a": [1, 2]}).push_to_hub("lhoestq/tmp") >>> load_dataset("lhoestq/tmp") DatasetDict({ train: Dataset({ features: ['a'], num_rows: 2 }) }) ``` and later, without connection: ```python >>> load_dataset("lhoestq/tmp") Using the latest cached version of the dataset from /Users/quentinlhoest/.cache/huggingface/datasets/lhoestq___tmp/*/*/0b3caccda1725efb(last modified on Wed Nov 29 16:50:27 2023) since it couldn't be found locally at lhoestq/tmp. DatasetDict({ train: Dataset({ features: ['a'], num_rows: 2 }) }) ``` Fix https://github.com/huggingface/datasets/issues/3547 ## Implementation details (EDITED) I continued in https://github.com/huggingface/datasets/pull/6493; see the changes there. TODO: - [x] tests - [ ] compatible with https://github.com/huggingface/datasets/pull/6458
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6459/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6459/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5932/comments
https://api.github.com/repos/huggingface/datasets/issues/5932/events
https://github.com/huggingface/datasets/pull/5932
1,746,249,161
PR_kwDODunzps5Sbrzo
5,932
[doc build] Use secrets
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008499 / 0.011353 (-0.002854) | 0.006155 / 0.011008 (-0.004853) | 0.124032 / 0.038508 (0.085524) | 0.037337 / 0.023109 (0.014228) | 0.389274 / 0.275898 (0.113376) | 0.427736 / 0.323480 (0.104257) | 0.006929 / 0.007986 (-0.001057) | 0.005017 / 0.004328 (0.000689) | 0.096356 / 0.004250 (0.092105) | 0.055694 / 0.037052 (0.018642) | 0.391417 / 0.258489 (0.132928) | 0.448098 / 0.293841 (0.154257) | 0.042442 / 0.128546 (-0.086105) | 0.013456 / 0.075646 (-0.062190) | 0.423502 / 0.419271 (0.004230) | 0.062919 / 0.043533 (0.019386) | 0.384317 / 0.255139 (0.129178) | 0.410851 / 0.283200 (0.127652) | 0.112807 / 0.141683 (-0.028875) | 1.746050 / 1.452155 (0.293895) | 1.977974 / 1.492716 (0.485257) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.306382 / 0.018006 (0.288375) | 0.620310 / 0.000490 (0.619820) | 0.009309 / 0.000200 (0.009109) | 0.000106 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026900 / 0.037411 (-0.010511) | 0.140125 / 0.014526 (0.125599) | 0.136295 / 0.176557 (-0.040261) | 0.207721 / 0.737135 (-0.529414) | 0.146328 / 0.296338 (-0.150011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616712 / 0.215209 (0.401503) | 6.237820 / 2.077655 (4.160166) | 2.503809 / 1.504120 (0.999689) | 2.129739 / 1.541195 (0.588544) | 2.160768 / 1.468490 
(0.692277) | 0.971273 / 4.584777 (-3.613504) | 5.687161 / 3.745712 (1.941449) | 2.738148 / 5.269862 (-2.531713) | 1.692695 / 4.565676 (-2.872981) | 0.113701 / 0.424275 (-0.310574) | 0.014809 / 0.007607 (0.007202) | 0.774795 / 0.226044 (0.548750) | 7.660012 / 2.268929 (5.391083) | 3.253036 / 55.444624 (-52.191588) | 2.607498 / 6.876477 (-4.268979) | 2.681678 / 2.142072 (0.539606) | 1.095275 / 4.805227 (-3.709952) | 0.239078 / 6.500664 (-6.261586) | 0.081034 / 0.075469 (0.005565) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.574547 / 1.841788 (-0.267240) | 18.323566 / 8.074308 (10.249258) | 19.274482 / 10.191392 (9.083090) | 0.210275 / 0.680424 (-0.470149) | 0.031843 / 0.534201 (-0.502358) | 0.514843 / 0.579283 (-0.064440) | 0.633782 / 0.434364 (0.199418) | 0.588569 / 0.540337 (0.048232) | 0.721401 / 1.386936 (-0.665535) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008866 / 0.011353 (-0.002487) | 0.006460 / 0.011008 (-0.004548) | 0.121337 / 0.038508 (0.082829) | 0.033896 / 0.023109 (0.010786) | 0.455702 / 0.275898 (0.179804) | 0.509685 / 0.323480 (0.186205) | 0.007650 / 0.007986 (-0.000336) | 0.005578 / 0.004328 (0.001250) | 0.098505 / 0.004250 (0.094255) | 0.056122 / 0.037052 (0.019069) | 0.478483 / 0.258489 (0.219994) | 0.560008 / 0.293841 (0.266167) | 0.044926 / 0.128546 (-0.083620) | 0.014562 / 0.075646 (-0.061085) | 0.115027 / 0.419271 (-0.304244) | 0.066494 / 0.043533 (0.022961) | 0.463434 / 0.255139 (0.208296) | 0.513856 / 0.283200 (0.230656) | 0.126436 / 0.141683 (-0.015247) | 1.874729 / 1.452155 (0.422575) | 1.925080 / 1.492716 (0.432364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.012672 / 0.018006 (-0.005334) | 0.615797 / 0.000490 (0.615307) | 0.001606 / 0.000200 (0.001406) | 0.000118 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031104 / 0.037411 (-0.006307) | 0.130107 / 0.014526 (0.115581) | 0.140587 / 0.176557 (-0.035970) | 0.205081 / 0.737135 (-0.532054) | 0.144068 / 0.296338 (-0.152270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.646549 / 0.215209 (0.431340) | 6.403962 / 2.077655 (4.326307) | 2.812594 / 1.504120 (1.308474) | 2.478480 / 1.541195 (0.937285) | 2.552385 / 1.468490 (1.083895) | 0.991987 / 4.584777 (-3.592790) | 5.777917 / 3.745712 (2.032205) | 5.697830 / 5.269862 (0.427969) | 2.370583 / 4.565676 (-2.195094) | 0.109905 / 0.424275 (-0.314370) | 0.013801 / 0.007607 (0.006193) | 0.799932 / 0.226044 (0.573888) | 8.155672 / 2.268929 (5.886743) | 3.711662 / 55.444624 (-51.732963) | 3.042164 / 6.876477 (-3.834312) | 3.073549 / 2.142072 (0.931477) | 1.137515 / 4.805227 (-3.667712) | 0.231266 / 6.500664 (-6.269398) | 0.080893 / 0.075469 (0.005424) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.669210 / 1.841788 (-0.172577) | 18.747144 / 8.074308 (10.672836) | 21.084589 / 10.191392 (10.893197) | 0.241379 / 0.680424 (-0.439045) | 0.029473 / 0.534201 (-0.504728) | 0.524605 / 0.579283 (-0.054678) | 0.622852 / 0.434364 (0.188488) | 0.604941 / 0.540337 (0.064604) | 0.715978 / 1.386936 (-0.670958) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#142484a60b1330359d7713e906fc9e5e30aa9f64 \"CML watermark\")\n", "Cool ! 
what about `.github/workflows/build_pr_documentation.yml` and `.github/workflows/delete_doc_comment.yml` ?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005973 / 0.011353 (-0.005380) | 0.004389 / 0.011008 (-0.006620) | 0.096076 / 0.038508 (0.057568) | 0.031569 / 0.023109 (0.008460) | 0.328300 / 0.275898 (0.052402) | 0.359356 / 0.323480 (0.035876) | 0.005378 / 0.007986 (-0.002607) | 0.003703 / 0.004328 (-0.000625) | 0.075251 / 0.004250 (0.071000) | 0.042340 / 0.037052 (0.005287) | 0.346103 / 0.258489 (0.087614) | 0.379896 / 0.293841 (0.086055) | 0.027493 / 0.128546 (-0.101053) | 0.009033 / 0.075646 (-0.066613) | 0.327829 / 0.419271 (-0.091442) | 0.064074 / 0.043533 (0.020541) | 0.337703 / 0.255139 (0.082564) | 0.355335 / 0.283200 (0.072136) | 0.101179 / 0.141683 (-0.040504) | 1.471738 / 1.452155 (0.019584) | 1.539031 / 1.492716 (0.046315) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194097 / 0.018006 (0.176091) | 0.434190 / 0.000490 (0.433701) | 0.005730 / 0.000200 (0.005530) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025634 / 0.037411 (-0.011778) | 0.105080 / 0.014526 (0.090555) | 0.116508 / 0.176557 (-0.060049) | 0.173867 / 0.737135 (-0.563269) | 0.117749 / 0.296338 (-0.178590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401566 / 0.215209 (0.186357) | 4.003558 / 
2.077655 (1.925903) | 1.802756 / 1.504120 (0.298636) | 1.604222 / 1.541195 (0.063027) | 1.656617 / 1.468490 (0.188127) | 0.523385 / 4.584777 (-4.061392) | 3.744292 / 3.745712 (-0.001420) | 1.794295 / 5.269862 (-3.475567) | 1.044690 / 4.565676 (-3.520987) | 0.064992 / 0.424275 (-0.359284) | 0.011542 / 0.007607 (0.003935) | 0.507830 / 0.226044 (0.281785) | 5.061574 / 2.268929 (2.792645) | 2.252896 / 55.444624 (-53.191729) | 1.912551 / 6.876477 (-4.963926) | 2.073510 / 2.142072 (-0.068562) | 0.642148 / 4.805227 (-4.163079) | 0.140151 / 6.500664 (-6.360513) | 0.062623 / 0.075469 (-0.012846) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180367 / 1.841788 (-0.661421) | 14.263475 / 8.074308 (6.189167) | 12.917251 / 10.191392 (2.725859) | 0.143815 / 0.680424 (-0.536608) | 0.017286 / 0.534201 (-0.516915) | 0.388411 / 0.579283 (-0.190872) | 0.430512 / 0.434364 (-0.003851) | 0.466595 / 0.540337 (-0.073742) | 0.564545 / 1.386936 (-0.822391) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006059 / 0.011353 (-0.005294) | 0.004419 / 0.011008 (-0.006590) | 0.074206 / 0.038508 (0.035697) | 0.031180 / 0.023109 (0.008071) | 0.380031 / 0.275898 (0.104133) | 0.410373 / 0.323480 (0.086893) | 0.005397 / 0.007986 (-0.002589) | 0.003952 / 0.004328 (-0.000376) | 0.074426 / 0.004250 (0.070176) | 0.046256 / 0.037052 (0.009203) | 0.385543 / 0.258489 (0.127054) | 0.430724 / 0.293841 (0.136883) | 0.028052 / 0.128546 (-0.100494) | 0.008810 / 0.075646 (-0.066836) | 0.080749 / 0.419271 (-0.338522) | 0.046746 / 0.043533 (0.003214) | 0.380325 / 0.255139 (0.125186) | 0.398901 / 0.283200 (0.115701) | 0.099607 / 0.141683 (-0.042076) | 1.433343 / 1.452155 (-0.018812) | 1.520447 / 1.492716 (0.027730) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202232 / 0.018006 (0.184225) | 0.431342 / 0.000490 (0.430852) | 0.001020 / 0.000200 (0.000820) | 0.000089 / 0.000054 (0.000035) |\n\n### 
Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028762 / 0.037411 (-0.008649) | 0.111777 / 0.014526 (0.097251) | 0.119283 / 0.176557 (-0.057273) | 0.168151 / 0.737135 (-0.568985) | 0.126093 / 0.296338 (-0.170245) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442689 / 0.215209 (0.227480) | 4.369202 / 2.077655 (2.291547) | 2.167703 / 1.504120 (0.663583) | 1.960580 / 1.541195 (0.419385) | 2.001459 / 1.468490 (0.532969) | 0.527169 / 4.584777 (-4.057608) | 3.738987 / 3.745712 (-0.006726) | 1.819002 / 5.269862 (-3.450860) | 1.082786 / 4.565676 (-3.482891) | 0.066209 / 0.424275 (-0.358066) | 0.011549 / 0.007607 (0.003942) | 0.545959 / 0.226044 (0.319915) | 5.466655 / 2.268929 (3.197727) | 2.671448 / 55.444624 (-52.773176) | 2.340968 / 6.876477 (-4.535509) | 2.358805 / 2.142072 (0.216733) | 0.649456 / 4.805227 (-4.155771) | 0.142009 / 6.500664 (-6.358655) | 0.064199 / 0.075469 (-0.011270) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259819 / 1.841788 (-0.581969) | 14.456988 / 8.074308 (6.382680) | 14.478982 / 10.191392 (4.287590) | 0.163156 / 0.680424 (-0.517268) | 0.017090 / 0.534201 (-0.517111) | 0.391339 / 0.579283 (-0.187944) | 0.422021 / 0.434364 (-0.012343) | 0.465340 / 0.540337 (-0.074997) | 0.564517 / 1.386936 (-0.822419) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#97358c88f996a65f49923ec215358044e4146a95 \"CML watermark\")\n", "> .github/workflows/delete_doc_comment.yml \r\n\r\nis already updated https://github.com/huggingface/datasets/pull/5932/files\r\n\r\n> .github/workflows/build_pr_documentation.yml\r\n\r\nindeed no changes are needed" ]
"2023-06-07T16:09:39Z"
"2023-06-09T10:16:58Z"
"2023-06-09T09:53:16Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5932.diff", "html_url": "https://github.com/huggingface/datasets/pull/5932", "merged_at": "2023-06-09T09:53:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/5932.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5932" }
Companion PR to https://github.com/huggingface/doc-builder/pull/379
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5932/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5932/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1882/comments
https://api.github.com/repos/huggingface/datasets/issues/1882/events
https://github.com/huggingface/datasets/pull/1882
808,716,576
MDExOlB1bGxSZXF1ZXN0NTczNzA4OTEw
1,882
Create Remote Manager
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
open
false
null
[]
null
[ "@lhoestq I have refactorized the logic. Instead of the previous hierarchy call (local temp file opening -> remote call -> use again temp local file logic but from within the remote caller scope), now it is flattened. Schematically:\r\n```python\r\nwith src.open() as src_file, dst.open() as dst_file:\r\n src_file.fetch(dst_file)\r\n```\r\n\r\nI have created `RemotePath` (analogue to Path) with method `.open()` that returns `FtpFile`/`HttpFile` (analogue to file-like).\r\n\r\nNow I am going to implement `RemotePath.exists()` method (analogue to the Path's method) to check if remote resource is accessible, using `Ftp/Http.head()`.", "Quick update on this one:\r\nwe discussed offline with @albertvillanova on this PR and I think using `fsspec` can help a lot, since it already implements many parts of the abstraction we need to have nice download tools for both http and ftp (and others !)" ]
"2021-02-15T17:36:24Z"
"2022-07-06T15:19:47Z"
null
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1882.diff", "html_url": "https://github.com/huggingface/datasets/pull/1882", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1882.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1882" }
Refactoring to separate the concern of remote (HTTP/FTP requests) management.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1882/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1882/timeline
null
null
true
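Illustrative sketch for PR #1882 above: a minimal Python version of the `RemotePath` / `.open()` abstraction described in the review comments. All names (`RemotePath`, `HttpFile`, `fetch`, `exists`) are hypothetical and come from the comment, not from merged code; the PR was never merged and `datasets` later relied on `fsspec` instead.

```python
# Hypothetical sketch only: the RemotePath / open() abstraction described in the
# PR comments. Names are illustrative, not the library's actual API.
import requests


class HttpFile:
    """Minimal file-like wrapper around an HTTP resource."""

    def __init__(self, url: str):
        self.url = url

    def fetch(self, dst_file) -> None:
        # Stream the remote content into a local file object.
        with requests.get(self.url, stream=True) as response:
            response.raise_for_status()
            for chunk in response.iter_content(chunk_size=1 << 20):
                dst_file.write(chunk)

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False


class RemotePath:
    """Analogue of pathlib.Path for remote (HTTP) resources."""

    def __init__(self, url: str):
        self.url = url

    def open(self) -> HttpFile:
        return HttpFile(self.url)

    def exists(self) -> bool:
        # HEAD request to check whether the resource is reachable.
        return requests.head(self.url, allow_redirects=True).ok


# Usage mirroring the schematic in the comment:
# with RemotePath(url).open() as src_file, open("local.bin", "wb") as dst_file:
#     src_file.fetch(dst_file)
```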
https://api.github.com/repos/huggingface/datasets/issues/4198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4198/comments
https://api.github.com/repos/huggingface/datasets/issues/4198/events
https://github.com/huggingface/datasets/issues/4198
1,211,456,559
I_kwDODunzps5INVwv
4,198
There is no dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1625647?v=4", "events_url": "https://api.github.com/users/wilfoderek/events{/privacy}", "followers_url": "https://api.github.com/users/wilfoderek/followers", "following_url": "https://api.github.com/users/wilfoderek/following{/other_user}", "gists_url": "https://api.github.com/users/wilfoderek/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wilfoderek", "id": 1625647, "login": "wilfoderek", "node_id": "MDQ6VXNlcjE2MjU2NDc=", "organizations_url": "https://api.github.com/users/wilfoderek/orgs", "received_events_url": "https://api.github.com/users/wilfoderek/received_events", "repos_url": "https://api.github.com/users/wilfoderek/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wilfoderek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wilfoderek/subscriptions", "type": "User", "url": "https://api.github.com/users/wilfoderek" }
[]
closed
false
null
[]
null
[]
"2022-04-21T19:19:26Z"
"2022-05-03T11:29:05Z"
"2022-04-22T06:12:25Z"
NONE
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4198/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4198/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2207/comments
https://api.github.com/repos/huggingface/datasets/issues/2207/events
https://github.com/huggingface/datasets/issues/2207
855,267,383
MDU6SXNzdWU4NTUyNjczODM=
2,207
making labels consistent across the datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
[]
closed
false
null
[]
null
[ "Hi ! The ClassLabel feature type encodes the labels as integers.\r\nThe integer corresponds to the index of the label name in the `names` list of the ClassLabel.\r\nHere that means that the labels are 'entailment' (0), 'neutral' (1), 'contradiction' (2).\r\n\r\nYou can get the label names back by using `a.features['label'].int2str(i)`.\r\n", "Hi! You can also easily reorder the label with the [`Dataset.align_labels_with_mapping`](https://huggingface.co/docs/datasets/master/en/process#align) method." ]
"2021-04-11T10:03:56Z"
"2022-06-01T16:23:08Z"
"2022-06-01T16:21:10Z"
NONE
null
null
null
Hi, for accessing the labels one can type ``` >>> a.features['label'] ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None) ``` The labels, however, are sometimes not consistent with the actual labels: for instance, in the case of XNLI, the actual labels are 0, 1, 2, but if one tries to access them as above they are entailment, neutral, contradiction. It would be great to have the labels consistent. Thanks
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2207/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2207/timeline
null
completed
false
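Hedged sketch for issue #2207 above, based on the maintainers' replies: it assumes a recent `datasets` release where `ClassLabel.int2str` and `Dataset.align_labels_with_mapping` are available.

```python
# Sketch based on the replies in the thread; assumes a recent `datasets` version.
from datasets import load_dataset

ds = load_dataset("xnli", "en", split="test")

# Integer label -> label name, via the ClassLabel feature.
label_feature = ds.features["label"]
print(label_feature.int2str(0))  # "entailment"

# Remap the integer ids so they follow a mapping you expect downstream.
label2id = {"contradiction": 0, "entailment": 1, "neutral": 2}
ds = ds.align_labels_with_mapping(label2id, "label")
```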
https://api.github.com/repos/huggingface/datasets/issues/6
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6/comments
https://api.github.com/repos/huggingface/datasets/issues/6/events
https://github.com/huggingface/datasets/issues/6
600,330,836
MDU6SXNzdWU2MDAzMzA4MzY=
6
Error when citation is not given in the DatasetInfo
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
[ "Yes looks good to me.\r\nNote that we may refactor quite strongly the `info.py` to make it a lot simpler (it's very complicated for basically a dictionary of info I think)", "No, problem ^^ It might just be a temporary fix :)", "Fixed." ]
"2020-04-15T14:14:54Z"
"2020-04-29T09:23:22Z"
"2020-04-29T09:23:22Z"
CONTRIBUTOR
null
null
null
The following error is raised when the `citation` parameter is missing when we instantiate a `DatasetInfo`: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jplu/dev/jplu/datasets/src/nlp/info.py", line 338, in __repr__ citation_pprint = _indent('"""{}"""'.format(self.citation.strip())) AttributeError: 'NoneType' object has no attribute 'strip' ``` I propose to do the following change in the `info.py` file. The method: ```python def __repr__(self): splits_pprint = _indent("\n".join(["{"] + [ " '{}': {},".format(k, split.num_examples) for k, split in sorted(self.splits.items()) ] + ["}"])) features_pprint = _indent(repr(self.features)) citation_pprint = _indent('"""{}"""'.format(self.citation.strip())) return INFO_STR.format( name=self.name, version=self.version, description=self.description, total_num_examples=self.splits.total_num_examples, features=features_pprint, splits=splits_pprint, citation=citation_pprint, homepage=self.homepage, supervised_keys=self.supervised_keys, # Proto add a \n that we strip. license=str(self.license).strip()) ``` Becomes: ```python def __repr__(self): splits_pprint = _indent("\n".join(["{"] + [ " '{}': {},".format(k, split.num_examples) for k, split in sorted(self.splits.items()) ] + ["}"])) features_pprint = _indent(repr(self.features)) ## the strip is done only is the citation is given citation_pprint = self.citation if self.citation: citation_pprint = _indent('"""{}"""'.format(self.citation.strip())) return INFO_STR.format( name=self.name, version=self.version, description=self.description, total_num_examples=self.splits.total_num_examples, features=features_pprint, splits=splits_pprint, citation=citation_pprint, homepage=self.homepage, supervised_keys=self.supervised_keys, # Proto add a \n that we strip. license=str(self.license).strip()) ``` And now it is ok. @thomwolf are you ok with this fix?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3300/comments
https://api.github.com/repos/huggingface/datasets/issues/3300/events
https://github.com/huggingface/datasets/issues/3300
1,058,644,459
I_kwDODunzps4_GaHr
3,300
❓ Dataset loading script from Hugging Face Hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pietrolesci", "id": 61748653, "login": "pietrolesci", "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "repos_url": "https://api.github.com/users/pietrolesci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "type": "User", "url": "https://api.github.com/users/pietrolesci" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
[ "Hi ! In the next version of `datasets`, your train and test splits will be correctly separated (changes from #3027) if you create a dataset repository with only your CSV files.\r\n\r\nAlso it seems that you overwrite the `data_files` and `data_dir` arguments in your code, when you instantiate the AGNewsConfig objects. Those parameters are not necessary since you already know which files you want to load.\r\n\r\nYou can find an example on how to specify which file the dataset has to download in this [example script](https://huggingface.co/datasets/lhoestq/custom_squad/blob/main/custom_squad.py#L101-L107):\r\n```python\r\n_URLS = {\r\n \"train\": \"train-v1.1.json\", # you can use a URL or a relative path from the python script to your file in the repository\r\n \"dev\": \"dev-v1.1.json\",\r\n}\r\n```\r\n```python\r\n def _split_generators(self, dl_manager):\r\n downloaded_files = dl_manager.download_and_extract(_URLS)\r\n\r\n return [\r\n datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={\"filepath\": downloaded_files[\"train\"]}),\r\n datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={\"filepath\": downloaded_files[\"dev\"]}),\r\n ]\r\n```", "Also I think the viewer will be updated when you fix the dataset script, let me know if it doesn't", "Hi @lhoestq,\r\n\r\nThanks a lot for the super quick answer!\r\n\r\nYour suggestion solves my issue. I am now able to load the dataset properly 🚀 \r\nHowever, the dataviewer is not working yet.\r\n\r\nReally, thanks a lot for your help and consideration!\r\n\r\nBest,\r\nPietro", "Great ! We'll take a look at the viewer to fix it", "@lhoestq I think I am having a related problem.\r\nMy call to load_dataset() looks like this:\r\n\r\n```\r\n datasets = load_dataset(\r\n os.path.abspath(layoutlmft.data.datasets.xfun.__file__),\r\n f\"xfun.{data_args.lang}\",\r\n additional_langs=data_args.additional_langs,\r\n keep_in_memory=True,\r\n )\r\n\r\n```\r\n\r\nMy _split_generation code is:\r\n\r\n```\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n\r\n downloaded_file = dl_manager.download_and_extract(\"https://guillaumejaume.github.io/FUNSD/dataset.zip\")\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN, gen_kwargs={\"filepath\": f\"{downloaded_file}/dataset/training_data/\"}\r\n ),\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TEST, gen_kwargs={\"filepath\": f\"{downloaded_file}/dataset/testing_data/\"}\r\n ),\r\n ]\r\n\r\n```\r\nHowever I get the error \"TypeError: _generate_examples() got an unexpected keyword argument 'filepath'\"\r\nThe path looks right and I see the data in the path so I think the only problem I have is that it doesn't like the key \"filepath\". However, the documentation (example [here](https://huggingface.co/datasets/lhoestq/custom_squad/blob/main/custom_squad.py#L101-L107)) seems to show that this is the correct parameter. 
\r\n\r\nHere is the full stack trace:\r\n\r\n```\r\nDownloading and preparing dataset xfun/xfun.en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/caseygre/.cache/huggingface/datasets/xfun/xfun.en/0.0.0/96b8cb7c57f6f822f0ab37ae3be7b82d84ac57062e774c9361ccf0a4b9ef61cc...\r\nTraceback (most recent call last):\r\n File \"/Users/caseygre/PycharmProjects/aegis-ml-new/unilm/venv-LayoutLM/lib/python3.9/site-packages/datasets/builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/caseygre/PycharmProjects/aegis-ml-new/unilm/venv-LayoutLM/lib/python3.9/site-packages/datasets/builder.py\", line 652, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/Users/caseygre/PycharmProjects/aegis-ml-new/unilm/venv-LayoutLM/lib/python3.9/site-packages/datasets/builder.py\", line 975, in _prepare_split\r\n generator = self._generate_examples(**split_generator.gen_kwargs)\r\nTypeError: _generate_examples() got an unexpected keyword argument 'filepath'\r\npython-BaseException\r\n```", "Hi ! The `gen_kwargs` dictionary is passed to `_generate_examples`, so in your case it must be defined this way:\r\n```python\r\ndef _generate_examples(self, filepath):\r\n ...\r\n```\r\n\r\nAnd here is an additional tip: you can use `os.path.join(downloaded_file, \"dataset/testing_data\")` instead of `f\"downloaded_file}/dataset/testing_data/\"` to get compatibility with Windows and streaming.\r\n\r\nIndeed Windows uses a backslash separator, not a slash, and streaming uses chained URLs (like `zip://dataset/testing_data::https://https://guillaumejaume.github.io/FUNSD/dataset.zip` for example)", "Thanks for you quick reply @lhoestq and so sorry for my very delayed response.\r\nWe have gotten around the error another way but I will try to duplicate this when I can. We may have had \"filepaths\" instead of \"filepath\" in our def of _generate_examples() and not noticed the difference. If I find a more useful answer for others I will add to this ticket so they know what the issue was.\r\nNote: we do have our own _generate_examples() defined with the same def as Quentin has. (But one version does have \"filepaths\".)\r\n", "Fixed in the viewer: https://huggingface.co/datasets/pietrolesci/ag_news" ]
"2021-11-19T15:20:52Z"
"2021-12-22T10:57:56Z"
"2021-12-22T10:57:56Z"
NONE
null
null
null
Hi there, I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to the original dataset. However, in trying to do so I have encountered certain problems as detailed below. Issues I have encountered: - Without a loading script, the train and test files are loaded together into a unique `dataset.Dataset` -> so I wrote a loading script. Also, I need a loading script otherwise I cannot specify multiple configurations - Once my loading script is working locally, I do not manage to make it work on the hub. In particular, I would like to be able to load the dataset like this ```python load_dataset("pietrolesci/ag_news", name="my_configuration") ``` Apparently, the `load_dataset` is able to pick up the loading script from the hub and run it. However, it errors because it is unable to find the files. The structure of my hub repo is the following ``` ag_news.py train.csv test.csv ``` and the loading script I specify `data_dir=Path(__file__).parent` and `data_files=DataFilesDict({"train": "train.csv", "test": "test.csv"})`. In the documentation I could not find info regarding loading a dataset from the hub using a loading script present on the hub. Any suggestion is very much appreciated. Best, Pietro Link to the hub repo: https://huggingface.co/datasets/pietrolesci/ag_news BONUS: how can I make the data viewer work in this specific case? :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3300/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3300/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2232
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2232/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2232/comments
https://api.github.com/repos/huggingface/datasets/issues/2232/events
https://github.com/huggingface/datasets/pull/2232
860,075,931
MDExOlB1bGxSZXF1ZXN0NjE3MDQyNTI4
2,232
Start filling GLUE dataset card
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "I replaced all the \"we\" and applied your suggestion", "Merging this for now, we can continue improving this card in other PRs :)" ]
"2021-04-16T18:37:37Z"
"2021-04-21T09:33:09Z"
"2021-04-21T09:33:08Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2232.diff", "html_url": "https://github.com/huggingface/datasets/pull/2232", "merged_at": "2021-04-21T09:33:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/2232.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2232" }
The dataset card was pretty much empty. I added the descriptions (mainly from TFDS since the script is the same), and I also added the tasks tags as well as examples for a subset of the tasks. cc @sgugger
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2232/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2232/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3060/comments
https://api.github.com/repos/huggingface/datasets/issues/3060/events
https://github.com/huggingface/datasets/issues/3060
1,022,936,396
I_kwDODunzps48-MVM
3,060
load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached"
{ "avatar_url": "https://avatars.githubusercontent.com/u/8942987?v=4", "events_url": "https://api.github.com/users/RylanSchaeffer/events{/privacy}", "followers_url": "https://api.github.com/users/RylanSchaeffer/followers", "following_url": "https://api.github.com/users/RylanSchaeffer/following{/other_user}", "gists_url": "https://api.github.com/users/RylanSchaeffer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RylanSchaeffer", "id": 8942987, "login": "RylanSchaeffer", "node_id": "MDQ6VXNlcjg5NDI5ODc=", "organizations_url": "https://api.github.com/users/RylanSchaeffer/orgs", "received_events_url": "https://api.github.com/users/RylanSchaeffer/received_events", "repos_url": "https://api.github.com/users/RylanSchaeffer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RylanSchaeffer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RylanSchaeffer/subscriptions", "type": "User", "url": "https://api.github.com/users/RylanSchaeffer" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @RylanSchaeffer, thanks for reporting.\r\n\r\nI'm sorry, but I was not able to reproduce your problem.\r\n\r\nNormally, the reason for this type of error is that, during your download of the data files, this was not fully complete.\r\n\r\nCould you please try to load the dataset again but forcing its redownload? Please use:\r\n```python\r\ndataset = load_dataset(\"openwebtext\", download_mode=\"FORCE_REDOWNLOAD\")\r\n```\r\n\r\nLet me know if the problem persists.", "I close this issue for the moment. Feel free to re-open it again if the problem persists." ]
"2021-10-11T17:05:27Z"
"2021-10-28T05:52:21Z"
"2021-10-28T05:52:21Z"
NONE
null
null
null
## Describe the bug When I try `load_dataset('openwebtext')`, I receive a "EOFError: Compressed file ended before the end-of-stream marker was reached" error. ## Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('openwebtext') ``` ## Expected results I expect the `dataset` variable to be properly constructed. ## Actual results ``` File "/home/rschaef/CoCoSci-Language-Distillation/distillation_v2/ratchet_learning/tasks/base.py", line 37, in create_dataset dataset_str, File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/load.py", line 1117, in load_dataset use_auth_token=use_auth_token, File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 637, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 704, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/rschaef/.cache/huggingface/modules/datasets_modules/datasets/openwebtext/85b3ae7051d2d72e7c5fdf6dfb462603aaa26e9ed506202bf3a24d261c6c40a1/openwebtext.py", line 61, in _split_generators dl_dir = dl_manager.download_and_extract(_URL) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 261, in extract partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 197, in map_nested return function(data_struct) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 316, in cached_path output_path, force_extract=download_config.force_extract File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 40, in extract self.extractor.extract(input_path, output_path, extractor=extractor) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 179, in extract return extractor.extract(input_path, output_path) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 53, in extract tar_file.extractall(output_path) File "/usr/lib/python3.6/tarfile.py", line 2010, in extractall numeric_owner=numeric_owner) File "/usr/lib/python3.6/tarfile.py", line 2052, in extract numeric_owner=numeric_owner) File "/usr/lib/python3.6/tarfile.py", line 2122, in _extract_member self.makefile(tarinfo, targetpath) File "/usr/lib/python3.6/tarfile.py", line 2171, in makefile copyfileobj(source, target, tarinfo.size, ReadError, bufsize) File "/usr/lib/python3.6/tarfile.py", line 249, in copyfileobj buf = src.read(bufsize) File "/usr/lib/python3.6/lzma.py", line 200, in read return self._buffer.read(size) File "/usr/lib/python3.6/_compression.py", line 68, in readinto data = self.read(len(byte_view)) File "/usr/lib/python3.6/_compression.py", line 99, in read raise EOFError("Compressed file ended before the " python-BaseException 
EOFError: Compressed file ended before the end-of-stream marker was reached ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial - Python version: 3.6.10 - PyArrow version: 5.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3060/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3060/timeline
null
completed
false
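Sketch of the workaround suggested in issue #3060 above: force a fresh download so a partially downloaded archive is not reused. The accepted value depends on the `datasets` version (the lowercase string shown here, or the corresponding `DownloadMode` enum member).

```python
# Force a full re-download instead of reusing a possibly truncated archive.
from datasets import load_dataset

dataset = load_dataset("openwebtext", download_mode="force_redownload")
```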
https://api.github.com/repos/huggingface/datasets/issues/1295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1295/comments
https://api.github.com/repos/huggingface/datasets/issues/1295/events
https://github.com/huggingface/datasets/pull/1295
759,375,251
MDExOlB1bGxSZXF1ZXN0NTM0MzkxNzE1
1,295
add hrenwac_para
{ "avatar_url": "https://avatars.githubusercontent.com/u/11391118?v=4", "events_url": "https://api.github.com/users/IvanZidov/events{/privacy}", "followers_url": "https://api.github.com/users/IvanZidov/followers", "following_url": "https://api.github.com/users/IvanZidov/following{/other_user}", "gists_url": "https://api.github.com/users/IvanZidov/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/IvanZidov", "id": 11391118, "login": "IvanZidov", "node_id": "MDQ6VXNlcjExMzkxMTE4", "organizations_url": "https://api.github.com/users/IvanZidov/orgs", "received_events_url": "https://api.github.com/users/IvanZidov/received_events", "repos_url": "https://api.github.com/users/IvanZidov/repos", "site_admin": false, "starred_url": "https://api.github.com/users/IvanZidov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IvanZidov/subscriptions", "type": "User", "url": "https://api.github.com/users/IvanZidov" }
[]
closed
false
null
[]
null
[]
"2020-12-08T11:40:06Z"
"2020-12-11T17:42:20Z"
"2020-12-11T17:42:20Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1295.diff", "html_url": "https://github.com/huggingface/datasets/pull/1295", "merged_at": "2020-12-11T17:42:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/1295.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1295" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1295/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1295/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3031
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3031/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3031/comments
https://api.github.com/repos/huggingface/datasets/issues/3031/events
https://github.com/huggingface/datasets/pull/3031
1,016,458,496
PR_kwDODunzps4ss9jn
3,031
Align tqdm control with cache control
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "Could you add this function to the documentation please ?\r\n\r\nYou can add it in `main_classes.rst`, and maybe add a `Tip` section in the `map` section in the `process.rst`" ]
"2021-10-05T15:18:49Z"
"2021-10-18T15:00:21Z"
"2021-10-18T14:59:30Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3031.diff", "html_url": "https://github.com/huggingface/datasets/pull/3031", "merged_at": "2021-10-18T14:59:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/3031.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3031" }
Currently, once disabled with `disable_progress_bar`, progress bars cannot be re-enabled again. To overcome this limitation, this PR introduces the `set_progress_bar_enabled` function that accepts a boolean indicating whether to display progress bars. The goal is to provide a similar API to the existing cache control API. Following the Zen of Python (😄), there should be one and preferably only one obvious way to do it, so I'm also deprecating the aforementioned `disable_progress_bar` function. Additionally, I justify the deprecation with the fact that this function has never been in the docs. Moreover, similar API changes have recently been introduced to [`tfds`](https://github.com/tensorflow/datasets/blob/a1e8b98f45b0214082b546cc967c67c43fffda55/tensorflow_datasets/core/utils/tqdm_utils.py#L98-L112). Considering the popularity of the [comment](https://github.com/huggingface/datasets/issues/1627#issuecomment-751383559) I made a while ago, this API (`set_progress_bar_enabled` and `is_progress_bar_enabled`) should be mentioned in the docs, but I'm not sure where to put it exactly. Maybe we can replace the `logging_methods` page under `package_reference` with `utility_methods` and then introduce two subsections on that page: `Logging methods` and `tqdm control`. Additionally, this PR: * adds the `disable_tqdm` keyword arg of `Dataset._map_single` to the `ignore_kwargs` list to ignore it when computing the fingerprint (forgot to add it in #2696) * deletes the unused components in `tqdm_utils.py`, which seem to be inherited from `tfds` * disables the tqdm output in the test suite. As I see it, this output doesn't seem informative, but let me know if this is not a good idea
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3031/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3031/timeline
null
null
true
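Usage sketch for the progress-bar control API added in PR #3031 above. The import path is an assumption (the functions lived under `datasets.utils` at the time); newer releases expose `datasets.enable_progress_bar()` / `datasets.disable_progress_bar()` instead.

```python
# Assumed import path for the API described in this PR; newer datasets releases
# use datasets.enable_progress_bar() / datasets.disable_progress_bar().
from datasets import load_dataset
from datasets.utils import is_progress_bar_enabled, set_progress_bar_enabled

set_progress_bar_enabled(False)      # silence tqdm bars, e.g. in scripts or CI
assert not is_progress_bar_enabled()

ds = load_dataset("glue", "sst2", split="train")
ds = ds.map(lambda ex: {"len": len(ex["sentence"])})  # runs without a progress bar

set_progress_bar_enabled(True)       # re-enable them afterwards
```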
https://api.github.com/repos/huggingface/datasets/issues/5258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5258/comments
https://api.github.com/repos/huggingface/datasets/issues/5258/events
https://github.com/huggingface/datasets/issues/5258
1,453,516,636
I_kwDODunzps5Woudc
5,258
Restore order of split names in dataset_info for canonical datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "The bulk edit is running...\r\n\r\nSee for example: \r\n- A single config: https://huggingface.co/datasets/acronym_identification/discussions/2\r\n- Multiple configs: https://huggingface.co/datasets/babi_qa/discussions/1", "TODO: Add \"dataset_info\" YAML metadata to:\r\n- [x] \"chr_en\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card\r\n - Fixing PR: https://huggingface.co/datasets/chr_en/discussions/1 \r\n- [x] \"conll2000\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [x] \"crime_and_punish\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [x] \"dart\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [x] \"iwslt2017\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [ ] \"mc4\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card\r\n- [ ] \"the_pile\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card\r\n- [ ] \"timit_asr\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card", "The bulk edit is finished." ]
"2022-11-17T15:13:15Z"
"2023-02-16T09:49:05Z"
"2022-11-19T06:51:37Z"
MEMBER
null
null
null
After a bulk edit of canonical datasets to create the YAML `dataset_info` metadata, the split names were accidentally sorted alphabetically. See for example: - https://huggingface.co/datasets/bc2gm_corpus/commit/2384629484401ecf4bb77cd808816719c424e57c Note that this order is the one appearing in the preview of the datasets. I'm making a bulk edit to align the order of the splits appearing in the metadata info with the order appearing in the loading script. Related to: - #5202
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5258/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5258/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1058/comments
https://api.github.com/repos/huggingface/datasets/issues/1058/events
https://github.com/huggingface/datasets/pull/1058
756,332,704
MDExOlB1bGxSZXF1ZXN0NTMxODk0Mjc0
1,058
added paws-x dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
[]
closed
false
null
[]
null
[]
"2020-12-03T16:06:01Z"
"2020-12-04T13:46:05Z"
"2020-12-04T13:46:05Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1058.diff", "html_url": "https://github.com/huggingface/datasets/pull/1058", "merged_at": "2020-12-04T13:46:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/1058.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1058" }
Added paws-x dataset. Updating README and tags in the dataset card in a while
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1058/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1058/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4767
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4767/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4767/comments
https://api.github.com/repos/huggingface/datasets/issues/4767/events
https://github.com/huggingface/datasets/pull/4767
1,321,843,538
PR_kwDODunzps48TCpI
4,767
Add 2.4.0 version added to docstrings
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-07-29T07:01:56Z"
"2022-07-29T11:16:49Z"
"2022-07-29T11:03:58Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4767.diff", "html_url": "https://github.com/huggingface/datasets/pull/4767", "merged_at": "2022-07-29T11:03:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/4767.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4767" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4767/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4767/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3608
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3608/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3608/comments
https://api.github.com/repos/huggingface/datasets/issues/3608/events
https://github.com/huggingface/datasets/issues/3608
1,109,310,981
I_kwDODunzps5CHr4F
3,608
Add support for continuous metrics (RMSE, MAE)
{ "avatar_url": "https://avatars.githubusercontent.com/u/50770?v=4", "events_url": "https://api.github.com/users/ck37/events{/privacy}", "followers_url": "https://api.github.com/users/ck37/followers", "following_url": "https://api.github.com/users/ck37/following{/other_user}", "gists_url": "https://api.github.com/users/ck37/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ck37", "id": 50770, "login": "ck37", "node_id": "MDQ6VXNlcjUwNzcw", "organizations_url": "https://api.github.com/users/ck37/orgs", "received_events_url": "https://api.github.com/users/ck37/received_events", "repos_url": "https://api.github.com/users/ck37/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ck37/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ck37/subscriptions", "type": "User", "url": "https://api.github.com/users/ck37" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
null
[]
null
[ "Hey @ck37 \r\n\r\nYou can always use a custom metric as explained [in this guide from HF](https://huggingface.co/docs/datasets/master/loading_metrics.html#using-a-custom-metric-script).\r\n\r\nIf this issue needs to be contributed to (for enhancing the metric API) I think [this link](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html) would be helpful for the `MAE` metric.", "You can use a local metric script just by providing its path instead of the usual shortcut name ", "#self-assign I have starting working on this issue to enhance the metric API." ]
"2022-01-20T13:35:36Z"
"2022-03-09T17:18:20Z"
"2022-03-09T17:18:20Z"
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for hate speech. Once we have this outcome our NLP models conduct regression rather than classification, so binary metrics are not relevant. The only continuous metrics available at https://huggingface.co/metrics are pearson & spearman correlation, which don't ensure that the prediction is on the same scale as the outcome. **Describe the solution you'd like** I would like to be able to tag our models on the Hub with the following metrics: - RMSE - MAE **Describe alternatives you've considered** I don't know if there are any alternatives. **Additional context** Our preprint is available here: https://arxiv.org/abs/2009.10277 . We are making it available for use in Jigsaw's Toxic Severity Rating Kaggle competition: https://www.kaggle.com/c/jigsaw-toxic-severity-rating/overview . I have our first model uploaded to the Hub at https://huggingface.co/ucberkeley-dlab/hate-measure-roberta-large Thanks, Chris
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3608/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3608/timeline
null
completed
false
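Until RMSE and MAE are available as hub metrics, a minimal local computation along the lines of the scikit-learn pointer in the comments of issue #3608 above (the values below are made up for illustration):

```python
# Local MAE / RMSE computation with scikit-learn, as suggested in the comments.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = np.array([0.12, 0.57, 0.93, 0.30])  # continuous hate-speech scores
y_pred = np.array([0.10, 0.61, 0.80, 0.35])  # model regression outputs

mae = mean_absolute_error(y_true, y_pred)
rmse = float(np.sqrt(mean_squared_error(y_true, y_pred)))  # sqrt of MSE

print(f"MAE: {mae:.4f}  RMSE: {rmse:.4f}")
```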
https://api.github.com/repos/huggingface/datasets/issues/5925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5925/comments
https://api.github.com/repos/huggingface/datasets/issues/5925/events
https://github.com/huggingface/datasets/issues/5925
1,741,941,436
I_kwDODunzps5n0-q8
5,925
Breaking API change in datasets.list_datasets caused by change in HfApi.list_datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/78868366?v=4", "events_url": "https://api.github.com/users/mtkinit/events{/privacy}", "followers_url": "https://api.github.com/users/mtkinit/followers", "following_url": "https://api.github.com/users/mtkinit/following{/other_user}", "gists_url": "https://api.github.com/users/mtkinit/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mtkinit", "id": 78868366, "login": "mtkinit", "node_id": "MDQ6VXNlcjc4ODY4MzY2", "organizations_url": "https://api.github.com/users/mtkinit/orgs", "received_events_url": "https://api.github.com/users/mtkinit/received_events", "repos_url": "https://api.github.com/users/mtkinit/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mtkinit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mtkinit/subscriptions", "type": "User", "url": "https://api.github.com/users/mtkinit" }
[]
closed
false
null
[]
null
[]
"2023-06-05T14:46:04Z"
"2023-06-19T17:22:43Z"
"2023-06-19T17:22:43Z"
NONE
null
null
null
### Describe the bug Hi all, after an update of the `datasets` library, we observed crashes in our code. We relied on `datasets.list_datasets` returning a `list`. Now, after the API of `HfApi.list_datasets` was changed so that it returns an `Iterable` instead of a `list`, `datasets.list_datasets` sometimes returns a `list` and sometimes an `Iterable`. It would be helpful to indicate that by the return type of the `datasets.list_datasets` function. Thanks, Martin ### Steps to reproduce the bug Here, the code crashed after we updated the `datasets` library: ```python # list_datasets no longer returns a list, which leads to an error when one tries to slice it for dataset in datasets.list_datasets(with_details=True)[:limit]: ... ``` ### Expected behavior It would be helpful to indicate that by the return type of the `datasets.list_datasets` function. ### Environment info Ubuntu 22.04 datasets 2.12.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5925/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5925/timeline
null
completed
false
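A defensive pattern for the behaviour described in the issue above (#5925): treat the return value of `list_datasets` as an iterable rather than slicing it. `limit` is a placeholder, and whether `with_details=True` is still accepted depends on the installed `datasets` version.

```python
from itertools import islice

import datasets

limit = 10  # placeholder

# islice works whether list_datasets returns a list or a lazy iterable
for dataset_info in islice(datasets.list_datasets(with_details=True), limit):
    print(dataset_info)
```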
https://api.github.com/repos/huggingface/datasets/issues/5654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5654/comments
https://api.github.com/repos/huggingface/datasets/issues/5654/events
https://github.com/huggingface/datasets/issues/5654
1,633,523,705
I_kwDODunzps5hXZf5
5,654
Offset overflow when executing Dataset.map
{ "avatar_url": "https://avatars.githubusercontent.com/u/118280608?v=4", "events_url": "https://api.github.com/users/jan-pair/events{/privacy}", "followers_url": "https://api.github.com/users/jan-pair/followers", "following_url": "https://api.github.com/users/jan-pair/following{/other_user}", "gists_url": "https://api.github.com/users/jan-pair/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jan-pair", "id": 118280608, "login": "jan-pair", "node_id": "U_kgDOBwzRoA", "organizations_url": "https://api.github.com/users/jan-pair/orgs", "received_events_url": "https://api.github.com/users/jan-pair/received_events", "repos_url": "https://api.github.com/users/jan-pair/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jan-pair/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jan-pair/subscriptions", "type": "User", "url": "https://api.github.com/users/jan-pair" }
[]
open
false
null
[]
null
[ "Upd. the above code works if we replace `25` with `1`, but the result value at key \"hr\" is not a tensor but a list of lists of lists of uint8.\r\n\r\nAdding `train_data.set_format(\"torch\")` after map fixes this, but the original issue remains\r\n\r\n", "As a workaround, one can replace\r\n`return {\"hr\": torch.stack([crop_transf(tensor) for _ in range(25)])}`\r\nwith\r\n`return {f\"hr_crop_{i}\": crop_transf(tensor) for i in range(25)}`\r\nand then choose appropriate crop randomly in further processing, but I still don't understand why the original approach doesn't work(\r\n" ]
"2023-03-21T09:33:27Z"
"2023-03-21T10:32:07Z"
null
NONE
null
null
null
### Describe the bug Hi, I'm trying to use `.map` method to cache multiple random crops from the image to speed up data processing during training, as the image size is too big. The map function executes all iterations, and then returns the following error: ```bash Traceback (most recent call last): File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3353, in _map_single writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 582, in finalize self.write_examples_on_file() File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 446, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 555, in write_batch self.write_table(pa_table, writer_batch_size) File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 567, in write_table pa_table = pa_table.combine_chunks() File "pyarrow/table.pxi", line 3315, in pyarrow.lib.Table.combine_chunks File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays ``` Here is the minimal code (`/home/datasets/DIV2K_train_HR` is just a folder of images that can be replaced by any appropriate): ### Steps to reproduce the bug ```python from glob import glob import torch from datasets import Dataset, Image from torchvision.transforms import PILToTensor, RandomCrop file_paths = glob("/home/datasets/DIV2K_train_HR/*") to_tensor = PILToTensor() crop_transf = RandomCrop(size=256) def prepare_data(example): tensor = to_tensor(example["image"].convert("RGB")) return {"hr": torch.stack([crop_transf(tensor) for _ in range(25)])} train_data = Dataset.from_dict({"image": file_paths}).cast_column("image", Image()) train_data = train_data.map( prepare_data, cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp", desc="Caching multiple random crops of image", remove_columns="image", ) print(train_data[0].keys(), train_data[0]["hr"].shape) ``` ### Expected behavior Cached file is stored at `"/home/datasets/DIV2K_train_HR_crops.tmp"`, output is `dict_keys(['hr']) torch.Size([25, 3, 256, 256])` ### Environment info - `datasets` version: 2.10.1 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.10 - Python version: 3.8.16 - PyArrow version: 11.0.0 - Pandas version: 1.5.3 - Pytorch version: 2.0.0+cu117 - torchvision version: 0.15.1+cu117
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5654/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5654/timeline
null
null
false
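The workaround mentioned in the comments of the issue above (#5654), sketched out: write each random crop to its own column instead of stacking 25 crops per example, which keeps the Arrow values per row small. The path and crop count are the illustrative values from the issue.

```python
from glob import glob

from datasets import Dataset, Image
from torchvision.transforms import PILToTensor, RandomCrop

file_paths = glob("/home/datasets/DIV2K_train_HR/*")  # illustrative path
to_tensor = PILToTensor()
crop_transf = RandomCrop(size=256)
n_crops = 25  # illustrative

def prepare_data(example):
    tensor = to_tensor(example["image"].convert("RGB"))
    # one column per crop instead of a single stacked tensor
    return {f"hr_crop_{i}": crop_transf(tensor) for i in range(n_crops)}

train_data = Dataset.from_dict({"image": file_paths}).cast_column("image", Image())
train_data = train_data.map(prepare_data, remove_columns="image")
train_data.set_format("torch")  # so crops come back as tensors, as noted in the comments
```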
https://api.github.com/repos/huggingface/datasets/issues/2430
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2430/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2430/comments
https://api.github.com/repos/huggingface/datasets/issues/2430/events
https://github.com/huggingface/datasets/pull/2430
907,322,595
MDExOlB1bGxSZXF1ZXN0NjU4MTg3Njkw
2,430
Add version-specific BibTeX
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "Maybe we should only keep one citation ?\r\ncc @thomwolf @yjernite ", "For info:\r\n- The one automatically generated by Zenodo is version-specific, and a new one will be generated after each release.\r\n- Zenodo has also generated a project-specific DOI (they call it *Concept DOI* as opposed to *Version DOI*), but currently this only redirects to the DOI page of the latest version.\r\n- All the information automatically generated by Zenodo can be corrected/customized if necessary.\r\n - If we decide to correct/update metadata, take into account that there are the following fields (among others): Authors, Contributors, Title, Description, Keywords, Additional Notes, License,...\r\n\r\nAccording to Zenodo: https://help.zenodo.org/#versioning\r\n> **Which DOI should I use in citations?**\r\n> \r\n> You should normally always use the DOI for the specific version of your record in citations. This is to ensure that other researchers can access the exact research artefact you used for reproducibility. By default, Zenodo uses the specific version to generate citations.\r\n> \r\n> You can use the Concept DOI representing all versions in citations when it is desirable to cite an evolving research artifact, without being specific about the version.", "Thanks for the details ! As zenodo says we should probably just show the versioned DOI. And we can remove the old citation.", "I have removed the old citation.\r\n\r\nWhat about the new one? Should we customize it? I have fixed some author names (replaced nickname with first and family names). Note that the list of authors is created automatically by Zenodo from this list: https://github.com/huggingface/datasets/graphs/contributors\r\nI do not know if this default automatic list of authors is what we want to show in the citation..." ]
"2021-05-31T10:05:42Z"
"2021-06-08T07:53:22Z"
"2021-06-08T07:53:22Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2430.diff", "html_url": "https://github.com/huggingface/datasets/pull/2430", "merged_at": "2021-06-08T07:53:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/2430.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2430" }
As pointed out by @lhoestq in #2411, after the creation of the Zenodo DOI for Datasets, a new BibTeX entry is created with each release. This PR adds a version-specific BibTeX entry, besides the existing one which is generic for the project. See version-specific BibTeX entry here: https://zenodo.org/record/4817769/export/hx#.YLSyd6j7RPY
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2430/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2430/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3107/comments
https://api.github.com/repos/huggingface/datasets/issues/3107/events
https://github.com/huggingface/datasets/pull/3107
1,030,357,527
PR_kwDODunzps4tYyhF
3,107
Add paper BibTeX citation
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-10-19T14:08:11Z"
"2021-10-19T14:26:22Z"
"2021-10-19T14:26:21Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3107.diff", "html_url": "https://github.com/huggingface/datasets/pull/3107", "merged_at": "2021-10-19T14:26:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/3107.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3107" }
Add paper BibTeX citation to README file.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3107/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3107/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4479/comments
https://api.github.com/repos/huggingface/datasets/issues/4479/events
https://github.com/huggingface/datasets/pull/4479
1,268,558,237
PR_kwDODunzps45hHtZ
4,479
Include entity positions as feature in ReCoRD
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the reply @lhoestq !\r\n\r\nI have succeeded with `datasets-cli test ./datasets/super_glue --name record --save_infos`,\r\nBut as you can see, the check ran into `FAILED tests/test_dataset_cards.py::test_changed_dataset_card[super_glue] - V...`.\r\nHow can we solve it?", "That would be neat! Let me implement it." ]
"2022-06-12T11:56:28Z"
"2022-08-19T23:23:02Z"
"2022-08-19T13:23:48Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4479.diff", "html_url": "https://github.com/huggingface/datasets/pull/4479", "merged_at": "2022-08-19T13:23:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/4479.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4479" }
https://huggingface.co/datasets/super_glue/viewer/record/validation TLDR: We need to record entity positions, which are included in the source data but excluded by the loading script, to enable efficient and effective training for ReCoRD. Currently, the loading script ignores the entity positions ("entity_start", "entity_end") and only records the entity text. This might be because the training method of the official baseline is to make n training instances from a data point by replacing \"\@ placeholder\" in the query with each entity individually. But this increases the already heavy computation several-fold. So DeBERTa uses a method that takes entity embeddings by their positions in the passage, and thus makes one training instance per data point. It is far more efficient and has proved effective for the ReCoRD task. Can anybody help me with the dataset card rendering error? Maybe @lhoestq ?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4479/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4479/timeline
null
null
true
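To make the motivation in the PR above (#4479) concrete, a toy sketch of gathering entity representations by token position, so a single training instance covers all candidate entities. The shapes and values are invented for illustration and do not come from the actual DeBERTa code.

```python
import torch

batch_size, seq_len, hidden = 2, 16, 8
token_embeddings = torch.randn(batch_size, seq_len, hidden)  # toy encoder output

# toy token indices of each entity mention in the passage
entity_positions = torch.tensor([[1, 5, 9], [2, 3, 7]])  # (batch, num_entities)

index = entity_positions.unsqueeze(-1).expand(-1, -1, hidden)
entity_embeddings = torch.gather(token_embeddings, 1, index)
print(entity_embeddings.shape)  # torch.Size([2, 3, 8])
```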
https://api.github.com/repos/huggingface/datasets/issues/2628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2628/comments
https://api.github.com/repos/huggingface/datasets/issues/2628/events
https://github.com/huggingface/datasets/pull/2628
941,676,404
MDExOlB1bGxSZXF1ZXN0Njg3NTE0NzQz
2,628
Use ETag of remote data files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
{ "closed_at": "2021-07-21T15:36:49Z", "closed_issues": 29, "created_at": "2021-06-08T18:48:33Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-05T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/6", "id": 6836458, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "open_issues": 0, "state": "closed", "title": "1.10", "updated_at": "2021-07-21T15:36:49Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/6" }
[]
"2021-07-12T05:10:10Z"
"2021-07-12T14:08:34Z"
"2021-07-12T08:40:07Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2628.diff", "html_url": "https://github.com/huggingface/datasets/pull/2628", "merged_at": "2021-07-12T08:40:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/2628.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2628" }
Use ETag of remote data files to create config ID. Related to #2616.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2628/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2628/timeline
null
null
true
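To illustrate the idea behind the PR above (#2628), deriving a cache/config identifier from the ETags of remote data files, here is a rough sketch. It is not the actual implementation in `datasets`, and the URL is a placeholder.

```python
import hashlib

import requests

def config_id_from_etags(urls):
    """Hash the ETags of remote data files into a short identifier."""
    h = hashlib.sha256()
    for url in urls:
        etag = requests.head(url, allow_redirects=True).headers.get("ETag", url)
        h.update(etag.encode("utf-8"))
    return h.hexdigest()[:16]

print(config_id_from_etags(["https://example.com/data.csv"]))  # placeholder URL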
https://api.github.com/repos/huggingface/datasets/issues/564
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/564/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/564/comments
https://api.github.com/repos/huggingface/datasets/issues/564/events
https://github.com/huggingface/datasets/pull/564
691,000,020
MDExOlB1bGxSZXF1ZXN0NDc3ODAyMTk2
564
Wait for writing in distributed metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "I agree this fix the problem for the CI where the files are always created in a new and clean temporary directory.\r\n\r\nHowever, in a general setting of a succession of fast distributed operation, the files could already exist from previous metrics runs but one process may still finish before another has even started in which case it would mix results from separate operations.\r\n\r\nI feel like the most robust way to solve this is to setup a rendez-vous on the first time we write on files and where each process will test and only finish its operation when it cannot acquire a lock on all the other processes (meaning they all have started).\r\n\r\nWhat do you think?", "What do you think of this @thomwolf ? I check all the locks before finalizing", "Ok on my side @lhoestq (cannot add you as a reviewer)", "The test doesn't pass if I add:\r\n```python\r\n import time\r\n if self.process_id == 1:\r\n time.sleep(0.5)\r\n```\r\nright before `self.add_batch` in `Metric.compute`.\r\n\r\nI'm investigating why it doesn't work in that case", "It looks like the process 1 runs `_check_all_processes_locks` correctly and then finishes and releases its lock before process 0 even managed to to run `_check_all_processes_locks` correctly.", "Strange!", "I changed the way the rendez-vous is done @thomwolf , let me know what you think.\r\nThe idea is that the master process has an additional lock `rendez_vous_lock` to tell every other process to wait for everyone to be ready before starting to write" ]
"2020-09-02T12:58:50Z"
"2020-09-09T09:13:23Z"
"2020-09-09T09:13:22Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/564.diff", "html_url": "https://github.com/huggingface/datasets/pull/564", "merged_at": "2020-09-09T09:13:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/564.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/564" }
There were CI bugs where a distributed metric would try to read all the files in process 0 while the other processes hadn't started writing. To fix that, I added a custom locking mechanism that waits for the file to exist before trying to read it.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/564/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/564/timeline
null
null
true
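A simplified sketch of the rendezvous idea discussed in the PR above (#564): the process that reads the results polls until every expected cache file exists before reading. This only illustrates the mechanism; it is not the code that was merged.

```python
import os
import time

def wait_for_files(paths, timeout=60.0, poll=0.05):
    """Block until every path exists, or raise after `timeout` seconds."""
    start = time.time()
    while not all(os.path.exists(p) for p in paths):
        if time.time() - start > timeout:
            missing = [p for p in paths if not os.path.exists(p)]
            raise TimeoutError(f"Still missing files: {missing}")
        time.sleep(poll)

# hypothetical per-process metric caches gathered by process 0
num_processes = 4
cache_files = [f"/tmp/metric-cache-{rank}.arrow" for rank in range(num_processes)]
# wait_for_files(cache_files)  # then read and reduce the partial results
```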
https://api.github.com/repos/huggingface/datasets/issues/5056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5056/comments
https://api.github.com/repos/huggingface/datasets/issues/5056/events
https://github.com/huggingface/datasets/pull/5056
1,394,713,173
PR_kwDODunzps5ADfxN
5,056
Fix broken URL's (GEM)
{ "avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4", "events_url": "https://api.github.com/users/manandey/events{/privacy}", "followers_url": "https://api.github.com/users/manandey/followers", "following_url": "https://api.github.com/users/manandey/following{/other_user}", "gists_url": "https://api.github.com/users/manandey/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/manandey", "id": 6687858, "login": "manandey", "node_id": "MDQ6VXNlcjY2ODc4NTg=", "organizations_url": "https://api.github.com/users/manandey/orgs", "received_events_url": "https://api.github.com/users/manandey/received_events", "repos_url": "https://api.github.com/users/manandey/repos", "site_admin": false, "starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manandey/subscriptions", "type": "User", "url": "https://api.github.com/users/manandey" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5056). All of your documentation changes will be reflected on that endpoint.", "Thanks, @manandey. We have removed all dataset scripts from this repo. Subsequent PRs should be opened directly on the Hugging Face Hub." ]
"2022-10-03T13:13:22Z"
"2022-10-04T13:49:00Z"
"2022-10-04T13:48:59Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5056.diff", "html_url": "https://github.com/huggingface/datasets/pull/5056", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5056.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5056" }
This PR fixes the broken URLs in GEM. cc @lhoestq, @albertvillanova
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5056/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5056/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3814/comments
https://api.github.com/repos/huggingface/datasets/issues/3814/events
https://github.com/huggingface/datasets/pull/3814
1,158,518,995
PR_kwDODunzps4z5Zk4
3,814
Handle Nones in PyArrow struct
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "Looks like I added my comments while you were editing - sorry about that" ]
"2022-03-03T15:03:35Z"
"2022-03-03T16:37:44Z"
"2022-03-03T16:37:43Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3814.diff", "html_url": "https://github.com/huggingface/datasets/pull/3814", "merged_at": "2022-03-03T16:37:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/3814.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3814" }
This PR fixes an issue introduced by #3575 where `None` values stored in PyArrow arrays/structs would get ignored by `cast_storage` or by the `pa.array(cast_to_python_objects(..))` pattern. To fix the former, it also bumps the minimum required PyArrow version to v5.0.0 in order to use the `mask` param in `pa.StructArray`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3814/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3814/timeline
null
null
true
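For reference on the `mask` parameter mentioned in the PR above (#3814), a small standalone example of building a PyArrow struct array whose null entries are preserved (requires `pyarrow>=5.0.0`). The field names are arbitrary.

```python
import pyarrow as pa

x = pa.array([1, 2, 3])
y = pa.array(["a", "b", "c"])
mask = pa.array([False, True, False])  # True marks struct entries that should be null

arr = pa.StructArray.from_arrays([x, y], names=["x", "y"], mask=mask)
print(arr.is_null())  # [false, true, false]
```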
https://api.github.com/repos/huggingface/datasets/issues/4291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4291/comments
https://api.github.com/repos/huggingface/datasets/issues/4291/events
https://github.com/huggingface/datasets/issues/4291
1,227,777,500
I_kwDODunzps5JLmXc
4,291
Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message
{ "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leondz", "id": 121934, "login": "leondz", "node_id": "MDQ6VXNlcjEyMTkzNA==", "organizations_url": "https://api.github.com/users/leondz/orgs", "received_events_url": "https://api.github.com/users/leondz/received_events", "repos_url": "https://api.github.com/users/leondz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "type": "User", "url": "https://api.github.com/users/leondz" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @leondz, thanks for reporting.\r\n\r\nIndeed, the dataset viewer relies on the dataset being streamable (passing `streaming=True` to `load_dataset`). Whereas most of the datasets are streamable out of the box (thanks to our implementation of streaming), there are still some exceptions.\r\n\r\nIn particular, in your case, that is due to the data file being a TAR archive. This format is not streamable out of the box (it does not allow random access to the archived files), but we use a trick to allow streaming: using `dl_manager.iter_archive`.\r\n\r\nLet me know if you need some help: I could push a commit to your repo with the fix.", "Ah, right! The preview is working now, but this explanation is good to know, thank you. I'll prefer formats with random file access supported in datasets.utils.extract in future, and try out this fix for the tarfiles :)" ]
"2022-05-06T12:03:27Z"
"2022-05-09T08:25:58Z"
"2022-05-09T08:25:58Z"
CONTRIBUTOR
null
null
null
### Link https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train ### Description The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss? ### Owner Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4291/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4291/timeline
null
completed
false
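The fix suggested in the comments of the issue above (#4291) relies on `dl_manager.iter_archive`. Below is a schematic builder showing that pattern; the archive URL, file layout and parsing are placeholders rather than the real loading script.

```python
import datasets

class MyTarDataset(datasets.GeneratorBasedBuilder):
    """Schematic builder showing the dl_manager.iter_archive streaming pattern."""

    def _info(self):
        return datasets.DatasetInfo(features=datasets.Features({"text": datasets.Value("string")}))

    def _split_generators(self, dl_manager):
        archive = dl_manager.download("https://example.com/data.tar.gz")  # placeholder URL
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive yields (path_inside_archive, file_object) pairs sequentially,
        # which is what makes TAR archives usable in streaming mode
        for idx, (path, f) in enumerate(files):
            if path.endswith(".txt"):  # placeholder file selection
                yield idx, {"text": f.read().decode("utf-8")}
```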
https://api.github.com/repos/huggingface/datasets/issues/1722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1722/comments
https://api.github.com/repos/huggingface/datasets/issues/1722/events
https://github.com/huggingface/datasets/pull/1722
783,921,679
MDExOlB1bGxSZXF1ZXN0NTUzMTk3MTg4
1,722
Added unfiltered versions of the Wiki-Auto training data for the GEM simplification task.
{ "avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4", "events_url": "https://api.github.com/users/mounicam/events{/privacy}", "followers_url": "https://api.github.com/users/mounicam/followers", "following_url": "https://api.github.com/users/mounicam/following{/other_user}", "gists_url": "https://api.github.com/users/mounicam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mounicam", "id": 11708999, "login": "mounicam", "node_id": "MDQ6VXNlcjExNzA4OTk5", "organizations_url": "https://api.github.com/users/mounicam/orgs", "received_events_url": "https://api.github.com/users/mounicam/received_events", "repos_url": "https://api.github.com/users/mounicam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mounicam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mounicam/subscriptions", "type": "User", "url": "https://api.github.com/users/mounicam" }
[]
closed
false
null
[]
null
[ "The current version of the Wiki-Auto dataset contains a filtered version of the aligned dataset. The commit adds unfiltered versions of the data that can be useful for the GEM task participants." ]
"2021-01-12T05:26:04Z"
"2021-01-12T18:14:53Z"
"2021-01-12T17:35:57Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1722.diff", "html_url": "https://github.com/huggingface/datasets/pull/1722", "merged_at": "2021-01-12T17:35:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/1722.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1722" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1722/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1722/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/998
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/998/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/998/comments
https://api.github.com/repos/huggingface/datasets/issues/998/events
https://github.com/huggingface/datasets/pull/998
755,235,356
MDExOlB1bGxSZXF1ZXN0NTMwOTg2MTQ3
998
adding yahoo_answers_qa
{ "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patil-suraj", "id": 27137566, "login": "patil-suraj", "node_id": "MDQ6VXNlcjI3MTM3NTY2", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "repos_url": "https://api.github.com/users/patil-suraj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "type": "User", "url": "https://api.github.com/users/patil-suraj" }
[]
closed
false
null
[]
null
[]
"2020-12-02T12:33:54Z"
"2020-12-02T13:45:40Z"
"2020-12-02T13:26:06Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/998.diff", "html_url": "https://github.com/huggingface/datasets/pull/998", "merged_at": "2020-12-02T13:26:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/998.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/998" }
Adding Yahoo Answers QA dataset. More info: https://ciir.cs.umass.edu/downloads/nfL6/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/998/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/998/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4715/comments
https://api.github.com/repos/huggingface/datasets/issues/4715/events
https://github.com/huggingface/datasets/pull/4715
1,309,405,980
PR_kwDODunzps47pSui
4,715
Fix POS tags
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "CI failures are about missing content in the dataset cards or bad tags, and this is unrelated to this PR. Merging :)" ]
"2022-07-19T11:52:54Z"
"2022-07-19T12:54:34Z"
"2022-07-19T12:41:16Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4715.diff", "html_url": "https://github.com/huggingface/datasets/pull/4715", "merged_at": "2022-07-19T12:41:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/4715.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4715" }
We're now using `part-of-speech` and not `part-of-speech-tagging`; see the discussion here: https://github.com/huggingface/datasets/commit/114c09aff2fa1519597b46fbcd5a8e0c0d3ae020#r78794777
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4715/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4715/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1154
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1154/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1154/comments
https://api.github.com/repos/huggingface/datasets/issues/1154/events
https://github.com/huggingface/datasets/pull/1154
757,651,669
MDExOlB1bGxSZXF1ZXN0NTMyOTk2MDQ3
1,154
Opus sardware
{ "avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4", "events_url": "https://api.github.com/users/spatil6/events{/privacy}", "followers_url": "https://api.github.com/users/spatil6/followers", "following_url": "https://api.github.com/users/spatil6/following{/other_user}", "gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/spatil6", "id": 6419011, "login": "spatil6", "node_id": "MDQ6VXNlcjY0MTkwMTE=", "organizations_url": "https://api.github.com/users/spatil6/orgs", "received_events_url": "https://api.github.com/users/spatil6/received_events", "repos_url": "https://api.github.com/users/spatil6/repos", "site_admin": false, "starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/spatil6/subscriptions", "type": "User", "url": "https://api.github.com/users/spatil6" }
[]
closed
false
null
[]
null
[]
"2020-12-05T10:38:02Z"
"2020-12-05T17:05:45Z"
"2020-12-05T17:05:45Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1154.diff", "html_url": "https://github.com/huggingface/datasets/pull/1154", "merged_at": "2020-12-05T17:05:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/1154.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1154" }
Added the Opus sardware dataset for machine translation from English to Sardinian. For more info: http://opus.nlpl.eu/sardware.php
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1154/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1154/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/701/comments
https://api.github.com/repos/huggingface/datasets/issues/701/events
https://github.com/huggingface/datasets/pull/701
713,485,757
MDExOlB1bGxSZXF1ZXN0NDk2Nzk2MTQ1
701
Add rouge 2 and rouge Lsum to rouge metric outputs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Oups too late, sorry" ]
"2020-10-02T09:35:46Z"
"2020-10-02T09:55:14Z"
"2020-10-02T09:52:18Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/701.diff", "html_url": "https://github.com/huggingface/datasets/pull/701", "merged_at": "2020-10-02T09:52:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/701.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/701" }
Continuation of #700. Rouge 2 and Rouge Lsum were missing from Rouge's outputs. Rouge Lsum is also useful to evaluate Rouge L for sentences containing `\n`. Fix #617
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/701/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/701/timeline
null
null
true
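A small usage sketch for the ROUGE variants added in the PR above (#701): `rougeLsum` treats `\n` as a sentence separator, so multi-sentence texts should be newline-joined. The exact output structure depends on the `datasets`/`rouge_score` versions installed.

```python
from datasets import load_metric

rouge = load_metric("rouge")

predictions = ["the cat sat on the mat.\nit was happy."]
references = ["the cat sat on the mat.\nthe cat was happy."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores["rouge2"], scores["rougeLsum"])
```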
https://api.github.com/repos/huggingface/datasets/issues/4085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4085/comments
https://api.github.com/repos/huggingface/datasets/issues/4085/events
https://github.com/huggingface/datasets/issues/4085
1,190,621,345
I_kwDODunzps5G93Ch
4,085
datasets.set_progress_bar_enabled(False) not working in datasets v2
{ "avatar_url": "https://avatars.githubusercontent.com/u/3381112?v=4", "events_url": "https://api.github.com/users/virilo/events{/privacy}", "followers_url": "https://api.github.com/users/virilo/followers", "following_url": "https://api.github.com/users/virilo/following{/other_user}", "gists_url": "https://api.github.com/users/virilo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/virilo", "id": 3381112, "login": "virilo", "node_id": "MDQ6VXNlcjMzODExMTI=", "organizations_url": "https://api.github.com/users/virilo/orgs", "received_events_url": "https://api.github.com/users/virilo/received_events", "repos_url": "https://api.github.com/users/virilo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/virilo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/virilo/subscriptions", "type": "User", "url": "https://api.github.com/users/virilo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Now, I can't find any reference to set_progress_bar_enabled in the code.\r\n\r\nI think it have been deleted", "Hi @virilo,\r\n\r\nPlease note that since `datasets` version 2.0.0, we have aligned with `transformers` the management of the progress bar (among other things):\r\n- #3897\r\n\r\nNow, you should update your code to use `datasets.logging.disable_progress_bar`.\r\n\r\nYou have more info in our docs: [Logging methods](https://huggingface.co/docs/datasets/package_reference/logging_methods)", "One important thing for beginner like me is: from datasets.utils.logging import disable_progress_bar\r\nDo not forget the 'utils' or you will waste a long time like me...." ]
"2022-04-02T12:40:10Z"
"2022-09-17T02:18:03Z"
"2022-04-04T06:44:34Z"
NONE
null
null
null
## Describe the bug datasets.set_progress_bar_enabled(False) not working in datasets v2 ## Steps to reproduce the bug ```python datasets.set_progress_bar_enabled(False) ``` ## Expected results datasets not using any progress bar ## Actual results AttributeError: module 'datasets' has no attribute 'set_progress_bar_enabled' ## Environment info datasets version 2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4085/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4085/timeline
null
completed
false
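Following the maintainers' answer in the issue above (#4085), the replacement API in `datasets>=2.0.0` looks roughly like this:

```python
from datasets.utils.logging import disable_progress_bar, enable_progress_bar

disable_progress_bar()   # replaces datasets.set_progress_bar_enabled(False)
# ... run maps / downloads without progress bars ...
enable_progress_bar()    # replaces datasets.set_progress_bar_enabled(True)
```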
https://api.github.com/repos/huggingface/datasets/issues/1793
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1793/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1793/comments
https://api.github.com/repos/huggingface/datasets/issues/1793/events
https://github.com/huggingface/datasets/pull/1793
796,940,299
MDExOlB1bGxSZXF1ZXN0NTY0MDMzMjk0
1,793
Minor fix the docstring of load_metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-01-29T14:47:35Z"
"2021-01-29T16:53:32Z"
"2021-01-29T16:53:32Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1793.diff", "html_url": "https://github.com/huggingface/datasets/pull/1793", "merged_at": "2021-01-29T16:53:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/1793.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1793" }
Minor fix: - duplicated attributes - format fix
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1793/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1793/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5380
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5380/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5380/comments
https://api.github.com/repos/huggingface/datasets/issues/5380/events
https://github.com/huggingface/datasets/issues/5380
1,504,404,043
I_kwDODunzps5Zq2JL
5,380
Improve dataset `.skip()` speed in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4", "events_url": "https://api.github.com/users/versae/events{/privacy}", "followers_url": "https://api.github.com/users/versae/followers", "following_url": "https://api.github.com/users/versae/following{/other_user}", "gists_url": "https://api.github.com/users/versae/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/versae", "id": 173537, "login": "versae", "node_id": "MDQ6VXNlcjE3MzUzNw==", "organizations_url": "https://api.github.com/users/versae/orgs", "received_events_url": "https://api.github.com/users/versae/received_events", "repos_url": "https://api.github.com/users/versae/repos", "site_admin": false, "starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/versae/subscriptions", "type": "User", "url": "https://api.github.com/users/versae" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues", "id": 3761482852, "name": "good second issue", "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue" } ]
open
false
null
[]
null
[ "Hi! I agree `skip` can be inefficient to use in the current state.\r\n\r\nTo make it fast, we could use \"statistics\" stored in Parquet metadata and read only the chunks needed to form a dataset. \r\n\r\nAnd thanks to the \"datasets-server\" project, which aims to store the Parquet versions of the Hub datasets (only the smaller datasets are covered currently), this solution can also be applied to datasets stored in formats other than Parquet. (cc @severo)", "@mariosasko do the current parquet files created by the datasets-server already have the required \"statistics\"? If not, please open an issue on https://github.com/huggingface/datasets-server with some details to make sure we implement it.", "Yes, nothing has to be changed on the datasets-server side. What I mean by \"statistics\" is that we can use the \"row_group\" metadata embedded in a Parquet file (by default) to fetch the requested rows more efficiently.", "Glad to see the feature could be of interest. \r\n\r\nI'm sure there are many possible ways to implement this feature. I don't know enough about the datasets-server, but I guess that it is not instantaneous, in the sense that user-owned private datasets might need hours or days until they are ported to the datasets-server (if at all), which could be cumbersome. Having optionally that information in the `dataset_infos.json` file would make it easier for users to control the skip process a bit.", "re: statistics:\r\n\r\n- https://arrow.apache.org/docs/python/generated/pyarrow.parquet.FileMetaData.html\r\n- https://arrow.apache.org/docs/python/generated/pyarrow.parquet.RowGroupMetaData.html\r\n\r\n```python\r\n>>> import pyarrow.parquet as pq\r\n>>> import hffs\r\n>>> fs = hffs.HfFileSystem(\"glue\", repo_type=\"dataset\", revision=\"refs/convert/parquet\")\r\n>>> metadata = pq.read_metadata(\"ax/glue-test.parquet\", filesystem=fs)\r\n>>> metadata\r\n<pyarrow._parquet.FileMetaData object at 0x7f4537cec400>\r\n  created_by: parquet-cpp-arrow version 7.0.0\r\n  num_columns: 4\r\n  num_rows: 1104\r\n  num_row_groups: 2\r\n  format_version: 1.0\r\n  serialized_size: 2902\r\n>>> metadata.row_group(0)\r\n<pyarrow._parquet.RowGroupMetaData object at 0x7f45564bcbd0>\r\n  num_columns: 4\r\n  num_rows: 1000\r\n  total_byte_size: 164474\r\n>>> metadata.row_group(1)\r\n<pyarrow._parquet.RowGroupMetaData object at 0x7f455005c400>\r\n  num_columns: 4\r\n  num_rows: 104\r\n  total_byte_size: 13064\r\n```", "> user-owned private datasets might need hours or days until they are ported to the datasets-server (if at all)\r\n\r\nprivate datasets are not supported yet (https://github.com/huggingface/datasets-server/issues/39)", "@versae `Dataset.push_to_hub` writes shards in Parquet, so this solution would also work for such datasets (immediately after the push). ", "@mariosasko that is right. However, there are still a good amount of datasets for which the shards are created manually. In our very specific case, we create medium-sized datasets (rarely over 100-200GB) of both text and audio, we prepare the shards by hand and then upload them. It would be great to have immediate access to this download skipping feature for them too.", "From looking at Arrow's source, it seems Parquet stores metadata at the end, which means one needs to iterate over a Parquet file's data before accessing its metadata. We could mimic Dask to address this \"limitation\" and write metadata in a `_metadata`/`_common_metadata` file in `to_parquet`/`push_to_hub`, which we could then use to optimize reads (if present). \r\nPlus, it's handy that PyArrow can also parse these metadata files.", "So if Parquet metadata needs to be in its own file anyway, why not implement this skipping feature by storing the example counts per shard in `dataset_infos.json`? That would allow:\r\n- Support both private and public datasets\r\n- Immediate access to the feature upon uploading of shards\r\n- Use any dataset, not only those uploaded using `.push_to_hub()`\r\n\r\nA proper Parquet metadata file could still be created and \"overwrite\" the `dataset_infos.json` info in the datasets-server." ]
"2022-12-20T11:25:23Z"
"2023-03-08T10:47:12Z"
null
CONTRIBUTOR
null
null
null
### Feature request Add extra information to the `dataset_infos.json` file to include the number of samples/examples in each shard, for example in a new field `num_examples` alongside `num_bytes`. The `.skip()` function could use this information to ignore the download of a shard when in streaming mode, which AFAICT it should speed up the skipping process. ### Motivation When resuming from a checkpoint after a crashed run, using `dataset.skip()` is very convenient to recover the exact state of the data and to not train again over the same examples (assuming same seed, no shuffling). However, I have noticed that for audio datasets in streaming mode this is very costly in terms of time, as shards need to be downloaded every time before skipping the right number of examples. ### Your contribution I took a look already at the code, but it seems a change like this is way deeper than I am able to manage, as it touches the library in several parts. I could give it a try but might need some guidance on the internals.
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/5380/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5380/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1629/comments
https://api.github.com/repos/huggingface/datasets/issues/1629/events
https://github.com/huggingface/datasets/pull/1629
774,255,716
MDExOlB1bGxSZXF1ZXN0NTQ1MjAwNTQ3
1,629
add wongnai_reviews test set labels
{ "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cstorm125", "id": 15519308, "login": "cstorm125", "node_id": "MDQ6VXNlcjE1NTE5MzA4", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "repos_url": "https://api.github.com/users/cstorm125/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "type": "User", "url": "https://api.github.com/users/cstorm125" }
[]
closed
false
null
[]
null
[]
"2020-12-24T08:02:31Z"
"2020-12-28T17:23:39Z"
"2020-12-28T17:23:39Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1629.diff", "html_url": "https://github.com/huggingface/datasets/pull/1629", "merged_at": "2020-12-28T17:23:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/1629.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1629" }
- add test set labels provided by @ekapolc - refactor `star_rating` to a `datasets.features.ClassLabel` field
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1629/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1629/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3473/comments
https://api.github.com/repos/huggingface/datasets/issues/3473/events
https://github.com/huggingface/datasets/issues/3473
1,086,937,610
I_kwDODunzps5AyVoK
3,473
Iterating over a vision dataset doesn't decode the images
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
closed
false
null
[]
null
[ "As discussed, I remember I set `decoded=False` here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed.", "> I set decoded=False here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed\r\n\r\nhttps://github.com/huggingface/datasets/pull/3430 will add more control to decoding, so I think it's OK to enable decoding in `__iter__` for now. After we merge the linked PR, the user can easily disable it again.", "@mariosasko I wonder why there is no issue in `Audio` feature with decoding disabled in `__iter__`, whereas there is in `Image` feature.\r\n\r\nEnabling decoding in `__iter__` will make fail Audio regressions tests: https://github.com/huggingface/datasets/runs/4608657230?check_suite_focus=true\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_not_decoded\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_decoded\r\n========================= 2 failed, 15 passed in 8.37s =========================", "Please also note that the regression tests were implemented in accordance with the specifications:\r\n- when doing a `map` (wich calls `__iter__`) of a function that doesn't access the audio field, the decoding should be disabled; this is why the decoding is disabled in `__iter__` (and only enabled in `__getitem__`).", "> I wonder why there is no issue in Audio feature with decoding disabled in __iter__, whereas there is in Image feature.\r\n\r\n@albertvillanova Not sure if I understand this part. Currently, both the Image and the Audio feature don't decode data in `__iter__`, so their behavior is aligned there.\r\n", "Therefore, this is not an issue, neither for Audio nor Image feature.\r\n\r\nCould you please elaborate more on the expected use case? @lhoestq @NielsRogge \r\n\r\nThe expected use cases (in accordance with the specs: see #2324):\r\n- decoding should be enabled when accessing a specific item (`__getitem__`)\r\n- decoding should be disabled while iterating (`__iter__`) to allow preprocessing of non-audio/image features (like label or text, for example) using `.map`\r\n- decoding should be enabled in a `.map` only if the `.map` function accesses the audio/image feature (implemented using `LazyDict`)", "For me it's not an issue, actually. I just (mistakenly) tried to iterate over a PyTorch Dataset instead of a PyTorch DataLoader, \r\n\r\ni.e. I did this:\r\n\r\n`batch = next(iter(train_ds)) `\r\n\r\nwhereas I actually wanted to do\r\n\r\n`batch = next(iter(train_dataloader))`\r\n\r\nand then it turned out that in the first case, the image was a string of bytes rather than a Pillow image, hence Quentin opened an issue.", "Thanks @NielsRogge for the context.\r\n\r\nSo IMO everything is working as expected.\r\n\r\nI'm closing this issue. 
Feel free to reopen it again if further changes of the specs should be addressed.", "Thanks for the details :)\r\n\r\nI still think that it's unexpected to get different results when doing\r\n```python\r\nfor i in range(len(dataset)):\r\n sample = dataset[i]\r\n```\r\nand\r\n```python\r\nfor sample in dataset:\r\n pass\r\n```\r\neven though I understand that if you don't need to decode the data, then decoding image or audio data when iterating is a waste of time and resources.\r\n\r\nBut in this case users can still drop the column that need decoding to get the full speed back no ?" ]
"2021-12-22T15:26:32Z"
"2021-12-27T14:13:21Z"
"2021-12-23T15:21:57Z"
MEMBER
null
null
null
## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"] assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # passes first_image = next(iter(mnist))["image"] assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # fails ``` ## Expected results The image should be decoded, as a PIL Image ## Actual results We get a dictionary ``` {'bytes': b'\x89PNG\r\n\x1a\n\x00..., 'path': None} ``` ## Environment info - `datasets` version: 1.17.1.dev0 - Platform: Darwin-20.6.0-x86_64-i386-64bit - Python version: 3.7.2 - PyArrow version: 6.0.0 The bug also exists in 1.17.0 ## Investigation I think the issue is that decoding is disabled in `__iter__`: https://github.com/huggingface/datasets/blob/dfe5b73387c5e27de6a16b0caeb39d3b9ded66d6/src/datasets/arrow_dataset.py#L1651-L1661 Do you remember why it was disabled in the first place @albertvillanova ? Also cc @mariosasko @NielsRogge
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3473/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3473/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/291/comments
https://api.github.com/repos/huggingface/datasets/issues/291/events
https://github.com/huggingface/datasets/pull/291
642,688,450
MDExOlB1bGxSZXF1ZXN0NDM3NjM1NjMy
291
break statement not required
{ "avatar_url": "https://avatars.githubusercontent.com/u/12967587?v=4", "events_url": "https://api.github.com/users/mayurnewase/events{/privacy}", "followers_url": "https://api.github.com/users/mayurnewase/followers", "following_url": "https://api.github.com/users/mayurnewase/following{/other_user}", "gists_url": "https://api.github.com/users/mayurnewase/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mayurnewase", "id": 12967587, "login": "mayurnewase", "node_id": "MDQ6VXNlcjEyOTY3NTg3", "organizations_url": "https://api.github.com/users/mayurnewase/orgs", "received_events_url": "https://api.github.com/users/mayurnewase/received_events", "repos_url": "https://api.github.com/users/mayurnewase/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mayurnewase/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayurnewase/subscriptions", "type": "User", "url": "https://api.github.com/users/mayurnewase" }
[]
closed
false
null
[]
null
[ "I guess,test failing due to connection error?", "We just fixed the other dataset on master. Could you rebase from master and push to rerun the CI ?", "If I'm not wrong this function returns None if no main class was found.\r\nI think it makes things less clear not to have a return at the end of the function.\r\nI guess we can have one return in the for loop instead of the break statement, AND one return at the end to explicitly return None.\r\nWhat do you think ?" ]
"2020-06-22T01:40:55Z"
"2020-06-23T17:57:58Z"
"2020-06-23T09:37:02Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/291.diff", "html_url": "https://github.com/huggingface/datasets/pull/291", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/291.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/291" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/291/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/291/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3403
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3403/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3403/comments
https://api.github.com/repos/huggingface/datasets/issues/3403/events
https://github.com/huggingface/datasets/issues/3403
1,073,622,120
I_kwDODunzps4__ixo
3,403
Cannot import name 'maybe_sync'
{ "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "events_url": "https://api.github.com/users/KMFODA/events{/privacy}", "followers_url": "https://api.github.com/users/KMFODA/followers", "following_url": "https://api.github.com/users/KMFODA/following{/other_user}", "gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KMFODA", "id": 35491698, "login": "KMFODA", "node_id": "MDQ6VXNlcjM1NDkxNjk4", "organizations_url": "https://api.github.com/users/KMFODA/orgs", "received_events_url": "https://api.github.com/users/KMFODA/received_events", "repos_url": "https://api.github.com/users/KMFODA/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions", "type": "User", "url": "https://api.github.com/users/KMFODA" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi ! Can you try updating `fsspec` ? The minimum version is `2021.05.0`", "hey @lhoestq. I'm using `fsspec-2021.11.1` but still getting that error.", "Maybe this discussion can help:\r\n\r\nhttps://github.com/fsspec/filesystem_spec/issues/597#issuecomment-958646964", "Thanks @lhoestq. Downgrading `fsspec and s3fs` to `2021.10` fixed this issue!" ]
"2021-12-07T17:57:59Z"
"2021-12-17T07:00:35Z"
"2021-12-17T07:00:35Z"
CONTRIBUTOR
null
null
null
## Describe the bug Cannot seem to import datasets when running run_summarizer.py script on a VM set up on ovhcloud ## Steps to reproduce the bug ```python from datasets import load_dataset ``` ## Expected results No error ## Actual results Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/conda/lib/python3.7/site-packages/datasets/__init__.py", line 34, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 48, in <module> from .arrow_writer import ArrowWriter, OptimizedTypedSequence File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py", line 27, in <module> from .features import ( File "/opt/conda/lib/python3.7/site-packages/datasets/features/__init__.py", line 2, in <module> from .audio import Audio File "/opt/conda/lib/python3.7/site-packages/datasets/features/audio.py", line 8, in <module> from ..utils.streaming_download_manager import xopen File "/opt/conda/lib/python3.7/site-packages/datasets/utils/streaming_download_manager.py", line 16, in <module> from ..filesystems import COMPRESSION_FILESYSTEMS File "/opt/conda/lib/python3.7/site-packages/datasets/filesystems/__init__.py", line 13, in <module> from .s3filesystem import S3FileSystem # noqa: F401 File "/opt/conda/lib/python3.7/site-packages/datasets/filesystems/s3filesystem.py", line 1, in <module> import s3fs File "/opt/conda/lib/python3.7/site-packages/s3fs/__init__.py", line 1, in <module> from .core import S3FileSystem, S3File File "/opt/conda/lib/python3.7/site-packages/s3fs/core.py", line 11, in <module> from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper, maybe_sync ImportError: cannot import name 'maybe_sync' from 'fsspec.asyn' (/opt/conda/lib/python3.7/site-packages/fsspec/asyn.py) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.0 - Platform: OVH Cloud Tesla V100 Machine - Python version: 3.7.9 - PyArrow version: 6.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3403/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3403/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/859
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/859/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/859/comments
https://api.github.com/repos/huggingface/datasets/issues/859/events
https://github.com/huggingface/datasets/pull/859
743,917,091
MDExOlB1bGxSZXF1ZXN0NTIxNzI4MDM4
859
Integrate file_lock inside the lib for better logging control
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-11-16T15:13:39Z"
"2020-11-16T17:06:44Z"
"2020-11-16T17:06:42Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/859.diff", "html_url": "https://github.com/huggingface/datasets/pull/859", "merged_at": "2020-11-16T17:06:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/859.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/859" }
Previously the locking system of the lib was based on the file_lock package. However as noticed in #812 there were too many logs printed even when the datasets logging was set to warnings or errors. For example ```python import logging logging.basicConfig(level=logging.INFO) import datasets datasets.set_verbosity_warning() datasets.load_dataset("squad") ``` would still log the file lock events: ``` INFO:filelock:Lock 5737989232 acquired on /Users/quentinlhoest/.cache/huggingface/datasets/44801f118d500eff6114bfc56ab4e6def941f1eb14b70ac1ecc052e15cdac49d.85f43de978b9b25921cb78d7a2f2b350c04acdbaedb9ecb5f7101cd7c0950e68.py.lock INFO:filelock:Lock 5737989232 released on /Users/quentinlhoest/.cache/huggingface/datasets/44801f118d500eff6114bfc56ab4e6def941f1eb14b70ac1ecc052e15cdac49d.85f43de978b9b25921cb78d7a2f2b350c04acdbaedb9ecb5f7101cd7c0950e68.py.lock INFO:filelock:Lock 4393489968 acquired on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock INFO:filelock:Lock 4393489968 released on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock INFO:filelock:Lock 4393490808 acquired on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock Reusing dataset squad (/Users/quentinlhoest/.cache/huggingface/datasets/squad/plain_text/1.0.0/1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41) INFO:filelock:Lock 4393490808 released on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock ``` With the integration of file_lock in the library, the ouput is much cleaner: ``` Reusing dataset squad (/Users/quentinlhoest/.cache/huggingface/datasets/squad/plain_text/1.0.0/1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41) ``` Since the file_lock package is only a 450 lines file I think it's fine to have it inside the lib. Fix #812
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/859/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/859/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2990/comments
https://api.github.com/repos/huggingface/datasets/issues/2990/events
https://github.com/huggingface/datasets/pull/2990
1,012,097,418
PR_kwDODunzps4sgLt5
2,990
Make Dataset.map accept list of np.array
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-09-30T12:08:54Z"
"2021-10-01T13:57:46Z"
"2021-10-01T13:57:46Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2990.diff", "html_url": "https://github.com/huggingface/datasets/pull/2990", "merged_at": "2021-10-01T13:57:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/2990.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2990" }
Fix #2987.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2990/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2990/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4208/comments
https://api.github.com/repos/huggingface/datasets/issues/4208/events
https://github.com/huggingface/datasets/pull/4208
1,213,716,426
PR_kwDODunzps42r7bW
4,208
Add CMU MoCap Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "- Updated the readme.\r\n- Added dummy_data.zip and ran the all the tests.\r\n\r\nThe dataset works for \"asf/amc\" and \"avi\" formats which have a single download link for the complete dataset. But \"c3d\" and \"mpg\" have multiple download links, can we combine and host these links on the Hub since the dataset is free to use ?", "\"c3d\" and \"mpg\" have multiple download links (part archives) and dl_manager.download_and_extract() extracts the files to multiple paths, is there a way to extract these multiple archives into one folder ? Any other way to go about this ?\r\nCan we combine and host these links on the Hub since the dataset is free to use ?", "> \"c3d\" and \"mpg\" have multiple download links (part archives) and dl_manager.download_and_extract() extracts the files to multiple paths, is there a way to extract these multiple archives into one folder ? Any other way to go about this ?\r\n\r\nWe store downloaded data under `~/.cache/huggingface/datasets/downloads` (by default), so these downloads are \"hidden\" and won't clutter one's filesystem in an \"obvious way\".", "> We store downloaded data under ~/.cache/huggingface/datasets/downloads (by default), so these downloads are \"hidden\" and won't clutter one's filesystem in an \"obvious way\".\r\n\r\nYes, the filesystem won't be clustered, but the problem is processing the dataset becomes cumbersome. For eg, for the c3d format has 5 part-downloads, so the folders will be as follows : \r\n```\r\n['~/.cache/huggingface/datasets/downloads/extracted/0e6bf028f490bf18c23ce572d1437c4ef32a74f630e33c26a806250d35cfcdd1', '~/.cache/huggingface/datasets/downloads/extracted/1b44fc5c7a6e031c904545422d449fd964f8ee795b9d1dcb0b6a76d03b50ebe6', '~/.cache/huggingface/datasets/downloads/extracted/137595188e96187c24ce1aa5c78200c7f78816fbd9d6c62354c01b3e6ec550c7', '~/.cache/huggingface/datasets/downloads/extracted/6c0c893e435f36fd79aa0f199f58fe16f01985f039644a7cb094a8c43a15ffd4', '~/.cache/huggingface/datasets/downloads/extracted/45e4703354cbc975e6add66f1b17b716c882b56f44575b033c5926aa5fcfb17f']\r\n```\r\nEach of these folders have a given set of subjects, so we'll be need to write extra code to fetch data from each of these folders, and the mpg format has 12 part-downloads which will lead to 12 folders having certain set of subjects, so it is cumbersome to process them.", "I have added all the changes that were suggested. We just need to handle the multi-part download for c3d and mpg formats. Easiest way would be to have just one zip for these formats.", "But we can handle this with a simple mapping that stores the id ranges (for each config), no? And an actual file path is not important during processing.", "I have added code to handle c3d, mpg formats as well. The data for the mpg format seems incomplete as it contains only 53 rows. I have added a note regarding this in the Data Splits section.", "The real data test works fine and dummy_data test work fine. There were few missing files which was causing issues, I have fixed it now.\r\n", "- Reduced the dummy_data size.\r\n- Added sample dataset preprocessing code, it is not complete though.\r\n- Added all changes suggested.\r\n\r\nLet me know if anything else is required. Thank you. 
:)", "Thanks for your contribution, @dnaveenr.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
"2022-04-24T17:31:08Z"
"2022-10-03T09:38:24Z"
"2022-10-03T09:36:30Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4208.diff", "html_url": "https://github.com/huggingface/datasets/pull/4208", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4208.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4208" }
Resolves #3457 Dataset Request : Add CMU Graphics Lab Motion Capture dataset [#3457](https://github.com/huggingface/datasets/issues/3457) This PR adds the CMU MoCap Dataset. The authors didn't respond even after multiple follow ups, so I ended up crawling the website to get categories, subcategories and description information. Some of the subjects do not have category/subcategory/description as well. I am using a subject to categories, subcategories and description map (metadata file). Currently the loading of the dataset works for "asf/amc" and "avi" formats since they have a single download link. But "c3d" and "mpg" have multiple download links (part archives) and dl_manager.download_and_extract() extracts the files to multiple paths, is there a way to extract these multiple archives into one folder ? Any other way to go about this ? Any suggestions/inputs on this would be helpful. Thank you.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4208/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4208/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1049/comments
https://api.github.com/repos/huggingface/datasets/issues/1049/events
https://github.com/huggingface/datasets/pull/1049
756,157,602
MDExOlB1bGxSZXF1ZXN0NTMxNzQ3NDY0
1,049
Add siswati ner corpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4", "events_url": "https://api.github.com/users/yvonnegitau/events{/privacy}", "followers_url": "https://api.github.com/users/yvonnegitau/followers", "following_url": "https://api.github.com/users/yvonnegitau/following{/other_user}", "gists_url": "https://api.github.com/users/yvonnegitau/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yvonnegitau", "id": 7923902, "login": "yvonnegitau", "node_id": "MDQ6VXNlcjc5MjM5MDI=", "organizations_url": "https://api.github.com/users/yvonnegitau/orgs", "received_events_url": "https://api.github.com/users/yvonnegitau/received_events", "repos_url": "https://api.github.com/users/yvonnegitau/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yvonnegitau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yvonnegitau/subscriptions", "type": "User", "url": "https://api.github.com/users/yvonnegitau" }
[]
closed
false
null
[]
null
[]
"2020-12-03T12:36:00Z"
"2020-12-03T17:27:02Z"
"2020-12-03T17:26:55Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1049.diff", "html_url": "https://github.com/huggingface/datasets/pull/1049", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1049.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1049" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1049/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1049/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3202/comments
https://api.github.com/repos/huggingface/datasets/issues/3202/events
https://github.com/huggingface/datasets/issues/3202
1,043,213,660
I_kwDODunzps4-Li1c
3,202
Add mIoU metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Resolved via https://github.com/huggingface/datasets/pull/3745." ]
"2021-11-03T08:42:32Z"
"2022-06-01T17:39:05Z"
"2022-06-01T17:39:04Z"
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** Recently, some semantic segmentation models were added to HuggingFace Transformers, including [SegFormer](https://huggingface.co/transformers/model_doc/segformer.html) and [BEiT](https://huggingface.co/transformers/model_doc/beit.html). Semantic segmentation (which is the task of labeling every pixel of an image with a corresponding class) is typically evaluated using the Mean Intersection and Union (mIoU). Together with the upcoming Image Feature, adding this metric could be very handy when creating example scripts to fine-tune any Transformer-based model on a semantic segmentation dataset. An implementation can be found [here](https://github.com/open-mmlab/mmsegmentation/blob/504965184c3e6bc9ec43af54237129ef21981a5f/mmseg/core/evaluation/metrics.py#L132) for instance.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3202/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3202/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5595
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5595/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5595/comments
https://api.github.com/repos/huggingface/datasets/issues/5595/events
https://github.com/huggingface/datasets/pull/5595
1,604,070,629
PR_kwDODunzps5K--V9
5,595
Unpins sqlAlchemy
{ "avatar_url": "https://avatars.githubusercontent.com/u/46943923?v=4", "events_url": "https://api.github.com/users/lazarust/events{/privacy}", "followers_url": "https://api.github.com/users/lazarust/followers", "following_url": "https://api.github.com/users/lazarust/following{/other_user}", "gists_url": "https://api.github.com/users/lazarust/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lazarust", "id": 46943923, "login": "lazarust", "node_id": "MDQ6VXNlcjQ2OTQzOTIz", "organizations_url": "https://api.github.com/users/lazarust/orgs", "received_events_url": "https://api.github.com/users/lazarust/received_events", "repos_url": "https://api.github.com/users/lazarust/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lazarust/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lazarust/subscriptions", "type": "User", "url": "https://api.github.com/users/lazarust" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5595). All of your documentation changes will be reflected on that endpoint.", "It looks like this issue hasn't been fixed yet, so let's wait a bit more.", "@lazarust thanks for your work, but unfortunately we cannot merge it.\r\n\r\nSee my comment in: https://github.com/huggingface/datasets/issues/5477#issuecomment-1495512688\r\n\r\nThe fix was released yesterday (2023-04-03) only in `pandas-2.0.0`:\r\n- https://github.com/pandas-dev/pandas/releases/tag/v2.0.0\r\n\r\nbut it will not be back-ported to `pandas-1`:\r\n- https://github.com/pandas-dev/pandas/pull/48576#issuecomment-1466467159\r\n\r\nAlso note that `pandas-2.0.0` dropped support for Python 3.7:\r\n- https://github.com/pandas-dev/pandas/issues/41678\r\n- https://github.com/pandas-dev/pandas/pull/41989\r\n\r\nTherefore, we cannot unpin `sqlalchemy` until we drop support for Python 3.7 (these Python users cannot use `pandas-2`). See our latest CI checks below:\r\n- \"CI / test\" fails because it runs on Python 3.7\r\n- \"CI / test_py310\" succeeds because it runs on Python 3.10 " ]
"2023-03-01T01:33:45Z"
"2023-04-04T08:20:19Z"
"2023-04-04T08:19:14Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5595.diff", "html_url": "https://github.com/huggingface/datasets/pull/5595", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5595.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5595" }
Closes #5477
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5595/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5595/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3882/comments
https://api.github.com/repos/huggingface/datasets/issues/3882/events
https://github.com/huggingface/datasets/pull/3882
1,164,595,388
PR_kwDODunzps40NKz7
3,882
Image process doc
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3882). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-10T00:32:10Z"
"2022-03-15T15:24:16Z"
"2022-03-15T15:24:09Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3882.diff", "html_url": "https://github.com/huggingface/datasets/pull/3882", "merged_at": "2022-03-15T15:24:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/3882.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3882" }
This PR is a first draft of how to process image data. It adds: - Load an image dataset with `image` and `path` (adds tip about `decode=False` param to access the path and bytes, thanks to @mariosasko). - Load an image using the `ImageFolder` builder. I know there is an [example](https://huggingface.co/docs/datasets/master/en/loading#image-folders) of this already, but I also wanted to add it here so users don't miss it. This doc seems important for centralizing all of the image-related things so far. Datasets has grown so quickly 🚀 now that I think maybe splitting up the How-to guides by modality may be better since working with vision/audio data is slightly different from what users have seen up until now. This way we can continue to scale the docs to better accommodate vision/audio things. - Add a data augmentation with `set_transform`. There is only 1 example here so far, but we can certainly add more. Todo: - [x] Couldn't figure out why my augmentation function works with `set_transform` but not `map` 🥲. Working with @mariosasko on this!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 2, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/3882/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3882/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4877/comments
https://api.github.com/repos/huggingface/datasets/issues/4877/events
https://github.com/huggingface/datasets/pull/4877
1,348,246,755
PR_kwDODunzps49qF-w
4,877
Fix documentation card of covid_qa_castorini dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4877). All of your documentation changes will be reflected on that endpoint." ]
"2022-08-23T16:52:33Z"
"2022-08-23T18:05:01Z"
"2022-08-23T18:05:00Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4877.diff", "html_url": "https://github.com/huggingface/datasets/pull/4877", "merged_at": "2022-08-23T18:05:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/4877.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4877" }
Fix documentation card of covid_qa_castorini dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4877/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4877/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/530
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/530/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/530/comments
https://api.github.com/repos/huggingface/datasets/issues/530/events
https://github.com/huggingface/datasets/pull/530
684,825,612
MDExOlB1bGxSZXF1ZXN0NDcyNjQ5NTk2
530
use ragged tensor by default
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Yes I agree. Maybe something that lets specify different format depending on the column ? Especially to better control dtype and shape (and ragged for tf)\r\n\r\nOh and I forgot: this one should also fix the second issue found in #477 for the next release", "I am running into the same issue with the error message on my local windows machine -\r\nAttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'to_tensor'. Tensorflow version is 2.6. Anything that I can do to fix it?\r\ntrain_features = {x: tf_train_dataset[x].to_tensor() for x in tokenizer.model_input_names}\r\ntrain_tf_dataset = tf.data.Dataset.from_tensor_slices((train_features, tf_train_dataset[\"label\"]))\r\ntrain_tf_dataset = train_tf_dataset.shuffle(len(tf_train_dataset)).batch(8)\r\n\r\neval_features = {x: tf_eval_dataset[x].to_tensor() for x in tokenizer.model_input_names}\r\neval_tf_dataset = tf.data.Dataset.from_tensor_slices((eval_features, tf_eval_dataset[\"label\"]))\r\neval_tf_dataset = eval_tf_dataset.batch(8)\r\n\r\nttributeError Traceback (most recent call last)\r\n<ipython-input-59-f50e45c2c0dc> in <module>\r\n----> 1 train_features = {x: tf_train_dataset[x].convert_to_tensor() for x in tokenizer.model_input_names}\r\n 2 train_tf_dataset = tf.data.Dataset.from_tensor_slices((train_features, tf_train_dataset[\"label\"]))\r\n 3 train_tf_dataset = train_tf_dataset.shuffle(len(tf_train_dataset)).batch(8)\r\n 4 \r\n 5 eval_features = {x: tf_eval_dataset[x].to_tensor() for x in tokenizer.model_input_names}\r\n\r\n<ipython-input-59-f50e45c2c0dc> in <dictcomp>(.0)\r\n----> 1 train_features = {x: tf_train_dataset[x].convert_to_tensor() for x in tokenizer.model_input_names}\r\n 2 train_tf_dataset = tf.data.Dataset.from_tensor_slices((train_features, tf_train_dataset[\"label\"]))\r\n 3 train_tf_dataset = train_tf_dataset.shuffle(len(tf_train_dataset)).batch(8)\r\n 4 \r\n 5 eval_features = {x: tf_eval_dataset[x].to_tensor() for x in tokenizer.model_input_names}\r\n\r\n~\\AppData\\Roaming\\Python\\Python38\\site-packages\\tensorflow\\python\\framework\\ops.py in __getattr__(self, name)\r\n 399 from tensorflow.python.ops.numpy_ops import np_config\r\n 400 np_config.enable_numpy_behavior()\"\"\".format(type(self).__name__, name))\r\n--> 401 self.__getattribute__(name)\r\n 402 \r\n 403 @staticmethod\r\n\r\nAttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'convert_to_tensor'\r\n\r\n", "Hi ! Before calling `to_tensor`, make sure that your object is a RaggedTensor, because it may already be a regular Tensor if the shapes of your examples are all the same", "Okay. i am not familiar with how to check the difference between the two. I will research on this." ]
"2020-08-24T17:06:15Z"
"2021-10-22T19:38:40Z"
"2020-08-24T19:22:25Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/530.diff", "html_url": "https://github.com/huggingface/datasets/pull/530", "merged_at": "2020-08-24T19:22:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/530.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/530" }
I think it's better if it's clear whether the returned tensor is ragged or not when the type is set to tensorflow. Previously it was a tensor (not ragged) if numpy could stack the output (which can change depending on the batch of example you take), which make things difficult to handle, as it may sometimes return a ragged tensor and sometimes not. Therefore I reverted this behavior to always return a ragged tensor as we used to do.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/530/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/530/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3067/comments
https://api.github.com/repos/huggingface/datasets/issues/3067/events
https://github.com/huggingface/datasets/pull/3067
1,024,023,185
PR_kwDODunzps4tFSCy
3,067
add story_cloze
{ "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zaidalyafeai", "id": 15667714, "login": "zaidalyafeai", "node_id": "MDQ6VXNlcjE1NjY3NzE0", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "type": "User", "url": "https://api.github.com/users/zaidalyafeai" }
[]
closed
false
null
[]
null
[ "Thanks for pushing this dataset :)\r\n\r\nAccording to the CI, the file `cloze_test_val__spring2016 - cloze_test_ALL_val.csv` is missing in the dummy data zip file (the zip files seem empty). Feel free to add this file with 4-5 lines and it should be good\r\n\r\nAnd you can fix the YAML tags with\r\n```yaml\r\npretty_name: Story Cloze Test\r\n```\r\nand filling the other tags task_categories and task_ids\r\n\r\nIf the dataset doesn exist on paperswithcode, you can just leave\r\n```yaml\r\npaperswithcode_id: null\r\n```", "@lhoestq can't fix the last test fails.", "> Thanks @zaidalyafeai, the failing test is due to an issue in the master branch, that has already been fixed.\r\n> \r\n> You can include the fix:\r\n> \r\n> ```\r\n> git checkout add_story_cloze\r\n> git fetch upstream master\r\n> git merge upstream/master\r\n> ```\r\n\r\nThanks @albertvillanova, passed all the tests now. ", "Thanks Albert, I fixed the suggested comments. This dataset has no train splits, it is only used for evaluation." ]
"2021-10-12T16:36:53Z"
"2021-10-13T13:48:13Z"
"2021-10-13T13:48:13Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3067.diff", "html_url": "https://github.com/huggingface/datasets/pull/3067", "merged_at": "2021-10-13T13:48:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/3067.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3067" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3067/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3067/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6070/comments
https://api.github.com/repos/huggingface/datasets/issues/6070/events
https://github.com/huggingface/datasets/pull/6070
1,820,836,330
PR_kwDODunzps5WXDLc
6,070
Fix Quickstart notebook link
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008473 / 0.011353 (-0.002880) | 0.004734 / 0.011008 (-0.006274) | 0.103895 / 0.038508 (0.065387) | 0.071838 / 0.023109 (0.048729) | 0.379949 / 0.275898 (0.104051) | 0.397375 / 0.323480 (0.073895) | 0.006695 / 0.007986 (-0.001290) | 0.004536 / 0.004328 (0.000207) | 0.076151 / 0.004250 (0.071901) | 0.058690 / 0.037052 (0.021638) | 0.379937 / 0.258489 (0.121448) | 0.411833 / 0.293841 (0.117992) | 0.046805 / 0.128546 (-0.081741) | 0.013689 / 0.075646 (-0.061958) | 0.327896 / 0.419271 (-0.091375) | 0.063873 / 0.043533 (0.020340) | 0.378451 / 0.255139 (0.123312) | 0.398725 / 0.283200 (0.115525) | 0.034961 / 0.141683 (-0.106722) | 1.604999 / 1.452155 (0.152845) | 1.748370 / 1.492716 (0.255654) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224634 / 0.018006 (0.206628) | 0.548468 / 0.000490 (0.547979) | 0.005049 / 0.000200 (0.004849) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028144 / 0.037411 (-0.009267) | 0.092184 / 0.014526 (0.077659) | 0.102987 / 0.176557 (-0.073570) | 0.176987 / 0.737135 (-0.560149) | 0.103093 / 0.296338 (-0.193246) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.578410 / 0.215209 (0.363201) | 5.664781 / 2.077655 (3.587126) | 2.487763 
/ 1.504120 (0.983643) | 2.254213 / 1.541195 (0.713018) | 2.239693 / 1.468490 (0.771202) | 0.810380 / 4.584777 (-3.774397) | 5.036540 / 3.745712 (1.290828) | 7.064695 / 5.269862 (1.794834) | 4.215101 / 4.565676 (-0.350575) | 0.089792 / 0.424275 (-0.334483) | 0.008487 / 0.007607 (0.000879) | 0.692292 / 0.226044 (0.466248) | 6.780226 / 2.268929 (4.511297) | 3.245510 / 55.444624 (-52.199114) | 2.575984 / 6.876477 (-4.300493) | 2.747546 / 2.142072 (0.605473) | 0.956604 / 4.805227 (-3.848623) | 0.198937 / 6.500664 (-6.301727) | 0.070849 / 0.075469 (-0.004620) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.536469 / 1.841788 (-0.305319) | 21.750583 / 8.074308 (13.676275) | 20.559532 / 10.191392 (10.368140) | 0.241244 / 0.680424 (-0.439180) | 0.030078 / 0.534201 (-0.504123) | 0.462204 / 0.579283 (-0.117079) | 0.600103 / 0.434364 (0.165739) | 0.535074 / 0.540337 (-0.005264) | 0.764427 / 1.386936 (-0.622509) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009712 / 0.011353 (-0.001641) | 0.005036 / 0.011008 (-0.005972) | 0.073683 / 0.038508 (0.035175) | 0.078684 / 0.023109 (0.055574) | 0.445096 / 0.275898 (0.169198) | 0.496233 / 0.323480 (0.172754) | 0.006231 / 0.007986 (-0.001755) | 0.004720 / 0.004328 (0.000392) | 0.076444 / 0.004250 (0.072194) | 0.060932 / 0.037052 (0.023880) | 0.505727 / 0.258489 (0.247238) | 0.498702 / 0.293841 (0.204861) | 0.047115 / 0.128546 (-0.081431) | 0.014028 / 0.075646 (-0.061618) | 0.099292 / 0.419271 (-0.319980) | 0.061571 / 0.043533 (0.018038) | 0.468435 / 0.255139 (0.213296) | 0.481747 / 0.283200 (0.198547) | 0.033962 / 0.141683 (-0.107721) | 1.665397 / 1.452155 (0.213242) | 1.830488 / 1.492716 (0.337772) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268217 / 0.018006 (0.250211) | 0.555123 / 0.000490 (0.554633) | 0.000451 / 0.000200 (0.000251) | 0.000156 / 0.000054 (0.000101) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034262 / 0.037411 (-0.003150) | 0.107807 / 0.014526 (0.093281) | 0.115631 / 0.176557 (-0.060926) | 0.175914 / 0.737135 (-0.561221) | 0.118775 / 0.296338 (-0.177564) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.583260 / 0.215209 (0.368051) | 5.934976 / 2.077655 (3.857321) | 2.752304 / 1.504120 (1.248184) | 2.382746 / 1.541195 (0.841551) | 2.389402 / 1.468490 (0.920912) | 0.794213 / 4.584777 (-3.790564) | 5.215269 / 3.745712 (1.469557) | 7.083595 / 5.269862 (1.813733) | 3.776136 / 4.565676 (-0.789540) | 0.091141 / 0.424275 (-0.333135) | 0.008803 / 0.007607 (0.001196) | 0.726510 / 0.226044 (0.500465) | 6.926860 / 2.268929 (4.657931) | 3.475612 / 55.444624 (-51.969012) | 2.730237 / 6.876477 (-4.146240) | 2.879145 / 2.142072 (0.737073) | 0.959956 / 4.805227 (-3.845271) | 0.189812 / 6.500664 (-6.310852) | 0.071624 / 0.075469 (-0.003845) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.748184 / 1.841788 (-0.093603) | 23.764520 / 8.074308 (15.690212) | 19.502461 / 10.191392 (9.311069) | 0.233987 / 0.680424 (-0.446437) | 0.028116 / 0.534201 (-0.506085) | 0.478838 / 0.579283 (-0.100445) | 0.560952 / 0.434364 (0.126588) | 0.529902 / 0.540337 (-0.010435) | 0.735095 / 1.386936 (-0.651841) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dda3e389212f44117a40b44bb0cdf358cfd9f71e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006735 / 0.011353 (-0.004618) | 0.004131 / 0.011008 (-0.006878) | 0.085619 / 0.038508 (0.047111) | 0.076973 / 0.023109 (0.053864) | 0.315175 / 0.275898 (0.039277) | 0.354703 / 0.323480 (0.031223) | 0.005409 / 0.007986 (-0.002577) | 0.003438 / 0.004328 (-0.000891) | 0.064773 / 0.004250 (0.060523) | 0.056117 / 0.037052 (0.019064) | 0.313825 / 0.258489 (0.055336) | 0.354654 / 0.293841 (0.060813) | 0.031384 / 0.128546 (-0.097163) | 0.008537 / 0.075646 (-0.067109) | 0.288528 / 0.419271 (-0.130744) | 0.053036 / 0.043533 (0.009504) | 0.312213 / 0.255139 (0.057074) | 0.335952 / 0.283200 (0.052752) | 0.023165 / 0.141683 (-0.118518) | 1.497559 / 1.452155 (0.045404) | 1.561949 / 1.492716 (0.069233) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212558 / 0.018006 (0.194552) | 0.456555 / 0.000490 (0.456065) | 0.000334 / 0.000200 (0.000134) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028571 / 0.037411 (-0.008840) | 0.085154 / 0.014526 (0.070628) | 0.095961 / 0.176557 (-0.080596) | 0.153041 / 0.737135 (-0.584094) | 0.099234 / 0.296338 (-0.197105) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.381796 / 0.215209 (0.166587) | 3.806948 / 2.077655 (1.729294) | 1.829597 / 1.504120 (0.325477) | 1.659065 / 1.541195 (0.117870) | 1.738524 / 1.468490 (0.270034) | 0.483379 / 4.584777 (-4.101398) | 3.540648 / 3.745712 (-0.205064) | 3.269188 / 5.269862 (-2.000673) | 2.042113 / 4.565676 (-2.523564) | 0.056905 / 0.424275 (-0.367370) | 0.007235 / 0.007607 (-0.000373) | 0.460581 / 0.226044 (0.234537) | 4.597451 / 2.268929 (2.328522) | 2.334284 / 55.444624 (-53.110340) | 1.960026 / 6.876477 (-4.916450) | 2.172118 / 2.142072 (0.030045) | 0.576758 / 4.805227 (-4.228470) | 0.131196 / 6.500664 (-6.369468) | 0.060053 / 0.075469 (-0.015417) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289466 / 1.841788 (-0.552322) | 19.713059 / 8.074308 (11.638750) | 14.292390 / 10.191392 (4.100998) | 0.146199 / 0.680424 (-0.534225) | 0.018123 / 0.534201 (-0.516078) | 0.392492 / 0.579283 (-0.186791) | 0.416544 / 0.434364 (-0.017820) | 0.457166 / 0.540337 
(-0.083171) | 0.645490 / 1.386936 (-0.741446) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006508 / 0.011353 (-0.004845) | 0.004010 / 0.011008 (-0.006998) | 0.065201 / 0.038508 (0.026693) | 0.076322 / 0.023109 (0.053213) | 0.364198 / 0.275898 (0.088300) | 0.398251 / 0.323480 (0.074771) | 0.005328 / 0.007986 (-0.002658) | 0.003298 / 0.004328 (-0.001031) | 0.064378 / 0.004250 (0.060128) | 0.056053 / 0.037052 (0.019000) | 0.365431 / 0.258489 (0.106942) | 0.402777 / 0.293841 (0.108936) | 0.031014 / 0.128546 (-0.097532) | 0.008507 / 0.075646 (-0.067140) | 0.071471 / 0.419271 (-0.347801) | 0.048300 / 0.043533 (0.004768) | 0.359700 / 0.255139 (0.104561) | 0.382244 / 0.283200 (0.099044) | 0.023783 / 0.141683 (-0.117900) | 1.517518 / 1.452155 (0.065363) | 1.569732 / 1.492716 (0.077015) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257447 / 0.018006 (0.239440) | 0.452598 / 0.000490 (0.452109) | 0.015187 / 0.000200 (0.014987) | 0.000164 / 0.000054 (0.000109) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030958 / 0.037411 (-0.006454) | 0.090066 / 0.014526 (0.075540) | 0.101120 / 0.176557 (-0.075437) | 0.154295 / 0.737135 (-0.582840) | 0.103582 / 0.296338 (-0.192756) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415945 / 0.215209 (0.200736) | 4.146464 / 2.077655 (2.068809) | 2.121414 / 1.504120 (0.617294) | 1.956885 / 1.541195 (0.415690) | 2.047955 
/ 1.468490 (0.579465) | 0.486334 / 4.584777 (-4.098443) | 3.506263 / 3.745712 (-0.239449) | 4.942274 / 5.269862 (-0.327587) | 2.907836 / 4.565676 (-1.657841) | 0.057344 / 0.424275 (-0.366931) | 0.007813 / 0.007607 (0.000206) | 0.497888 / 0.226044 (0.271844) | 4.978017 / 2.268929 (2.709089) | 2.600447 / 55.444624 (-52.844177) | 2.335050 / 6.876477 (-4.541427) | 2.480373 / 2.142072 (0.338301) | 0.597954 / 4.805227 (-4.207274) | 0.134794 / 6.500664 (-6.365870) | 0.062605 / 0.075469 (-0.012864) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.344390 / 1.841788 (-0.497398) | 20.020067 / 8.074308 (11.945759) | 14.344626 / 10.191392 (4.153234) | 0.172101 / 0.680424 (-0.508322) | 0.018549 / 0.534201 (-0.515652) | 0.393589 / 0.579283 (-0.185694) | 0.438401 / 0.434364 (0.004037) | 0.463800 / 0.540337 (-0.076537) | 0.618269 / 1.386936 (-0.768667) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b0177910b32712f28d147879395e511207e39958 \"CML watermark\")\n" ]
"2023-07-25T17:48:37Z"
"2023-07-25T18:19:01Z"
"2023-07-25T18:10:16Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6070.diff", "html_url": "https://github.com/huggingface/datasets/pull/6070", "merged_at": "2023-07-25T18:10:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/6070.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6070" }
Reported in https://github.com/huggingface/datasets/pull/5902#issuecomment-1649885621 (cc @alvarobartt)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6070/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6070/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2122/comments
https://api.github.com/repos/huggingface/datasets/issues/2122/events
https://github.com/huggingface/datasets/pull/2122
842,194,588
MDExOlB1bGxSZXF1ZXN0NjAxODE3MjI0
2,122
Fast table queries with interpolation search
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-03-26T18:09:20Z"
"2021-08-04T18:11:59Z"
"2021-04-06T14:33:01Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2122.diff", "html_url": "https://github.com/huggingface/datasets/pull/2122", "merged_at": "2021-04-06T14:33:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/2122.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2122" }
## Intro This should fix issue #1803 Currently querying examples in a dataset is O(n) because of the underlying pyarrow ChunkedArrays implementation. To fix this I implemented interpolation search that is pretty effective since datasets usually verifies the condition of evenly distributed chunks (the default chunk size is fixed). ## Benchmark Here is a [benchmark](https://pastebin.com/utEXUqsR) I did on bookcorpus (74M rows): for the current implementation ```python >>> python speed.py Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766 ========================= Querying unshuffled bookcorpus ========================= Avg access time key=1 : 0.018ms Avg access time key=74004227 : 0.215ms Avg access time key=range(74003204, 74004228) : 1.416ms Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 92.532ms ========================== Querying shuffled bookcorpus ========================== Avg access time key=1 : 0.187ms Avg access time key=74004227 : 6.642ms Avg access time key=range(74003204, 74004228) : 90.941ms Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 3448.456ms ``` for the new one using interpolation search: ```python >>> python speed.py Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766 ========================= Querying unshuffled bookcorpus ========================= Avg access time key=1 : 0.076ms Avg access time key=74004227 : 0.056ms Avg access time key=range(74003204, 74004228) : 1.807ms Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 24.028ms ========================== Querying shuffled bookcorpus ========================== Avg access time key=1 : 0.061ms Avg access time key=74004227 : 0.058ms Avg access time key=range(74003204, 74004228) : 22.166ms Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 42.757ms ``` The RandIter class is just an iterable of 1024 random indices from 0 to 74004228. Here is also a plot showing the speed improvement depending on the dataset size: ![image](https://user-images.githubusercontent.com/42851186/112673587-32335c80-8e65-11eb-9a0c-58ad774abaec.png) ## Implementation details: - `datasets.table.Table` objects implement interpolation search for the `slice` method - The interpolation search requires to store the offsets of all the chunks of a table. The offsets are stored when the `Table` is initialized. - `datasets.table.Table.slice` returns a `datasets.table.Table` using interpolation search - `datasets.table.Table.fast_slice` returns a `pyarrow.Table` object using interpolation search. This is useful to get a part of a dataset if we don't need the indexing structure for future computations. For example it's used when querying an example as a dictionary. - Now a `Dataset` object is always backed by a `datasets.table.Table` object. If one passes a `pyarrow.Table` to initialize a `Dataset`, then it's converted to a `datasets.table.Table` ## Checklist: - [x] implement interpolation search - [x] use `datasets.table.Table` in `Dataset` objects - [x] update current tests - [x] add tests for interpolation search - [x] comments and docstring - [x] add the benchmark to the CI Fix #1803.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 5, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/2122/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2122/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/747
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/747/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/747/comments
https://api.github.com/repos/huggingface/datasets/issues/747/events
https://github.com/huggingface/datasets/pull/747
725,884,704
MDExOlB1bGxSZXF1ZXN0NTA3MDQ3MDE4
747
Add Quail question answering dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/3595526?v=4", "events_url": "https://api.github.com/users/sai-prasanna/events{/privacy}", "followers_url": "https://api.github.com/users/sai-prasanna/followers", "following_url": "https://api.github.com/users/sai-prasanna/following{/other_user}", "gists_url": "https://api.github.com/users/sai-prasanna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sai-prasanna", "id": 3595526, "login": "sai-prasanna", "node_id": "MDQ6VXNlcjM1OTU1MjY=", "organizations_url": "https://api.github.com/users/sai-prasanna/orgs", "received_events_url": "https://api.github.com/users/sai-prasanna/received_events", "repos_url": "https://api.github.com/users/sai-prasanna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sai-prasanna/subscriptions", "type": "User", "url": "https://api.github.com/users/sai-prasanna" }
[]
closed
false
null
[]
null
[]
"2020-10-20T19:33:14Z"
"2020-10-21T08:35:15Z"
"2020-10-21T08:35:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/747.diff", "html_url": "https://github.com/huggingface/datasets/pull/747", "merged_at": "2020-10-21T08:35:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/747.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/747" }
QuAIL is a multi-domain RC dataset featuring news, blogs, fiction and user stories. Each domain is represented by 200 texts, which gives us a 4-way data split. The texts are 300-350 word excerpts from CC-licensed texts that were hand-picked so as to make sense to human readers without larger context. Domain diversity mitigates the issue of possible overlap between training and test data of large pre-trained models, which the current SOTA systems are based on. For instance, BERT is trained on Wikipedia + BookCorpus, and was tested on Wikipedia-based SQuAD (Devlin, Chang, Lee, & Toutanova, 2019). https://text-machine-lab.github.io/blog/2020/quail/ @annargrs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/747/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/747/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2802
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2802/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2802/comments
https://api.github.com/repos/huggingface/datasets/issues/2802/events
https://github.com/huggingface/datasets/pull/2802
970,848,302
MDExOlB1bGxSZXF1ZXN0NzEyNzM0MTc3
2,802
add openwebtext2
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
closed
false
null
[]
null
[ "It seems we need to `pip install jsonlines` to pass the checks ?", "Hi ! Do you really need `jsonlines` ? I think it simply uses `json.loads` under the hood.\r\n\r\nCurrently the test are failing because `jsonlines` is not part of the extra requirements `TESTS_REQUIRE` in setup.py\r\n\r\nSo either you can replace `jsonlines` with a simple for loop on the lines of the files and use `json.loads`, or you can add `TESTS_REQUIRE` to the test requirements (but in this case users will have to install it as well).", "Thanks for your suggestion. I now know `io` and json lines format better and has changed `jsonlines` to just `readlines`." ]
"2021-08-14T07:09:03Z"
"2021-08-23T14:06:14Z"
"2021-08-23T14:06:14Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2802.diff", "html_url": "https://github.com/huggingface/datasets/pull/2802", "merged_at": "2021-08-23T14:06:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/2802.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2802" }
openwebtext2 is part of EleutherAI/The Pile, but AFAIK, The Pile dataset blend all sub datasets together thus we are not able to use just one of its sub dataset from The Pile data. So I create an independent dataset using The Pile preliminary components. When I was creating dataset card. I found there is room for creating / editing dataset card. I've made it an issue. #2797 Also I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it recently)? #1675
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2802/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2802/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4953/comments
https://api.github.com/repos/huggingface/datasets/issues/4953/events
https://github.com/huggingface/datasets/issues/4953
1,366,356,514
I_kwDODunzps5RcPIi
4,953
CI test of TensorFlow is failing
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
"2022-09-08T13:39:29Z"
"2022-09-08T15:14:45Z"
"2022-09-08T15:14:45Z"
MEMBER
null
null
null
## Describe the bug The following CI test fails: https://github.com/huggingface/datasets/runs/8246722693?check_suite_focus=true ``` FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError: ``` Details: ``` _________________________ TempSeedTest.test_tensorflow _________________________ [gw0] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow> @require_tf def test_tensorflow(self): import tensorflow as tf from tensorflow.keras import layers def gen_random_output(): model = layers.Dense(2) x = tf.random.uniform((1, 3)) return model(x).numpy() with temp_seed(42, set_tensorflow=True): out1 = gen_random_output() with temp_seed(42, set_tensorflow=True): out2 = gen_random_output() out3 = gen_random_output() > np.testing.assert_equal(out1, out2) E AssertionError: E Arrays are not equal E E Mismatched elements: 2 / 2 (100%) E Max absolute difference: 0.84619296 E Max relative difference: 16.083529 E x: array([[-0.793581, 0.333286]], dtype=float32) E y: array([[0.052612, 0.539708]], dtype=float32) tests/test_py_utils.py:149: AssertionError ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4953/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4953/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3238
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3238/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3238/comments
https://api.github.com/repos/huggingface/datasets/issues/3238/events
https://github.com/huggingface/datasets/issues/3238
1,048,226,086
I_kwDODunzps4-eqkm
3,238
Reuters21578 Couldn't reach
{ "avatar_url": "https://avatars.githubusercontent.com/u/54096137?v=4", "events_url": "https://api.github.com/users/TingNLP/events{/privacy}", "followers_url": "https://api.github.com/users/TingNLP/followers", "following_url": "https://api.github.com/users/TingNLP/following{/other_user}", "gists_url": "https://api.github.com/users/TingNLP/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TingNLP", "id": 54096137, "login": "TingNLP", "node_id": "MDQ6VXNlcjU0MDk2MTM3", "organizations_url": "https://api.github.com/users/TingNLP/orgs", "received_events_url": "https://api.github.com/users/TingNLP/received_events", "repos_url": "https://api.github.com/users/TingNLP/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TingNLP/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TingNLP/subscriptions", "type": "User", "url": "https://api.github.com/users/TingNLP" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "Hi ! The URL works fine on my side today, could you try again ?", "thank you @lhoestq \r\nit works" ]
"2021-11-09T06:08:56Z"
"2021-11-11T00:02:57Z"
"2021-11-11T00:02:57Z"
NONE
null
null
null
``## Adding a Dataset - **Name:** *Reuters21578* - **Description:** *ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz* - **Data:** *https://huggingface.co/datasets/reuters21578* `from datasets import load_dataset` `dataset = load_dataset("reuters21578", 'ModLewis')` ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz And I try to request the link as follow: `import requests` `requests.head('https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz')` SSLError: HTTPSConnectionPool(host='kdd.ics.uci.edu', port=443): Max retries exceeded with url: /databases/reuters21578/reuters21578.tar.gz (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),)) This problem likes #575 What should I do ?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3238/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3238/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1961/comments
https://api.github.com/repos/huggingface/datasets/issues/1961/events
https://github.com/huggingface/datasets/pull/1961
818,077,947
MDExOlB1bGxSZXF1ZXN0NTgxNDM3NDI0
1,961
Add sst dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "events_url": "https://api.github.com/users/patpizio/events{/privacy}", "followers_url": "https://api.github.com/users/patpizio/followers", "following_url": "https://api.github.com/users/patpizio/following{/other_user}", "gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patpizio", "id": 15801338, "login": "patpizio", "node_id": "MDQ6VXNlcjE1ODAxMzM4", "organizations_url": "https://api.github.com/users/patpizio/orgs", "received_events_url": "https://api.github.com/users/patpizio/received_events", "repos_url": "https://api.github.com/users/patpizio/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patpizio/subscriptions", "type": "User", "url": "https://api.github.com/users/patpizio" }
[]
closed
false
null
[]
null
[]
"2021-02-28T02:08:29Z"
"2021-03-04T10:38:53Z"
"2021-03-04T10:38:53Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1961.diff", "html_url": "https://github.com/huggingface/datasets/pull/1961", "merged_at": "2021-03-04T10:38:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/1961.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1961" }
Related to #1934&mdash;Add the Stanford Sentiment Treebank dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1961/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1961/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3208/comments
https://api.github.com/repos/huggingface/datasets/issues/3208/events
https://github.com/huggingface/datasets/pull/3208
1,044,504,093
PR_kwDODunzps4uFTIs
3,208
Pin keras version until TF fixes its release
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-11-04T09:13:32Z"
"2021-11-04T09:30:55Z"
"2021-11-04T09:30:54Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3208.diff", "html_url": "https://github.com/huggingface/datasets/pull/3208", "merged_at": "2021-11-04T09:30:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/3208.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3208" }
Fix #3207.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3208/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3208/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/617/comments
https://api.github.com/repos/huggingface/datasets/issues/617/events
https://github.com/huggingface/datasets/issues/617
699,472,596
MDU6SXNzdWU2OTk0NzI1OTY=
617
Compare different Rouge implementations
{ "avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4", "events_url": "https://api.github.com/users/ibeltagy/events{/privacy}", "followers_url": "https://api.github.com/users/ibeltagy/followers", "following_url": "https://api.github.com/users/ibeltagy/following{/other_user}", "gists_url": "https://api.github.com/users/ibeltagy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ibeltagy", "id": 2287797, "login": "ibeltagy", "node_id": "MDQ6VXNlcjIyODc3OTc=", "organizations_url": "https://api.github.com/users/ibeltagy/orgs", "received_events_url": "https://api.github.com/users/ibeltagy/received_events", "repos_url": "https://api.github.com/users/ibeltagy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ibeltagy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ibeltagy/subscriptions", "type": "User", "url": "https://api.github.com/users/ibeltagy" }
[]
closed
false
null
[]
null
[ "Updates - the differences between the following three\r\n(1) https://github.com/bheinzerling/pyrouge (previously popular. The one I trust the most)\r\n(2) https://github.com/google-research/google-research/tree/master/rouge\r\n(3) https://github.com/pltrdy/files2rouge (used in fairseq)\r\ncan be explained by two things, stemming and handling multiple sentences.\r\n\r\nStemming: \r\n(1), (2): default is no stemming. (3): default is with stemming ==> No stemming is the correct default as you did [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py#L84)\r\n\r\nMultiple sentences:\r\n(1) `rougeL` splits text using `\\n`\r\n(2) `rougeL` ignores `\\n`. \r\n(2) `rougeLsum` splits text using `\\n`\r\n(3) `rougeL` splits text using `.`\r\n\r\nFor (2), `rougeL` and `rougeLsum` are identical if the sequence doesn't contain `\\n`. With `\\n`, it is `rougeLsum` that matches (1) not `rougeL`. \r\n\r\nOverall, and as far as I understand, for your implementation here https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py#L65 to match the default, you only need to change `rougeL` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py#L86) to `rougeLsum` to correctly compute metrics for text with newlines.\r\n\r\nTagging @sshleifer who might be interested.", "Thanks for the clarification !\r\nWe're adding Rouge Lsum in #701 ", "This is a real issue, sorry for missing the mention @ibeltagy\r\n\r\nWe implemented a more involved [solution](https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/examples/seq2seq/utils.py#L481) that enforces that sentences are split with `\\n` so that rougeLsum scores match papers even if models don't generate newlines. \r\n\r\nUnfortunately, the best/laziest way I found to do this introduced an `nltk` dependency (For sentence splitting, all sentences don't end in `.`!!!), but this might be avoidable with some effort.\r\n\r\n#### Sidebar: Wouldn't Deterministic Be Better?\r\n\r\n`rouge_scorer.scoring.BootstrapAggregator` is well named but is not deterministic which I would like to change for my mental health, unless there is some really good reason to sample 500 observations before computing f-scores.\r\n\r\nI have a fix on a branch, but I wanted to get some context before introducting a 4th way to compute rouge. Scores are generally within .03 Rouge2 of boostrap after multiplying by 100, e.g 22.05 vs 22.08 Rouge2.\r\n\r\n", "> This is a real issue, sorry for missing the mention @ibeltagy\r\n> \r\n> We implemented a more involved [solution](https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/examples/seq2seq/utils.py#L481) that enforces that sentences are split with `\\n` so that rougeLsum scores match papers even if models don't generate newlines.\r\n> \r\n> Unfortunately, the best/laziest way I found to do this introduced an `nltk` dependency (For sentence splitting, all sentences don't end in `.`!!!), but this might be avoidable with some effort.\r\n\r\nThanks for the details, I didn't know about that. 
Maybe we should consider adding this processing step or at least mention it somewhere in the library or the documentation\r\n\r\n> #### Sidebar: Wouldn't Deterministic Be Better?\r\n> `rouge_scorer.scoring.BootstrapAggregator` is well named but is not deterministic which I would like to change for my mental health, unless there is some really good reason to sample 500 observations before computing f-scores.\r\n> \r\n> I have a fix on a branch, but I wanted to get some context before introducting a 4th way to compute rouge. Scores are generally within .03 Rouge2 of boostrap after multiplying by 100, e.g 22.05 vs 22.08 Rouge2.\r\n\r\nI think the default `n_samples` of the aggregator is 1000. We could increase it or at least allow users to change it if they want more precise results.", "Hi, thanks for the solution. \r\n\r\nI am not sure if this is a bug, but on line [510](https://github.com/huggingface/transformers/blob/99cb924bfb6c4092bed9232bea3c242e27c6911f/examples/seq2seq/utils.py#L510), are pred, tgt supposed to be swapped?", "This looks like a bug in an old version of the examples in `transformers`", "Hi, so I took this example from the HF implementation. What I can see is that the precision of `Hello there` being summarized to `general kenobi` is 1. I don't understand how this calculation is correct.\r\nIs the comparison just counting the words?\r\nand if Yes, then how does this translates to summarization evaluation?\r\n```\r\n >>> rouge = datasets.load_metric('rouge')\r\n >>> predictions = [\"hello there\", \"general kenobi\"]\r\n >>> references = [\"hello there\", \"general kenobi\"]\r\n >>> results = rouge.compute(predictions=predictions, references=references)\r\n >>> print(list(results.keys()))\r\n ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']\r\n >>> print(results[\"rouge1\"])\r\n AggregateScore(low=Score(precision=1.0, recall=1.0, fmeasure=1.0), mid=Score(precision=1.0, recall=1.0, fmeasure=1.0), high=Score(precision=1.0, recall=1.0, fmeasure=1.0))\r\n >>> print(results[\"rouge1\"].mid.fmeasure)\r\n 1.0\r\n\"\"\", stored examples: 0)\r\n```\r\n\r\n\r\n" ]
"2020-09-11T15:49:32Z"
"2023-03-22T12:08:44Z"
"2020-10-02T09:52:18Z"
NONE
null
null
null
I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example. Can you make sure the google-research implementation you are using matches the official perl implementation? There are a couple of python wrappers around the perl implementation, [this](https://pypi.org/project/pyrouge/) has been commonly used, and [this](https://github.com/pltrdy/files2rouge) is used in fairseq). There's also a python reimplementation [here](https://github.com/pltrdy/rouge) but its RougeL numbers are way off.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/617/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/617/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6467
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6467/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6467/comments
https://api.github.com/repos/huggingface/datasets/issues/6467/events
https://github.com/huggingface/datasets/issues/6467
2,023,174,233
I_kwDODunzps54lzBZ
6,467
New version release request
{ "avatar_url": "https://avatars.githubusercontent.com/u/36994684?v=4", "events_url": "https://api.github.com/users/LZHgrla/events{/privacy}", "followers_url": "https://api.github.com/users/LZHgrla/followers", "following_url": "https://api.github.com/users/LZHgrla/following{/other_user}", "gists_url": "https://api.github.com/users/LZHgrla/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LZHgrla", "id": 36994684, "login": "LZHgrla", "node_id": "MDQ6VXNlcjM2OTk0Njg0", "organizations_url": "https://api.github.com/users/LZHgrla/orgs", "received_events_url": "https://api.github.com/users/LZHgrla/received_events", "repos_url": "https://api.github.com/users/LZHgrla/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LZHgrla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LZHgrla/subscriptions", "type": "User", "url": "https://api.github.com/users/LZHgrla" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "We will publish it soon (we usually do it in intervals of 1-2 months, so probably next week)", "Thanks!" ]
"2023-12-04T07:08:26Z"
"2023-12-04T15:42:22Z"
"2023-12-04T15:42:22Z"
CONTRIBUTOR
null
null
null
### Feature request Hi! I am using `datasets` in library `xtuner` and am highly interested in the features introduced since v2.15.0. To avoid installation from source in our pypi wheels, we are eagerly waiting for the new release. So, Does your team have a new release plan for v2.15.1 and could you please share it with us? Thanks very much! ### Motivation . ### Your contribution .
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6467/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6467/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2485
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2485/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2485/comments
https://api.github.com/repos/huggingface/datasets/issues/2485/events
https://github.com/huggingface/datasets/issues/2485
919,099,218
MDU6SXNzdWU5MTkwOTkyMTg=
2,485
Implement layered building
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
"2021-06-11T18:54:25Z"
"2021-06-11T18:54:25Z"
null
MEMBER
null
null
null
As discussed with @stas00 and @lhoestq (see also here https://github.com/huggingface/datasets/issues/2481#issuecomment-859712190): > My suggestion for this would be to have this enabled by default. > > Plus I don't know if there should be a dedicated issue to that is another functionality. But I propose layered building rather than all at once. That is: > > 1. uncompress a handful of files via a generator enough to generate one arrow file > 2. process arrow file 1 > 3. delete all the files that went in and aren't needed anymore. > > rinse and repeat. > > 1. This way much less disc space will be required - e.g. on JZ we won't be running into inode limitation, also it'd help with the collaborative hub training project > 2. The user doesn't need to go and manually clean up all the huge files that were left after pre-processing > 3. It would already include deleting temp files this issue is talking about > > I wonder if the new streaming API would be of help, except here the streaming would be into arrow files as the destination, rather than dataloaders.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2485/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2485/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/86
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/86/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/86/comments
https://api.github.com/repos/huggingface/datasets/issues/86/events
https://github.com/huggingface/datasets/pull/86
617,260,972
MDExOlB1bGxSZXF1ZXN0NDE3MjEwNzY2
86
[Load => load_dataset] change naming
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[]
"2020-05-13T08:43:00Z"
"2020-05-13T08:50:58Z"
"2020-05-13T08:50:57Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/86.diff", "html_url": "https://github.com/huggingface/datasets/pull/86", "merged_at": "2020-05-13T08:50:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/86.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/86" }
Rename leftovers @thomwolf
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/86/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/86/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2593
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2593/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2593/comments
https://api.github.com/repos/huggingface/datasets/issues/2593/events
https://github.com/huggingface/datasets/pull/2593
937,242,137
MDExOlB1bGxSZXF1ZXN0NjgzODMwMjcy
2,593
Support pandas 1.3.0 read_csv
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-07-05T16:40:04Z"
"2021-07-05T17:14:14Z"
"2021-07-05T17:14:14Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2593.diff", "html_url": "https://github.com/huggingface/datasets/pull/2593", "merged_at": "2021-07-05T17:14:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/2593.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2593" }
Workaround for this issue in pandas 1.3.0 : https://github.com/pandas-dev/pandas/issues/42387 The csv reader raises an error: ```python /usr/local/lib/python3.7/dist-packages/pandas/io/parsers/readers.py in _refine_defaults_read(dialect, delimiter, delim_whitespace, engine, sep, error_bad_lines, warn_bad_lines, on_bad_lines, names, prefix, defaults) 1304 1305 if names is not lib.no_default and prefix is not lib.no_default: -> 1306 raise ValueError("Specified named and prefix; you can only specify one.") 1307 1308 kwds["names"] = None if names is lib.no_default else names ValueError: Specified named and prefix; you can only specify one. ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2593/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2593/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1464/comments
https://api.github.com/repos/huggingface/datasets/issues/1464/events
https://github.com/huggingface/datasets/pull/1464
761,533,566
MDExOlB1bGxSZXF1ZXN0NTM2MTg3MDA0
1,464
Reddit jokes
{ "avatar_url": "https://avatars.githubusercontent.com/u/33005287?v=4", "events_url": "https://api.github.com/users/tanmoyio/events{/privacy}", "followers_url": "https://api.github.com/users/tanmoyio/followers", "following_url": "https://api.github.com/users/tanmoyio/following{/other_user}", "gists_url": "https://api.github.com/users/tanmoyio/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tanmoyio", "id": 33005287, "login": "tanmoyio", "node_id": "MDQ6VXNlcjMzMDA1Mjg3", "organizations_url": "https://api.github.com/users/tanmoyio/orgs", "received_events_url": "https://api.github.com/users/tanmoyio/received_events", "repos_url": "https://api.github.com/users/tanmoyio/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tanmoyio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmoyio/subscriptions", "type": "User", "url": "https://api.github.com/users/tanmoyio" }
[]
closed
false
null
[]
null
[ "@lhoestq would you please rerun the test, ", "I re-started the test.\r\n\r\n@lhoestq let's hold off on merging for now though, having a conversation on Slack about some of the offensive content in the dataset and how/whether we want to present it." ]
"2020-12-10T19:15:19Z"
"2020-12-10T20:14:00Z"
"2020-12-10T20:14:00Z"
CONTRIBUTOR
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/1464.diff", "html_url": "https://github.com/huggingface/datasets/pull/1464", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1464.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1464" }
196k Reddit Jokes dataset Dataset link- https://raw.githubusercontent.com/taivop/joke-dataset/master/reddit_jokes.json
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1464/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1464/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/903/comments
https://api.github.com/repos/huggingface/datasets/issues/903/events
https://github.com/huggingface/datasets/pull/903
752,360,614
MDExOlB1bGxSZXF1ZXN0NTI4Njk5NDQ3
903
Fix URL with backslash in Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "@lhoestq I was indeed working on that... to make another commit on this feature branch...", "But as you prefer... nevermind! :)", "Ah what do you have in mind for the tests ? I was thinking of adding a check in the MockDownloadManager used for tests based on dummy data. I'm creating a PR right now, I'd be happy to have your opinion", "Indeed I was thinking of something similar: monckeypatching the HTTP request...", "Therefore, if you agree, I am removing all the rest of `os.path.join`, both from the code and the docs...", "If you spot other `os.path.join` for urls in dataset scripts or metrics scripts feel free to fix them.\r\nIn the library itself (/src/datasets) it should be fine since there are tests and a windows CI, but if you have doubts of some usage of `os.path.join` somewhere, let me know.", "Alright create the test in #905 .\r\nThe windows CI is failing for all the datasets that have bad usage of `os.path.join` for urls.\r\nThere are of course the ones you fixed in this PR (thanks again !) but I found others as well such as pg19 and blimp.\r\nYou can check the full list by looking at the CI failures of the commit 1ce3354", "I am merging this one as well as #906 that should fix all of the datasets.\r\nThen I'll rebase #905 which adds the test that checks for bad urls and make sure it' all green now" ]
"2020-11-27T16:26:24Z"
"2020-11-27T18:04:46Z"
"2020-11-27T18:04:46Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/903.diff", "html_url": "https://github.com/huggingface/datasets/pull/903", "merged_at": "2020-11-27T18:04:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/903.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/903" }
In Windows, `os.path.join` generates URLs containing backslashes, when the first "path" does not end with a slash. In general, `os.path.join` should be avoided to generate URLs.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/903/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/903/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1507
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1507/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1507/comments
https://api.github.com/repos/huggingface/datasets/issues/1507/events
https://github.com/huggingface/datasets/pull/1507
763,857,872
MDExOlB1bGxSZXF1ZXN0NTM4MTgyMzE2
1,507
Add SelQA Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4", "events_url": "https://api.github.com/users/bharatr21/events{/privacy}", "followers_url": "https://api.github.com/users/bharatr21/followers", "following_url": "https://api.github.com/users/bharatr21/following{/other_user}", "gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bharatr21", "id": 13381361, "login": "bharatr21", "node_id": "MDQ6VXNlcjEzMzgxMzYx", "organizations_url": "https://api.github.com/users/bharatr21/orgs", "received_events_url": "https://api.github.com/users/bharatr21/received_events", "repos_url": "https://api.github.com/users/bharatr21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions", "type": "User", "url": "https://api.github.com/users/bharatr21" }
[]
closed
false
null
[]
null
[ "Hii please follow me", "The CI error `FAILED tests/test_file_utils.py::TempSeedTest::test_tensorflow` is not related with this dataset and is fixed on master. You can ignore it", "merging since the Ci is fixed on master" ]
"2020-12-12T13:58:07Z"
"2020-12-16T16:49:23Z"
"2020-12-16T16:49:23Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1507.diff", "html_url": "https://github.com/huggingface/datasets/pull/1507", "merged_at": "2020-12-16T16:49:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/1507.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1507" }
Add the SelQA Dataset, a new benchmark for selection-based question answering tasks Repo: https://github.com/emorynlp/selqa/ Paper: https://arxiv.org/pdf/1606.08513.pdf
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1507/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1507/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/873/comments
https://api.github.com/repos/huggingface/datasets/issues/873/events
https://github.com/huggingface/datasets/issues/873
747,959,523
MDU6SXNzdWU3NDc5NTk1MjM=
873
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
{ "avatar_url": "https://avatars.githubusercontent.com/u/19861874?v=4", "events_url": "https://api.github.com/users/vishal-burman/events{/privacy}", "followers_url": "https://api.github.com/users/vishal-burman/followers", "following_url": "https://api.github.com/users/vishal-burman/following{/other_user}", "gists_url": "https://api.github.com/users/vishal-burman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vishal-burman", "id": 19861874, "login": "vishal-burman", "node_id": "MDQ6VXNlcjE5ODYxODc0", "organizations_url": "https://api.github.com/users/vishal-burman/orgs", "received_events_url": "https://api.github.com/users/vishal-burman/received_events", "repos_url": "https://api.github.com/users/vishal-burman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vishal-burman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vishal-burman/subscriptions", "type": "User", "url": "https://api.github.com/users/vishal-burman" }
[]
closed
false
null
[]
null
[ "I get the same error. It was fixed some days ago, but again it appears", "Hi @mrm8488 it's working again today without any fix so I am closing this issue.", "I see the issue happening again today - \r\n\r\n[nltk_data] Downloading package stopwords to /root/nltk_data...\r\n[nltk_data] Package stopwords is already up-to-date!\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...\r\n\r\n---------------------------------------------------------------------------\r\n\r\nNotADirectoryError Traceback (most recent call last)\r\n\r\n<ipython-input-9-cd4bf8bea840> in <module>()\r\n 22 \r\n 23 \r\n---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')\r\n 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')\r\n 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')\r\n\r\n5 frames\r\n\r\n/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)\r\n 132 else:\r\n 133 logging.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 134 files = sorted(os.listdir(top_dir))\r\n 135 \r\n 136 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n\r\nCan someone please take a look ?", "Sometimes happens. Try in a while", "It is working now, thank you. ", "Has anyone solved this ? I still get this error ", "> atal(\"Unsupported publisher: %s\", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = []\r\n> \r\n> NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n> \r\n> Can someone please take a look ?\r\n\r\n2 short-term workarounds:\r\n\r\n1. Use this line instead `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`. [In a related issue](https://github.com/huggingface/datasets/issues/996#issuecomment-997343101), this person mentioned another data source copy that just works.\r\n2. Use the same data source, but edit the urls. Instead of google drive quota problems mentioned in #996, I was getting the \"can't scan this file for viruses\" problem, which results in that prompted html getting downloaded instead of the files. You can get around this by:\r\n 1. Look at the traceback and find out where `cnn_dailymail.py` is on your computer.\r\n 2. Edit the `cnn_stories` and `dm_stories` url's by adding the following to the end of them `&confirm=t`. This should be around line 67.\r\n 3. You may have to remove those confirmation html files in your download directory (`~/.cache/huggingface/datasets/downloads` for me) so that they don't get in the way of the new download attempts.\r\n\r\nEither method works for me. I would've made a PR, but not sure if they want to go with the new ccdv/cnn_dailymail source or not.", "experience the same problem, ccdv/cnn_dailymail not working either. \r\n\r\nSolve this problem by installing datasets library from the master branch:\r\npython -m pip install git+https://github.com/huggingface/datasets.git@master", "Seem to be getting this again even with 1.18.4. 
I believe it worked yesterday.", "Hitting this one as well.", ">Hitting this one as well.\r\n\r\nHas anyone solved this ? I still get this error", "@yoheimiyamoto The solution provided by @davidshinn (i.e. `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`) worked for me.", "> > atal(\"Unsupported publisher: %s\", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = []\r\n> > NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n> > Can someone please take a look ?\r\n> \r\n> 2 short-term workarounds:\r\n> \r\n> 1. Use this line instead `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`. [In a related issue](https://github.com/huggingface/datasets/issues/996#issuecomment-997343101), this person mentioned another data source copy that just works.\r\n> 2. Use the same data source, but edit the urls. Instead of google drive quota problems mentioned in [NotADirectoryError while loading the CNN/Dailymail dataset #996](https://github.com/huggingface/datasets/issues/996), I was getting the \"can't scan this file for viruses\" problem, which results in that prompted html getting downloaded instead of the files. You can get around this by:\r\n> \r\n> 1. Look at the traceback and find out where `cnn_dailymail.py` is on your computer.\r\n> 2. Edit the `cnn_stories` and `dm_stories` url's by adding the following to the end of them `&confirm=t`. This should be around line 67.\r\n> 3. You may have to remove those confirmation html files in your download directory (`~/.cache/huggingface/datasets/downloads` for me) so that they don't get in the way of the new download attempts.\r\n> \r\n> Either method works for me. I would've made a PR, but not sure if they want to go with the new ccdv/cnn_dailymail source or not.\r\n\r\nThankyou, editing the urls helped me than the loading dataset line." ]
"2020-11-21T06:30:45Z"
"2023-08-03T12:07:03Z"
"2020-11-22T12:18:05Z"
NONE
null
null
null
``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/873/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/873/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4126/comments
https://api.github.com/repos/huggingface/datasets/issues/4126/events
https://github.com/huggingface/datasets/issues/4126
1,196,665,194
I_kwDODunzps5HU6lq
4,126
dataset viewer issue for common_voice
{ "avatar_url": "https://avatars.githubusercontent.com/u/24724502?v=4", "events_url": "https://api.github.com/users/laphang/events{/privacy}", "followers_url": "https://api.github.com/users/laphang/followers", "following_url": "https://api.github.com/users/laphang/following{/other_user}", "gists_url": "https://api.github.com/users/laphang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/laphang", "id": 24724502, "login": "laphang", "node_id": "MDQ6VXNlcjI0NzI0NTAy", "organizations_url": "https://api.github.com/users/laphang/orgs", "received_events_url": "https://api.github.com/users/laphang/received_events", "repos_url": "https://api.github.com/users/laphang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/laphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laphang/subscriptions", "type": "User", "url": "https://api.github.com/users/laphang" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" }, { "color": "F83ACF", "default": false, "description": "", "id": 4027368468, "name": "audio_column", "node_id": "LA_kwDODunzps7wDMQU", "url": "https://api.github.com/repos/huggingface/datasets/labels/audio_column" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
[ "Yes, it's a known issue, and we expect to fix it soon.", "Fixed.\r\n\r\n<img width=\"1393\" alt=\"Capture d’écran 2022-04-25 à 15 42 05\" src=\"https://user-images.githubusercontent.com/1676121/165101176-d729d85b-efff-45a8-bad1-b69223edba5f.png\">\r\n" ]
"2022-04-07T23:34:28Z"
"2022-04-25T13:42:17Z"
"2022-04-25T13:42:16Z"
NONE
null
null
null
## Dataset viewer issue for 'common_voice' **Link:** https://huggingface.co/datasets/common_voice Server Error Status code: 400 Exception: TypeError Message: __init__() got an unexpected keyword argument 'audio_column' Am I the one who added this dataset ? No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4126/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4126/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/884/comments
https://api.github.com/repos/huggingface/datasets/issues/884/events
https://github.com/huggingface/datasets/pull/884
749,862,034
MDExOlB1bGxSZXF1ZXN0NTI2NjA5MDc1
884
Auto generate dummy data
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "I took your comments into account.\r\nAlso now after compressing the dummy_data.zip file it runs a dummy data test (=make sure each split has at least 1 example using the dummy data)", "I just tested the tool with some datasets and found out that it's not working for datasets that download files using `download_and_extract(file_url)` (where file_url is a `str`). That's because in that case the dummy_data.zip is not a folder but a single zipped file.\r\n\r\nI think we have to fix that or we can have unexpected behavior when a scripts calls `download_and_extract(file_url)` several times, since it would always point to the same dummy data file.\r\n\r\nSo I decided to change that to have a folder containing the dummy files instead but it breaks around 90 tests so I need to update 90 dummy data files to follow this scheme. I'll probably fix them tomorrow morning.\r\n\r\nWhat do you guys think ? Also cc @patrickvonplaten to make sure I understand things correctly", "Ok I changed to use the dummy_data.zip content to be a folder even for single url calls to `dl_manager.download_and_extract`. Therefore the automatic dummy data generation tool works for most datasets now.\r\n\r\nTo avoid having to change all the old dummy_data.zip files I added backward compatiblity. \r\n\r\nThe only test failing is `tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_xcopa`\r\nIt is expected to fail since I had modify its dummy data structure that was wrong. It was causing issue with backward compatibility. It will be fixed as soon as this PR is merged" ]
"2020-11-24T16:31:34Z"
"2020-11-26T14:18:47Z"
"2020-11-26T14:18:46Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/884.diff", "html_url": "https://github.com/huggingface/datasets/pull/884", "merged_at": "2020-11-26T14:18:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/884.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/884" }
When adding a new dataset to the library, dummy data creation can take some time. To make things easier I added a command line tool that automatically generates dummy data when possible. The tool only supports certain data files types: txt, csv, tsv, jsonl, json and xml. Here are some examples: ``` python datasets-cli dummy_data ./datasets/snli --auto_generate python datasets-cli dummy_data ./datasets/squad --auto_generate --json_field data python datasets-cli dummy_data ./datasets/iwslt2017 --auto_generate --xml_tag seg --match_text_files "train*" --n_lines 15 # --xml_tag seg => each sample corresponds to a "seg" tag in the xml tree # --match_text_files "train*" => also match text files that don't have a proper text file extension (no suffix like ".txt" for example) # --n_lines 15 => some text files have headers so we have to use at least 15 lines ``` and here is the command usage: ``` usage: datasets-cli <command> [<args>] dummy_data [-h] [--auto_generate] [--n_lines N_LINES] [--json_field JSON_FIELD] [--xml_tag XML_TAG] [--match_text_files MATCH_TEXT_FILES] [--keep_uncompressed] [--cache_dir CACHE_DIR] path_to_dataset positional arguments: path_to_dataset Path to the dataset (example: ./datasets/squad) optional arguments: -h, --help show this help message and exit --auto_generate Try to automatically generate dummy data --n_lines N_LINES Number of lines or samples to keep when auto- generating dummy data --json_field JSON_FIELD Optional, json field to read the data from when auto- generating dummy data. In the json data files, this field must point to a list of samples as json objects (ex: the 'data' field for squad-like files) --xml_tag XML_TAG Optional, xml tag name of the samples inside the xml files when auto-generating dummy data. --match_text_files MATCH_TEXT_FILES Optional, a comma separated list of file patterns that looks for line-by-line text files other than *.txt or *.csv. Example: --match_text_files *.label --keep_uncompressed Don't compress the dummy data folders when auto- generating dummy data. Useful for debugging for to do manual adjustements before compressing. --cache_dir CACHE_DIR Cache directory to download and cache files when auto- generating dummy data ``` The command generates all the necessary `dummy_data.zip` files (one per config). How it works: - it runs the split_generators() method of the dataset script to download the original data files - when downloading it records a mapping between the downloaded files and the corresponding expected dummy data files paths - then for each data file it creates the dummy data file keeping only the first samples (the strategy depends on the type of file) - finally it compresses the dummy data folders into dummy_zip files ready for dataset tests Let me know if that makes sense or if you have ideas to improve this tool ! I also added a unit test.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/884/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/884/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5745
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5745/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5745/comments
https://api.github.com/repos/huggingface/datasets/issues/5745/events
https://github.com/huggingface/datasets/pull/5745
1,667,086,143
PR_kwDODunzps5ORE2n
5,745
[BUG FIX] Issue 5744
{ "avatar_url": "https://avatars.githubusercontent.com/u/15572698?v=4", "events_url": "https://api.github.com/users/keyboardAnt/events{/privacy}", "followers_url": "https://api.github.com/users/keyboardAnt/followers", "following_url": "https://api.github.com/users/keyboardAnt/following{/other_user}", "gists_url": "https://api.github.com/users/keyboardAnt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/keyboardAnt", "id": 15572698, "login": "keyboardAnt", "node_id": "MDQ6VXNlcjE1NTcyNjk4", "organizations_url": "https://api.github.com/users/keyboardAnt/orgs", "received_events_url": "https://api.github.com/users/keyboardAnt/received_events", "repos_url": "https://api.github.com/users/keyboardAnt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/keyboardAnt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keyboardAnt/subscriptions", "type": "User", "url": "https://api.github.com/users/keyboardAnt" }
[]
open
false
null
[]
null
[ "Have met the same problem with datasets==2.8.0, pandas==2.0.0. It could be solved by installing the latest version of datasets or using datasets==2.8.0, pandas==1.5.3.", "Pandas 2.0.0 has removed support to passing `mangle_dupe_cols`.\r\n\r\nHowever, our `datasets` library does not use this parameter: it only passes it to pandas if the user passes it to `load_dataset`.\r\n\r\nYou should better:\r\n- Either \"take steps to stop the use of 'mangle_dupe_cols'\" (as it was suggested in the deprecation warning in pandas-1.5.3)\r\n- Or pin pandas (< 2.0.0) in your local requirements file\r\n\r\nPlease note that from `datasets` library, we don't want to force users to use a specific pandas version. We would like to support users as well:\r\n- that use pandas < 1.5.3\r\n- that use pandas >= 2.0.0 and that do not pass the 'mangle_dupe_cols' parameter", "`datasets` 2.11 doesn't pass `mangle_dupe_cols` unless the user specifies it indeed, so I think we're fine" ]
"2023-04-13T20:29:55Z"
"2023-04-21T15:22:43Z"
null
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5745.diff", "html_url": "https://github.com/huggingface/datasets/pull/5745", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5745.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5745" }
A temporal fix for https://github.com/huggingface/datasets/issues/5744.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5745/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5745/timeline
null
null
true