Dataset schema (one row per column: name, dtype, and observed length range or number of classes):

| column | dtype | values |
|---|---|---|
| url | stringlengths | 58 – 61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72 – 75 |
| comments_url | stringlengths | 67 – 70 |
| events_url | stringlengths | 65 – 68 |
| html_url | stringlengths | 46 – 51 |
| id | int64 | 600M – 2.05B |
| node_id | stringlengths | 18 – 32 |
| number | int64 | 2 – 6.51k |
| title | stringlengths | 1 – 290 |
| user | dict | |
| labels | listlengths | 0 – 4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0 – 4 |
| milestone | dict | |
| comments | sequencelengths | 0 – 30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 3 values |
| active_lock_reason | float64 | |
| draft | float64 | 0 – 1 |
| pull_request | dict | |
| body | stringlengths | 0 – 228k |
| reactions | dict | |
| timeline_url | stringlengths | 67 – 70 |
| performed_via_github_app | float64 | |
| state_reason | stringclasses | 3 values |
| is_pull_request | bool | 2 classes |
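The records below follow this schema, one field per line in the column order above. As a minimal sketch for working with such a dump (the file name is an assumption; any JSON Lines export of these issue records would do), the rows can be loaded and inspected with `datasets`:

```python
from datasets import load_dataset

# Hypothetical file name for a JSON Lines dump of the issue records below.
ds = load_dataset("json", data_files="github-issues.jsonl", split="train")

print(ds.features)       # column names and inferred dtypes, as in the table above
print(ds[0]["title"])    # first record's title, e.g. "Webdataset dataset builder"
```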
https://api.github.com/repos/huggingface/datasets/issues/6391
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6391/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6391/comments
https://api.github.com/repos/huggingface/datasets/issues/6391/events
https://github.com/huggingface/datasets/pull/6391
1,984,091,776
PR_kwDODunzps5e9BDO
6,391
Webdataset dataset builder
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I added an error message if the first examples don't appear to be in webdataset format\r\n```\r\n\"The TAR archives of the dataset should be in Webdataset format, \"\r\n\"but the files in the archive don't share the same prefix or the same types.\"\r\n```", "@mariosasko could you review this ? I think it's fine to have webdataset as an optional dependency for now, then depending on usage and user feedbacks see if it makes sense to have our own implementation or not", "I just removed the dependency on `webdataset` @mariosasko :)", "took your comments into account, lmk if you see anything else" ]
"2023-11-08T17:31:59Z"
"2023-11-28T16:33:33Z"
"2023-11-28T16:33:10Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6391.diff", "html_url": "https://github.com/huggingface/datasets/pull/6391", "merged_at": "2023-11-28T16:33:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/6391.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6391" }
Allow `load_dataset` to support the Webdataset format. It allows users to download/stream data from local files or from the Hugging Face Hub. Moreover, it will enable the Dataset Viewer for Webdataset datasets on HF. ## Implementation details - I added a new Webdataset builder - datasets with TAR files are now read using the Webdataset builder - Basic decoding from `webdataset` is used by default, except for unsafe ones like pickle - HF authentication support is done by registering a `webdataset.gopen` reader - `webdataset` uses buffering when reading files, so I had to add buffering support in `xopen` ## TODOS - [x] tests - [x] docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6391/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6391/timeline
null
null
true
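The PR body above describes the new Webdataset builder. A minimal usage sketch (the shard paths and Hub repo id are placeholders; streaming support is taken from the PR description):

```python
from datasets import load_dataset

# Load local TAR shards in WebDataset format with the dedicated builder.
ds = load_dataset("webdataset", data_files={"train": "shards/*.tar"}, split="train")

# The PR also adds authenticated download/streaming from the Hugging Face Hub;
# "user/webdataset-repo" is a placeholder repo id.
streamed = load_dataset("user/webdataset-repo", split="train", streaming=True)
```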
https://api.github.com/repos/huggingface/datasets/issues/3299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3299/comments
https://api.github.com/repos/huggingface/datasets/issues/3299/events
https://github.com/huggingface/datasets/issues/3299
1,058,518,213
I_kwDODunzps4_F7TF
3,299
Add option to find unique elements in nested sequences when calling `Dataset.unique`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi @mariosasko!\r\n\r\nHas this been patched into any of the releases?", "Hi! Not yet, would you be interested in contributing a PR? I can give you some pointers if needed. ", "@mariosasko did this ever get implemented? Willing to help if you are still up for it.", "@dcruiz01 No, but here is an example of how to do this with the existing API:\r\n\r\n\r\n```python\r\nds = Dataset.from_dict({\"tokens\": [[\"a\", \"b\"], [\"c\", \"a\"], [\"c\", \"e\"]]})\r\n\r\ndef flatten_tokens(pa_table):\r\n return pa.table([pc.list_flatten(pa_table[\"tokens\"])], [\"flat_tokens\"])\r\n\r\nds = ds.with_format(\"arrow\")\r\nds = ds.map(flatten_tokens, batched=True)\r\nds = ds.with_format(None)\r\n\r\nunique_tokens = ds.unique(\"flat_tokens\")\r\n```\r\n\r\nWhen I think about it, `.unique` on `Sequence(Value(...))` should return unique sequences/arrays, not unique elements of these sequences..." ]
"2021-11-19T13:16:06Z"
"2023-05-19T14:45:40Z"
null
CONTRIBUTOR
null
null
null
It would be nice to have an option to flatten nested sequences to find unique elements stored in them when calling `Dataset.unique`. ~~Currently, `Dataset.unique` only supports finding unique sequences and not unique elements in that situation.~~
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3299/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3299/timeline
null
null
false
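The flattening workaround in the final comment above omits its imports (`pa`, `pc`); a self-contained version of the same snippet, assuming only `datasets` and `pyarrow`, would be:

```python
import pyarrow as pa
import pyarrow.compute as pc
from datasets import Dataset

ds = Dataset.from_dict({"tokens": [["a", "b"], ["c", "a"], ["c", "e"]]})

def flatten_tokens(pa_table: pa.Table) -> pa.Table:
    # list_flatten concatenates every nested list into one flat array
    return pa.table([pc.list_flatten(pa_table["tokens"])], ["flat_tokens"])

ds = ds.with_format("arrow")              # map() now passes pyarrow Tables through
ds = ds.map(flatten_tokens, batched=True)
ds = ds.with_format(None)

unique_tokens = ds.unique("flat_tokens")  # ['a', 'b', 'c', 'e'] (order may vary)
```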
https://api.github.com/repos/huggingface/datasets/issues/4120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4120/comments
https://api.github.com/repos/huggingface/datasets/issues/4120/events
https://github.com/huggingface/datasets/issues/4120
1,195,887,430
I_kwDODunzps5HR8tG
4,120
Representing dictionaries (json) objects as features
{ "avatar_url": "https://avatars.githubusercontent.com/u/8031035?v=4", "events_url": "https://api.github.com/users/yanaiela/events{/privacy}", "followers_url": "https://api.github.com/users/yanaiela/followers", "following_url": "https://api.github.com/users/yanaiela/following{/other_user}", "gists_url": "https://api.github.com/users/yanaiela/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yanaiela", "id": 8031035, "login": "yanaiela", "node_id": "MDQ6VXNlcjgwMzEwMzU=", "organizations_url": "https://api.github.com/users/yanaiela/orgs", "received_events_url": "https://api.github.com/users/yanaiela/received_events", "repos_url": "https://api.github.com/users/yanaiela/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yanaiela/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanaiela/subscriptions", "type": "User", "url": "https://api.github.com/users/yanaiela" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
"2022-04-07T11:07:41Z"
"2022-04-07T11:07:41Z"
null
CONTRIBUTOR
null
null
null
In the process of adding a new dataset to the hub, I stumbled upon the inability to represent dictionaries that contain different key names, unknown in advance (and may differ between samples), originally asked in the [forum](https://discuss.huggingface.co/t/representing-nested-dictionary-with-different-keys/16442). For instance: ``` sample1 = {"nps": { "a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}, }} sample2 = {"nps": { "a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}, "c": {"id": 2, "text": "text3"}, }} sample3 = {"nps": { "a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}, "c": {"id": 2, "text": "text3"}, "d": {"id": 3, "text": "text4"}, }} ``` the `nps` field cannot be represented as a Feature while maintaining its original structure. @lhoestq suggested adding JSON as a new feature type, which would solve this problem. It seems like an alternative solution would be to change the original data format, which isn't an optimal solution in my case. Moreover, JSON is a common structure that is likely to be useful in future datasets as well.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4120/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4120/timeline
null
null
false
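Until a JSON feature type like the one proposed exists, a common workaround (a sketch, not part of the issue thread) is to store the variable-key mapping as a JSON string and decode it on access:

```python
import json
from datasets import Dataset, Features, Value

samples = [
    {"nps": {"a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}}},
    {"nps": {"a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"},
             "c": {"id": 2, "text": "text3"}}},
]

# Serialize the dict so every row shares a single fixed feature type.
ds = Dataset.from_dict(
    {"nps": [json.dumps(s["nps"]) for s in samples]},
    features=Features({"nps": Value("string")}),
)

nps = json.loads(ds[1]["nps"])  # back to a plain dict with arbitrary keys
print(nps["c"]["text"])         # "text3"
```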
https://api.github.com/repos/huggingface/datasets/issues/4907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4907/comments
https://api.github.com/repos/huggingface/datasets/issues/4907/events
https://github.com/huggingface/datasets/issues/4907
1,353,808,348
I_kwDODunzps5QsXnc
4,907
None Type error for swda datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4", "events_url": "https://api.github.com/users/hannan72/events{/privacy}", "followers_url": "https://api.github.com/users/hannan72/followers", "following_url": "https://api.github.com/users/hannan72/following{/other_user}", "gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hannan72", "id": 8229163, "login": "hannan72", "node_id": "MDQ6VXNlcjgyMjkxNjM=", "organizations_url": "https://api.github.com/users/hannan72/orgs", "received_events_url": "https://api.github.com/users/hannan72/received_events", "repos_url": "https://api.github.com/users/hannan72/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hannan72/subscriptions", "type": "User", "url": "https://api.github.com/users/hannan72" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Thanks for reporting @hannan72 ! I couldn't reproduce the error on my side, can you share the full stack trace please ?", "Thanks a lot for your response @lhoestq \r\nThe problem is solved accidentally today and I don't know exactly why it was happened yesterday.\r\nThe issue can be closed.", "Ok, let us know if you encounter the issue again ;)" ]
"2022-08-29T07:05:20Z"
"2022-08-30T14:43:41Z"
"2022-08-30T14:43:41Z"
NONE
null
null
null
## Describe the bug I got a `'NoneType' object is not callable` error while loading the swda dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("swda") ``` ## Expected results Run without error ## Environment info - `datasets` version: 2.4.0 - Python version: 3.8.10
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4907/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4907/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3428
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3428/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3428/comments
https://api.github.com/repos/huggingface/datasets/issues/3428/events
https://github.com/huggingface/datasets/pull/3428
1,078,863,468
PR_kwDODunzps4vxtNT
3,428
Clean squad dummy data
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-12-13T18:46:29Z"
"2021-12-13T18:57:50Z"
"2021-12-13T18:57:50Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3428.diff", "html_url": "https://github.com/huggingface/datasets/pull/3428", "merged_at": "2021-12-13T18:57:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/3428.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3428" }
Some unused files were remaining; this PR removes them. We just need to keep the dummy_data.zip file.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3428/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3428/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4735/comments
https://api.github.com/repos/huggingface/datasets/issues/4735/events
https://github.com/huggingface/datasets/pull/4735
1,314,501,641
PR_kwDODunzps477CuP
4,735
Pin rouge_score test dependency
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-07-22T07:18:21Z"
"2022-07-22T07:58:14Z"
"2022-07-22T07:45:18Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4735.diff", "html_url": "https://github.com/huggingface/datasets/pull/4735", "merged_at": "2022-07-22T07:45:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/4735.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4735" }
Temporarily pin `rouge_score` (to avoid latest version 0.7.0) until the issue is fixed. Fix #4734
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4735/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4735/timeline
null
null
true
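For reference, such a temporary pin in the test extras might look like the following (a hypothetical excerpt; the exact file and constraint used by the PR are assumptions):

```python
# Hypothetical excerpt from setup.py; the real PR may express the pin differently.
TESTS_REQUIRE = [
    # Temporarily avoid rouge_score 0.7.0 until the regression in #4734 is fixed.
    "rouge_score<0.7.0",
]
```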
https://api.github.com/repos/huggingface/datasets/issues/5916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5916/comments
https://api.github.com/repos/huggingface/datasets/issues/5916/events
https://github.com/huggingface/datasets/pull/5916
1,732,456,392
PR_kwDODunzps5RskTb
5,916
Unpin responses
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006113 / 0.011353 (-0.005239) | 0.004195 / 0.011008 (-0.006813) | 0.098103 / 0.038508 (0.059595) | 0.027970 / 0.023109 (0.004860) | 0.300992 / 0.275898 (0.025094) | 0.335402 / 0.323480 (0.011922) | 0.005079 / 0.007986 (-0.002906) | 0.003516 / 0.004328 (-0.000813) | 0.077311 / 0.004250 (0.073061) | 0.037863 / 0.037052 (0.000810) | 0.302638 / 0.258489 (0.044149) | 0.346554 / 0.293841 (0.052713) | 0.025218 / 0.128546 (-0.103328) | 0.008630 / 0.075646 (-0.067017) | 0.319748 / 0.419271 (-0.099523) | 0.049182 / 0.043533 (0.005650) | 0.306233 / 0.255139 (0.051094) | 0.331040 / 0.283200 (0.047840) | 0.089203 / 0.141683 (-0.052480) | 1.496104 / 1.452155 (0.043949) | 1.567878 / 1.492716 (0.075162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215774 / 0.018006 (0.197768) | 0.436810 / 0.000490 (0.436320) | 0.000307 / 0.000200 (0.000107) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024102 / 0.037411 (-0.013310) | 0.095459 / 0.014526 (0.080933) | 0.106564 / 0.176557 (-0.069992) | 0.169894 / 0.737135 (-0.567241) | 0.109152 / 0.296338 (-0.187186) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429066 / 0.215209 (0.213857) | 4.297385 / 2.077655 (2.219730) | 
2.054854 / 1.504120 (0.550734) | 1.846844 / 1.541195 (0.305649) | 1.840807 / 1.468490 (0.372317) | 0.553193 / 4.584777 (-4.031584) | 3.366788 / 3.745712 (-0.378924) | 1.727337 / 5.269862 (-3.542525) | 0.994357 / 4.565676 (-3.571319) | 0.067790 / 0.424275 (-0.356485) | 0.012002 / 0.007607 (0.004395) | 0.533335 / 0.226044 (0.307291) | 5.341341 / 2.268929 (3.072412) | 2.543581 / 55.444624 (-52.901043) | 2.220374 / 6.876477 (-4.656103) | 2.321656 / 2.142072 (0.179583) | 0.654408 / 4.805227 (-4.150819) | 0.134693 / 6.500664 (-6.365971) | 0.066926 / 0.075469 (-0.008544) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.209463 / 1.841788 (-0.632325) | 13.568221 / 8.074308 (5.493913) | 13.965418 / 10.191392 (3.774026) | 0.145049 / 0.680424 (-0.535375) | 0.016936 / 0.534201 (-0.517265) | 0.371587 / 0.579283 (-0.207696) | 0.386363 / 0.434364 (-0.048001) | 0.437137 / 0.540337 (-0.103201) | 0.514779 / 1.386936 (-0.872157) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006245 / 0.011353 (-0.005108) | 0.004232 / 0.011008 (-0.006776) | 0.075682 / 0.038508 (0.037174) | 0.027858 / 0.023109 (0.004749) | 0.425325 / 0.275898 (0.149427) | 0.466732 / 0.323480 (0.143253) | 0.005240 / 0.007986 (-0.002745) | 0.003506 / 0.004328 (-0.000823) | 0.075294 / 0.004250 (0.071044) | 0.041677 / 0.037052 (0.004624) | 0.426552 / 0.258489 (0.168063) | 0.469452 / 0.293841 (0.175611) | 0.025443 / 0.128546 (-0.103104) | 0.008526 / 0.075646 (-0.067120) | 0.082190 / 0.419271 (-0.337081) | 0.040906 / 0.043533 (-0.002626) | 0.428406 / 0.255139 (0.173267) | 0.446795 / 0.283200 (0.163595) | 0.093837 / 0.141683 (-0.047846) | 1.518639 / 1.452155 (0.066484) | 1.620214 / 1.492716 (0.127498) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223259 / 0.018006 (0.205253) | 0.425077 / 0.000490 (0.424588) | 0.001980 / 0.000200 (0.001780) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025813 / 0.037411 (-0.011599) | 0.103062 / 0.014526 (0.088536) | 0.108958 / 0.176557 (-0.067598) | 0.161591 / 0.737135 (-0.575544) | 0.112130 / 0.296338 (-0.184209) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472843 / 0.215209 (0.257634) | 4.713281 / 2.077655 (2.635626) | 2.458216 / 1.504120 (0.954096) | 2.272467 / 1.541195 (0.731273) | 2.324456 / 1.468490 (0.855965) | 0.554686 / 4.584777 (-4.030091) | 3.445079 / 3.745712 (-0.300634) | 3.451896 / 5.269862 (-1.817966) | 1.431065 / 4.565676 (-3.134612) | 0.067868 / 0.424275 (-0.356407) | 0.012093 / 0.007607 (0.004486) | 0.573571 / 0.226044 (0.347526) | 5.820452 / 2.268929 (3.551523) | 2.934858 / 55.444624 (-52.509767) | 2.602719 / 6.876477 (-4.273758) | 2.645999 / 2.142072 (0.503927) | 0.660688 / 4.805227 (-4.144540) | 0.137490 / 6.500664 (-6.363174) | 0.068311 / 0.075469 (-0.007158) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.321709 / 1.841788 (-0.520079) | 14.592346 / 8.074308 (6.518038) | 14.520748 / 10.191392 (4.329356) | 0.132689 / 0.680424 (-0.547735) | 0.016422 / 0.534201 (-0.517779) | 0.370071 / 0.579283 (-0.209212) | 0.397091 / 0.434364 (-0.037273) | 0.431979 / 0.540337 (-0.108358) | 0.509965 / 1.386936 (-0.876971) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8bcd061ab2082a0862f30329bc52f6e0d321805c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006182 / 0.011353 (-0.005171) | 0.004153 / 0.011008 (-0.006855) | 0.095715 / 0.038508 (0.057207) | 0.032457 / 0.023109 (0.009347) | 0.314961 / 0.275898 (0.039063) | 0.353696 / 0.323480 (0.030216) | 0.005256 / 0.007986 (-0.002729) | 0.004870 / 0.004328 (0.000541) | 0.072442 / 0.004250 (0.068192) | 0.046102 / 0.037052 (0.009050) | 0.324410 / 0.258489 (0.065921) | 0.366861 / 0.293841 (0.073020) | 0.027088 / 0.128546 (-0.101458) | 0.008572 / 0.075646 (-0.067075) | 0.325988 / 0.419271 (-0.093284) | 0.049494 / 0.043533 (0.005961) | 0.311221 / 0.255139 (0.056082) | 0.359720 / 0.283200 (0.076521) | 0.095101 / 0.141683 (-0.046581) | 1.472821 / 1.452155 (0.020667) | 1.516157 / 1.492716 (0.023441) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210456 / 0.018006 (0.192450) | 0.439440 / 0.000490 (0.438950) | 0.003764 / 0.000200 (0.003564) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024076 / 0.037411 (-0.013335) | 0.104886 / 0.014526 (0.090360) | 0.114164 / 0.176557 (-0.062393) | 0.167289 / 0.737135 (-0.569847) | 0.116457 / 0.296338 (-0.179882) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400039 / 0.215209 (0.184830) | 3.973243 / 2.077655 (1.895588) | 1.801991 / 1.504120 (0.297871) | 1.592017 / 1.541195 (0.050822) | 1.612564 / 1.468490 (0.144074) | 0.527475 / 4.584777 (-4.057302) | 3.676246 / 3.745712 (-0.069466) | 1.806423 / 5.269862 (-3.463438) | 1.176921 / 4.565676 (-3.388756) | 0.065902 / 0.424275 (-0.358373) | 0.012245 / 0.007607 (0.004638) | 0.490883 / 0.226044 (0.264838) | 4.905270 / 2.268929 (2.636341) | 2.218694 / 55.444624 (-53.225930) | 1.903074 / 6.876477 (-4.973403) | 1.979505 / 2.142072 (-0.162567) | 0.644415 / 4.805227 (-4.160812) | 0.142433 / 6.500664 (-6.358231) | 0.063564 / 0.075469 (-0.011905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193756 / 1.841788 (-0.648032) | 14.673103 / 8.074308 (6.598795) | 13.410951 / 10.191392 (3.219559) | 0.159175 / 0.680424 (-0.521249) | 0.017076 / 0.534201 (-0.517125) | 0.388880 / 0.579283 (-0.190403) | 0.409974 / 0.434364 (-0.024390) | 0.454494 / 0.540337 
(-0.085844) | 0.556873 / 1.386936 (-0.830063) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006107 / 0.011353 (-0.005246) | 0.004433 / 0.011008 (-0.006575) | 0.073892 / 0.038508 (0.035384) | 0.032386 / 0.023109 (0.009277) | 0.370339 / 0.275898 (0.094441) | 0.388996 / 0.323480 (0.065516) | 0.005438 / 0.007986 (-0.002548) | 0.003875 / 0.004328 (-0.000454) | 0.073867 / 0.004250 (0.069617) | 0.048350 / 0.037052 (0.011298) | 0.380328 / 0.258489 (0.121839) | 0.411373 / 0.293841 (0.117532) | 0.028183 / 0.128546 (-0.100363) | 0.008924 / 0.075646 (-0.066723) | 0.082484 / 0.419271 (-0.336787) | 0.047321 / 0.043533 (0.003788) | 0.371702 / 0.255139 (0.116563) | 0.380535 / 0.283200 (0.097335) | 0.100772 / 0.141683 (-0.040911) | 1.475038 / 1.452155 (0.022883) | 1.564293 / 1.492716 (0.071577) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214589 / 0.018006 (0.196583) | 0.437193 / 0.000490 (0.436703) | 0.003676 / 0.000200 (0.003476) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027991 / 0.037411 (-0.009421) | 0.111154 / 0.014526 (0.096628) | 0.120365 / 0.176557 (-0.056191) | 0.173601 / 0.737135 (-0.563535) | 0.126244 / 0.296338 (-0.170094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442848 / 0.215209 (0.227639) | 4.398336 / 2.077655 (2.320681) | 2.217058 / 1.504120 (0.712938) | 2.011155 / 1.541195 (0.469960) | 2.123086 
/ 1.468490 (0.654596) | 0.525857 / 4.584777 (-4.058920) | 3.730191 / 3.745712 (-0.015521) | 3.517680 / 5.269862 (-1.752181) | 1.557940 / 4.565676 (-3.007736) | 0.066309 / 0.424275 (-0.357967) | 0.011788 / 0.007607 (0.004181) | 0.548506 / 0.226044 (0.322462) | 5.483615 / 2.268929 (3.214687) | 2.663784 / 55.444624 (-52.780840) | 2.325744 / 6.876477 (-4.550732) | 2.344179 / 2.142072 (0.202106) | 0.644217 / 4.805227 (-4.161010) | 0.141546 / 6.500664 (-6.359118) | 0.063730 / 0.075469 (-0.011739) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296032 / 1.841788 (-0.545756) | 14.903729 / 8.074308 (6.829421) | 14.505409 / 10.191392 (4.314017) | 0.170478 / 0.680424 (-0.509946) | 0.017876 / 0.534201 (-0.516325) | 0.401047 / 0.579283 (-0.178236) | 0.417855 / 0.434364 (-0.016509) | 0.472138 / 0.540337 (-0.068200) | 0.570859 / 1.386936 (-0.816077) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5a4d530965eb35c66955ef89df79210c66b7f5e6 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008495 / 0.011353 (-0.002858) | 0.005322 / 0.011008 (-0.005686) | 0.125471 / 0.038508 (0.086962) | 0.034604 / 0.023109 (0.011495) | 0.419831 / 0.275898 (0.143933) | 0.415707 / 0.323480 (0.092227) | 0.007471 / 0.007986 (-0.000515) | 0.005441 / 0.004328 (0.001112) | 0.095412 / 0.004250 (0.091162) | 0.053865 / 0.037052 (0.016812) | 0.375257 / 0.258489 (0.116768) | 0.438114 / 0.293841 (0.144273) | 0.046183 / 0.128546 (-0.082363) | 0.013663 / 0.075646 (-0.061984) | 0.438317 / 0.419271 (0.019045) | 0.065665 / 0.043533 (0.022133) | 0.387640 / 0.255139 (0.132501) | 0.431350 / 0.283200 (0.148150) | 0.112841 / 0.141683 (-0.028842) | 1.778639 / 1.452155 (0.326484) | 1.891948 / 1.492716 (0.399232) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284371 / 0.018006 (0.266365) | 0.598247 / 0.000490 (0.597758) | 0.013674 / 0.000200 
(0.013474) | 0.000483 / 0.000054 (0.000428) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032437 / 0.037411 (-0.004974) | 0.120547 / 0.014526 (0.106021) | 0.129845 / 0.176557 (-0.046711) | 0.203455 / 0.737135 (-0.533680) | 0.140039 / 0.296338 (-0.156300) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.596549 / 0.215209 (0.381340) | 6.138766 / 2.077655 (4.061111) | 2.515506 / 1.504120 (1.011386) | 2.124472 / 1.541195 (0.583277) | 2.160812 / 1.468490 (0.692322) | 0.898965 / 4.584777 (-3.685812) | 5.588152 / 3.745712 (1.842440) | 2.717580 / 5.269862 (-2.552282) | 1.683641 / 4.565676 (-2.882036) | 0.108045 / 0.424275 (-0.316230) | 0.014089 / 0.007607 (0.006481) | 0.749567 / 0.226044 (0.523523) | 7.518051 / 2.268929 (5.249123) | 3.198238 / 55.444624 (-52.246386) | 2.575156 / 6.876477 (-4.301321) | 2.725818 / 2.142072 (0.583745) | 1.149338 / 4.805227 (-3.655889) | 0.220443 / 6.500664 (-6.280221) | 0.081452 / 0.075469 (0.005983) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.624462 / 1.841788 (-0.217325) | 18.204963 / 8.074308 (10.130655) | 21.379169 / 10.191392 (11.187777) | 0.248520 / 0.680424 (-0.431903) | 0.030121 / 0.534201 (-0.504080) | 0.499542 / 0.579283 (-0.079741) | 0.599783 / 0.434364 (0.165419) | 0.597642 / 0.540337 (0.057305) | 0.681948 / 1.386936 (-0.704988) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008431 / 0.011353 (-0.002921) | 0.006143 / 0.011008 (-0.004865) | 0.107531 / 0.038508 (0.069023) | 0.036308 / 0.023109 (0.013199) | 0.480555 / 0.275898 (0.204657) | 0.556407 / 0.323480 (0.232927) | 0.007614 / 0.007986 (-0.000372) | 0.004749 / 0.004328 (0.000421) | 0.105734 / 0.004250 (0.101484) | 0.051619 / 0.037052 (0.014567) | 0.514821 / 0.258489 (0.256332) | 0.562143 / 0.293841 (0.268302) | 0.042957 / 0.128546 (-0.085589) | 0.015142 / 0.075646 (-0.060505) | 0.143161 / 0.419271 (-0.276111) | 0.061910 / 0.043533 (0.018377) | 0.496923 / 0.255139 (0.241784) | 0.556302 / 0.283200 (0.273102) | 0.136700 / 0.141683 (-0.004983) | 1.886184 / 1.452155 (0.434029) | 2.004087 / 1.492716 (0.511371) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235530 / 0.018006 (0.217523) | 0.600796 / 0.000490 (0.600306) | 0.009074 / 0.000200 (0.008874) | 0.000203 / 0.000054 (0.000149) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036345 / 0.037411 (-0.001066) | 0.126112 / 0.014526 (0.111586) | 0.143369 / 0.176557 (-0.033188) | 0.211381 / 0.737135 (-0.525755) | 0.151095 / 0.296338 (-0.145243) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.695022 / 0.215209 (0.479813) | 6.685981 / 2.077655 (4.608326) | 3.104521 / 1.504120 (1.600401) | 2.758323 / 1.541195 (1.217128) | 2.706286 / 1.468490 (1.237796) | 0.941182 / 4.584777 (-3.643595) | 5.715839 / 3.745712 (1.970127) | 5.089636 / 5.269862 (-0.180226) | 2.594739 / 4.565676 (-1.970937) | 0.112621 / 0.424275 (-0.311655) | 0.014001 / 0.007607 (0.006394) | 0.812990 / 0.226044 (0.586945) | 8.060890 / 2.268929 (5.791961) | 3.832506 / 55.444624 (-51.612119) | 3.148051 / 6.876477 (-3.728425) | 3.110096 / 2.142072 (0.968023) | 1.105050 / 4.805227 (-3.700178) | 0.219835 / 6.500664 (-6.280829) | 0.078600 / 0.075469 (0.003131) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.707551 / 1.841788 (-0.134237) | 19.238194 / 8.074308 (11.163885) | 22.167076 / 10.191392 (11.975684) | 0.233458 / 0.680424 (-0.446966) | 0.025131 / 0.534201 (-0.509070) | 0.525241 / 0.579283 (-0.054042) | 0.649666 / 0.434364 (0.215303) | 0.602941 / 0.540337 (0.062603) | 0.718472 / 1.386936 (-0.668464) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ac3a42c525d91cb630273702a0c110a71c9bf54b \"CML watermark\")\n" ]
"2023-05-30T14:59:48Z"
"2023-05-30T18:03:10Z"
"2023-05-30T17:53:29Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5916.diff", "html_url": "https://github.com/huggingface/datasets/pull/5916", "merged_at": "2023-05-30T17:53:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/5916.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5916" }
Fix #5906
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5916/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5916/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/974
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/974/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/974/comments
https://api.github.com/repos/huggingface/datasets/issues/974/events
https://github.com/huggingface/datasets/pull/974
754,811,185
MDExOlB1bGxSZXF1ZXN0NTMwNjQzNzQ3
974
Add MeTooMA Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/23264033?v=4", "events_url": "https://api.github.com/users/akash418/events{/privacy}", "followers_url": "https://api.github.com/users/akash418/followers", "following_url": "https://api.github.com/users/akash418/following{/other_user}", "gists_url": "https://api.github.com/users/akash418/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akash418", "id": 23264033, "login": "akash418", "node_id": "MDQ6VXNlcjIzMjY0MDMz", "organizations_url": "https://api.github.com/users/akash418/orgs", "received_events_url": "https://api.github.com/users/akash418/received_events", "repos_url": "https://api.github.com/users/akash418/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akash418/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akash418/subscriptions", "type": "User", "url": "https://api.github.com/users/akash418" }
[]
closed
false
null
[]
null
[]
"2020-12-01T23:44:01Z"
"2020-12-01T23:57:58Z"
"2020-12-01T23:57:58Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/974.diff", "html_url": "https://github.com/huggingface/datasets/pull/974", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/974.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/974" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/974/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/974/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5797
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5797/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5797/comments
https://api.github.com/repos/huggingface/datasets/issues/5797/events
https://github.com/huggingface/datasets/issues/5797
1,685,501,199
I_kwDODunzps5kdrUP
5,797
load_dataset is case sensitive?
{ "avatar_url": "https://avatars.githubusercontent.com/u/34729065?v=4", "events_url": "https://api.github.com/users/haonan-li/events{/privacy}", "followers_url": "https://api.github.com/users/haonan-li/followers", "following_url": "https://api.github.com/users/haonan-li/following{/other_user}", "gists_url": "https://api.github.com/users/haonan-li/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/haonan-li", "id": 34729065, "login": "haonan-li", "node_id": "MDQ6VXNlcjM0NzI5MDY1", "organizations_url": "https://api.github.com/users/haonan-li/orgs", "received_events_url": "https://api.github.com/users/haonan-li/received_events", "repos_url": "https://api.github.com/users/haonan-li/repos", "site_admin": false, "starred_url": "https://api.github.com/users/haonan-li/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haonan-li/subscriptions", "type": "User", "url": "https://api.github.com/users/haonan-li" }
[]
open
false
null
[]
null
[ "Hi @haonan-li , thank you for the report! It seems to be a bug on the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) site, there is even no such dataset as `mbzuai/bactrian-x` on the Hub. I opened and [issue](https://github.com/huggingface/huggingface_hub/issues/1453) there.", "I think `load_dataset(\"mbzuai/bactrian-x\")` shouldn't be loaded at all and raise an error but because of [this fallback](https://github.com/huggingface/datasets/blob/main/src/datasets/load.py#L1194) to packaged loaders when no other options are applicable, it loads the dataset with standard `json` loader instead of the custom loading script." ]
"2023-04-26T18:19:04Z"
"2023-04-27T11:56:58Z"
null
NONE
null
null
null
### Describe the bug load_dataset() function is case sensitive? ### Steps to reproduce the bug The following two calls get totally different behavior. 1. load_dataset('mbzuai/bactrian-x','en') 2. load_dataset('MBZUAI/Bactrian-X','en') ### Expected behavior Compare 1 and 2. 1 will download all 52 subsets, shell output: ```Downloading and preparing dataset json/MBZUAI--bactrian-X to xxx``` 2 will only download a single subset, shell output: ```Downloading and preparing dataset bactrian-x/en to xxx``` ### Environment info Python 3.10.11 datasets Version: 2.11.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5797/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5797/timeline
null
null
false
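As the comments explain, the lowercase id silently falls back to the packaged `json` loader instead of raising. One way to resolve the canonical casing before loading (a sketch using `huggingface_hub`; relying on the Hub's case-insensitive redirect is an assumption here) is:

```python
from datasets import load_dataset
from huggingface_hub import HfApi

# dataset_info follows the Hub's redirect and reports the canonical repo id.
info = HfApi().dataset_info("mbzuai/bactrian-x")
print(info.id)  # expected: "MBZUAI/Bactrian-X"

# Load with the canonical id so the dataset's own loading logic is used.
ds = load_dataset(info.id, "en")
```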
https://api.github.com/repos/huggingface/datasets/issues/632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/632/comments
https://api.github.com/repos/huggingface/datasets/issues/632/events
https://github.com/huggingface/datasets/pull/632
702,358,124
MDExOlB1bGxSZXF1ZXN0NDg3NjQ5OTQ2
632
Fix typos in the loading datasets docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "thanks!" ]
"2020-09-16T00:27:41Z"
"2020-09-21T16:31:11Z"
"2020-09-16T06:52:44Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/632.diff", "html_url": "https://github.com/huggingface/datasets/pull/632", "merged_at": "2020-09-16T06:52:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/632.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/632" }
This PR fixes two typos in the loading datasets docs, one of them being a broken link to the `load_dataset` function.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/632/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/632/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5554/comments
https://api.github.com/repos/huggingface/datasets/issues/5554/events
https://github.com/huggingface/datasets/pull/5554
1,592,285,062
PR_kwDODunzps5KXhZh
5,554
Add resampy dep
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008735 / 0.011353 (-0.002618) | 0.004514 / 0.011008 (-0.006494) | 0.099348 / 0.038508 (0.060840) | 0.030060 / 0.023109 (0.006951) | 0.302189 / 0.275898 (0.026291) | 0.339535 / 0.323480 (0.016055) | 0.007053 / 0.007986 (-0.000933) | 0.003420 / 0.004328 (-0.000909) | 0.076967 / 0.004250 (0.072717) | 0.034484 / 0.037052 (-0.002568) | 0.304349 / 0.258489 (0.045860) | 0.354032 / 0.293841 (0.060191) | 0.033552 / 0.128546 (-0.094995) | 0.011405 / 0.075646 (-0.064241) | 0.324773 / 0.419271 (-0.094498) | 0.041103 / 0.043533 (-0.002429) | 0.313559 / 0.255139 (0.058420) | 0.333251 / 0.283200 (0.050052) | 0.087580 / 0.141683 (-0.054103) | 1.460324 / 1.452155 (0.008169) | 1.552239 / 1.492716 (0.059523) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183759 / 0.018006 (0.165753) | 0.413274 / 0.000490 (0.412784) | 0.001684 / 0.000200 (0.001484) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023341 / 0.037411 (-0.014071) | 0.098368 / 0.014526 (0.083842) | 0.105522 / 0.176557 (-0.071034) | 0.151581 / 0.737135 (-0.585554) | 0.108980 / 0.296338 (-0.187358) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417856 / 0.215209 (0.202647) | 4.167570 / 2.077655 (2.089915) | 1.843669 / 1.504120 (0.339549) | 1.643130 / 1.541195 (0.101936) | 1.717587 / 1.468490 
(0.249097) | 0.696392 / 4.584777 (-3.888384) | 3.427617 / 3.745712 (-0.318096) | 2.816486 / 5.269862 (-2.453376) | 1.539519 / 4.565676 (-3.026157) | 0.082112 / 0.424275 (-0.342163) | 0.012425 / 0.007607 (0.004818) | 0.525325 / 0.226044 (0.299281) | 5.251710 / 2.268929 (2.982781) | 2.273641 / 55.444624 (-53.170983) | 1.931002 / 6.876477 (-4.945474) | 1.977253 / 2.142072 (-0.164819) | 0.804794 / 4.805227 (-4.000434) | 0.147324 / 6.500664 (-6.353340) | 0.064966 / 0.075469 (-0.010503) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193173 / 1.841788 (-0.648615) | 13.705127 / 8.074308 (5.630819) | 14.348408 / 10.191392 (4.157016) | 0.165374 / 0.680424 (-0.515050) | 0.028288 / 0.534201 (-0.505913) | 0.402546 / 0.579283 (-0.176737) | 0.413503 / 0.434364 (-0.020861) | 0.473298 / 0.540337 (-0.067039) | 0.567571 / 1.386936 (-0.819365) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006735 / 0.011353 (-0.004618) | 0.004601 / 0.011008 (-0.006407) | 0.077414 / 0.038508 (0.038906) | 0.027402 / 0.023109 (0.004293) | 0.353469 / 0.275898 (0.077571) | 0.381697 / 0.323480 (0.058218) | 0.005076 / 0.007986 (-0.002910) | 0.004665 / 0.004328 (0.000336) | 0.076210 / 0.004250 (0.071960) | 0.039114 / 0.037052 (0.002061) | 0.354980 / 0.258489 (0.096491) | 0.389648 / 0.293841 (0.095807) | 0.031674 / 0.128546 (-0.096872) | 0.011752 / 0.075646 (-0.063894) | 0.086330 / 0.419271 (-0.332942) | 0.041530 / 0.043533 (-0.002003) | 0.343002 / 0.255139 (0.087863) | 0.365959 / 0.283200 (0.082760) | 0.091848 / 0.141683 (-0.049835) | 1.519427 / 1.452155 (0.067272) | 1.591529 / 1.492716 (0.098813) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216458 / 0.018006 (0.198452) | 0.403326 / 0.000490 (0.402836) | 0.000432 / 0.000200 (0.000232) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025106 / 0.037411 (-0.012305) | 0.101113 / 0.014526 (0.086588) | 0.108104 / 0.176557 (-0.068453) | 0.142342 / 0.737135 (-0.594794) | 0.112012 / 0.296338 (-0.184326) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443128 / 0.215209 (0.227919) | 4.434707 / 2.077655 (2.357052) | 2.115434 / 1.504120 (0.611315) | 1.902865 / 1.541195 (0.361670) | 1.996981 / 1.468490 (0.528491) | 0.702485 / 4.584777 (-3.882292) | 3.419151 / 3.745712 (-0.326561) | 1.911977 / 5.269862 (-3.357884) | 1.178195 / 4.565676 (-3.387481) | 0.082985 / 0.424275 (-0.341290) | 0.012415 / 0.007607 (0.004808) | 0.546188 / 0.226044 (0.320144) | 5.463592 / 2.268929 (3.194664) | 2.574911 / 55.444624 (-52.869713) | 2.232883 / 6.876477 (-4.643594) | 2.284391 / 2.142072 (0.142319) | 0.807389 / 4.805227 (-3.997839) | 0.151461 / 6.500664 (-6.349203) | 0.067831 / 0.075469 (-0.007638) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286605 / 1.841788 (-0.555183) | 14.230328 / 8.074308 (6.156020) | 13.944645 / 10.191392 (3.753253) | 0.153725 / 0.680424 (-0.526699) | 0.016876 / 0.534201 (-0.517325) | 0.386109 / 0.579283 (-0.193174) | 0.401798 / 0.434364 (-0.032566) | 0.467883 / 0.540337 (-0.072454) | 0.557788 / 1.386936 (-0.829148) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c07f5c9268ce55d0e2022b018d5f44cfcedf1e43 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009305 / 0.011353 (-0.002048) | 0.004978 / 0.011008 (-0.006031) | 0.101687 / 0.038508 (0.063179) | 0.035339 / 0.023109 (0.012230) | 0.294770 / 0.275898 (0.018872) | 0.355491 / 0.323480 (0.032011) | 0.008183 / 0.007986 (0.000197) | 0.004076 / 0.004328 (-0.000253) | 0.077552 / 0.004250 (0.073302) | 0.042891 / 0.037052 (0.005838) | 0.305727 / 0.258489 (0.047238) | 0.336508 / 0.293841 (0.042667) | 0.038525 / 0.128546 (-0.090022) | 0.011878 / 0.075646 (-0.063768) | 0.334136 / 0.419271 (-0.085136) | 0.047548 / 0.043533 (0.004015) | 0.301749 / 0.255139 (0.046610) | 0.318221 / 0.283200 (0.035022) | 0.099172 / 0.141683 (-0.042511) | 1.440638 / 1.452155 (-0.011516) | 1.503505 / 1.492716 (0.010789) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202748 / 0.018006 (0.184742) | 0.433670 / 0.000490 (0.433181) | 0.003139 / 0.000200 (0.002939) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025555 / 0.037411 (-0.011856) | 0.107156 / 0.014526 (0.092631) | 0.116706 / 0.176557 (-0.059851) | 0.153165 / 0.737135 (-0.583970) | 0.122614 / 0.296338 (-0.173724) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398912 / 0.215209 (0.183703) | 3.965048 / 2.077655 (1.887394) | 1.894678 / 1.504120 (0.390558) | 1.706925 / 1.541195 (0.165730) | 1.745264 / 1.468490 (0.276774) | 0.691174 / 4.584777 (-3.893603) | 3.824583 / 3.745712 (0.078871) | 3.876806 / 5.269862 (-1.393055) | 1.898991 / 4.565676 (-2.666685) | 0.083687 / 0.424275 (-0.340588) | 0.012122 / 0.007607 (0.004514) | 0.510870 / 0.226044 (0.284825) | 5.094523 / 2.268929 (2.825594) | 2.265557 / 55.444624 (-53.179067) | 1.930882 / 6.876477 (-4.945594) | 2.016090 / 2.142072 (-0.125983) | 0.833108 / 4.805227 (-3.972119) | 0.164804 / 6.500664 (-6.335860) | 0.062864 / 0.075469 (-0.012605) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.192673 / 1.841788 (-0.649115) | 14.730393 / 8.074308 (6.656085) | 14.550736 / 10.191392 (4.359344) | 0.154451 / 0.680424 (-0.525973) | 0.029222 / 0.534201 (-0.504979) | 0.440939 / 0.579283 (-0.138345) | 0.442772 / 0.434364 (0.008409) | 0.543948 / 0.540337 (0.003610) | 0.638113 / 
1.386936 (-0.748824) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007589 / 0.011353 (-0.003764) | 0.005208 / 0.011008 (-0.005800) | 0.073797 / 0.038508 (0.035289) | 0.034021 / 0.023109 (0.010912) | 0.366120 / 0.275898 (0.090222) | 0.397105 / 0.323480 (0.073625) | 0.005837 / 0.007986 (-0.002148) | 0.004028 / 0.004328 (-0.000301) | 0.073502 / 0.004250 (0.069252) | 0.051233 / 0.037052 (0.014181) | 0.359849 / 0.258489 (0.101360) | 0.397476 / 0.293841 (0.103635) | 0.036727 / 0.128546 (-0.091819) | 0.012249 / 0.075646 (-0.063397) | 0.086600 / 0.419271 (-0.332671) | 0.051156 / 0.043533 (0.007623) | 0.343441 / 0.255139 (0.088302) | 0.389672 / 0.283200 (0.106472) | 0.105180 / 0.141683 (-0.036503) | 1.439719 / 1.452155 (-0.012435) | 1.537779 / 1.492716 (0.045062) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199429 / 0.018006 (0.181422) | 0.440837 / 0.000490 (0.440347) | 0.005333 / 0.000200 (0.005133) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029581 / 0.037411 (-0.007830) | 0.113789 / 0.014526 (0.099263) | 0.123799 / 0.176557 (-0.052758) | 0.163772 / 0.737135 (-0.573363) | 0.127156 / 0.296338 (-0.169183) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422803 / 0.215209 (0.207594) | 4.192400 / 2.077655 (2.114745) | 1.994561 / 1.504120 (0.490441) | 1.807085 / 1.541195 (0.265890) | 1.927539 / 1.468490 (0.459049) | 
0.708804 / 4.584777 (-3.875973) | 3.790662 / 3.745712 (0.044950) | 3.667207 / 5.269862 (-1.602655) | 1.985107 / 4.565676 (-2.580570) | 0.086609 / 0.424275 (-0.337666) | 0.012613 / 0.007607 (0.005006) | 0.520167 / 0.226044 (0.294122) | 5.208657 / 2.268929 (2.939729) | 2.500383 / 55.444624 (-52.944241) | 2.129817 / 6.876477 (-4.746660) | 2.181205 / 2.142072 (0.039133) | 0.847925 / 4.805227 (-3.957303) | 0.168293 / 6.500664 (-6.332372) | 0.065066 / 0.075469 (-0.010403) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.261053 / 1.841788 (-0.580735) | 15.091644 / 8.074308 (7.017336) | 14.126139 / 10.191392 (3.934747) | 0.184956 / 0.680424 (-0.495468) | 0.017909 / 0.534201 (-0.516292) | 0.428918 / 0.579283 (-0.150365) | 0.429637 / 0.434364 (-0.004727) | 0.530900 / 0.540337 (-0.009437) | 0.627966 / 1.386936 (-0.758970) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a72fd153d3499a5c5eda783673073c9f557f11e0 \"CML watermark\")\n", "I think we should also suggest installing `resampy` in the error message thrown by the Audio feature when `librosa` is not installed.", "exploring a better solution at https://github.com/huggingface/datasets/pull/5556" ]
"2023-02-20T18:15:43Z"
"2023-09-24T10:07:29Z"
"2023-02-21T12:43:38Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5554.diff", "html_url": "https://github.com/huggingface/datasets/pull/5554", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5554.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5554" }
In librosa 0.10 they removed `resampy` as a required dependency and made it optional. However, it is necessary for resampling. I added it to the "audio" extra dependencies.
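A rough sketch of what that change could look like in the project's `setup.py`; the surrounding structure and any other entries are assumptions for illustration, not taken from the actual diff:

```python
# Hypothetical excerpt from setup.py: librosa 0.10 made resampy optional,
# but resampy is still needed for resampling, so it is listed in the
# "audio" extra alongside librosa.
EXTRAS_REQUIRE = {
    "audio": [
        "librosa",
        "resampy",  # no longer pulled in transitively by librosa >= 0.10
    ],
}
```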
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5554/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5554/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4223
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4223/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4223/comments
https://api.github.com/repos/huggingface/datasets/issues/4223/events
https://github.com/huggingface/datasets/pull/4223
1,216,107,082
PR_kwDODunzps42z0YV
4,223
Add Accuracy Metric Card
{ "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/emibaylor", "id": 27527747, "login": "emibaylor", "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "repos_url": "https://api.github.com/users/emibaylor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "type": "User", "url": "https://api.github.com/users/emibaylor" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-04-26T15:10:46Z"
"2022-05-03T14:27:45Z"
"2022-05-03T14:20:47Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4223.diff", "html_url": "https://github.com/huggingface/datasets/pull/4223", "merged_at": "2022-05-03T14:20:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/4223.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4223" }
- adds accuracy metric card
- updates docstring in accuracy.py
- adds .json file with metric card and docstring information
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4223/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4223/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1624/comments
https://api.github.com/repos/huggingface/datasets/issues/1624/events
https://github.com/huggingface/datasets/issues/1624
773,669,700
MDU6SXNzdWU3NzM2Njk3MDA=
1,624
Cannot download ade_corpus_v2
{ "avatar_url": "https://avatars.githubusercontent.com/u/20259310?v=4", "events_url": "https://api.github.com/users/him1411/events{/privacy}", "followers_url": "https://api.github.com/users/him1411/followers", "following_url": "https://api.github.com/users/him1411/following{/other_user}", "gists_url": "https://api.github.com/users/him1411/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/him1411", "id": 20259310, "login": "him1411", "node_id": "MDQ6VXNlcjIwMjU5MzEw", "organizations_url": "https://api.github.com/users/him1411/orgs", "received_events_url": "https://api.github.com/users/him1411/received_events", "repos_url": "https://api.github.com/users/him1411/repos", "site_admin": false, "starred_url": "https://api.github.com/users/him1411/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/him1411/subscriptions", "type": "User", "url": "https://api.github.com/users/him1411" }
[]
closed
false
null
[]
null
[ "Hi @him1411, the dataset you are trying to load has been added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`", "`ade_corpus_v2` was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `ade_corpus_v2` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"ade_corpus_v2\", \"Ade_corpos_v2_drug_ade_relation\")\r\n```\r\n\r\n(looks like there is a typo in the configuration name, we'll fix it for the v2.0 release of `datasets` soon)" ]
"2020-12-23T10:58:14Z"
"2021-08-03T05:08:54Z"
"2021-08-03T05:08:54Z"
NONE
null
null
null
I tried to get the dataset following this URL: https://huggingface.co/datasets/ade_corpus_v2 but received this error: `Traceback (most recent call last): File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 278, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 282, in prepare_module combined_path, github_file_path, file_path FileNotFoundError: Couldn't find file locally at ade_corpus_v2/ade_corpus_v2.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py`
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/1624/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1624/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3183/comments
https://api.github.com/repos/huggingface/datasets/issues/3183/events
https://github.com/huggingface/datasets/pull/3183
1,039,761,120
PR_kwDODunzps4t3Dag
3,183
Add missing docstring to DownloadConfig
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
"2021-10-29T16:56:35Z"
"2021-11-02T10:25:38Z"
"2021-11-02T10:25:37Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3183.diff", "html_url": "https://github.com/huggingface/datasets/pull/3183", "merged_at": "2021-11-02T10:25:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/3183.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3183" }
Document the `use_etag` and `num_proc` attributes in `DownloadConfig`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3183/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3183/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3413
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3413/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3413/comments
https://api.github.com/repos/huggingface/datasets/issues/3413/events
https://github.com/huggingface/datasets/pull/3413
1,075,854,325
PR_kwDODunzps4voNZv
3,413
Add WIDER FACE dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
"2021-12-09T18:03:38Z"
"2022-01-12T14:13:47Z"
"2022-01-12T14:13:47Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3413.diff", "html_url": "https://github.com/huggingface/datasets/pull/3413", "merged_at": "2022-01-12T14:13:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/3413.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3413" }
Adds the WIDER FACE face detection benchmark. TODOs:
* [x] dataset card
* [x] dummy data
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3413/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3413/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5997/comments
https://api.github.com/repos/huggingface/datasets/issues/5997/events
https://github.com/huggingface/datasets/issues/5997
1,781,582,818
I_kwDODunzps5qMMvi
5,997
extend the map function so it can wrap around long text that does not fit in the context window
{ "avatar_url": "https://avatars.githubusercontent.com/u/127623723?v=4", "events_url": "https://api.github.com/users/siddhsql/events{/privacy}", "followers_url": "https://api.github.com/users/siddhsql/followers", "following_url": "https://api.github.com/users/siddhsql/following{/other_user}", "gists_url": "https://api.github.com/users/siddhsql/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/siddhsql", "id": 127623723, "login": "siddhsql", "node_id": "U_kgDOB5tiKw", "organizations_url": "https://api.github.com/users/siddhsql/orgs", "received_events_url": "https://api.github.com/users/siddhsql/received_events", "repos_url": "https://api.github.com/users/siddhsql/repos", "site_admin": false, "starred_url": "https://api.github.com/users/siddhsql/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/siddhsql/subscriptions", "type": "User", "url": "https://api.github.com/users/siddhsql" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "I just noticed the [docs](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2881C11-L2881C200) say:\r\n\r\n>If batched is `True` and `batch_size` is `n > 1`, then the function takes a batch of `n` examples as input and can return a batch with `n` examples, or with an arbitrary number of examples.\r\n\r\nso maybe this is a bug then.", "All the values in a batch must be of the same length. So one solution is dropping all the input columns:\r\n```python\r\ndata = data.map(lambda samples: tokenizer(samples[\"text\"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True, remove_columns=data.column_names)\r\n```\r\n\r\nAnother is padding/transforming the input columns to the tokenizer output's length (447). " ]
"2023-06-29T22:15:21Z"
"2023-07-03T17:58:52Z"
null
NONE
null
null
null
### Feature request

I understand `datasets` provides a [`map`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2849) function. This function in turn takes in a callable that is used to tokenize the text on which a model is trained. Frequently this text will not fit within a model's context window. In this case it would be useful to wrap long text around into multiple rows, with each row fitting the model's context window. I tried to do this using the following code as an example, which in turn I borrowed from [here](https://stackoverflow.com/a/76343993/147530):

```
data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)
```

but running the code gives me this error:

```
File "/llm/fine-tune.py", line 117, in <module>
    data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 580, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 545, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3087, in map
    for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3480, in _map_single
    writer.write_batch(batch)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_writer.py", line 556, in write_batch
    pa_table = pa.Table.from_arrays(arrays, schema=schema)
File "pyarrow/table.pxi", line 3798, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 2962, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 394 but got length 447
```

The lambda function I have provided correctly chops up long text so it wraps around (which is why 394 samples become 447 after wrapping), but the dataset's `map` function does not like it.

### Motivation

Please see above.

### Your contribution

I'm afraid I don't have much knowledge to help
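For context, here is a minimal, self-contained sketch of the first workaround proposed in the comments on this issue — dropping the input columns via `remove_columns` so that every column in the output batch has the overflow-expanded length. The dataset and tokenizer checkpoint below are illustrative assumptions, not from the original report:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Illustrative setup; the issue author's actual dataset and checkpoint are not given.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
data = load_dataset("imdb", split="train[:100]")

# return_overflowing_tokens=True can turn n input rows into more than n output rows,
# so the original columns (still length n) must be removed to keep the batch consistent.
data = data.map(
    lambda samples: tokenizer(
        samples["text"],
        max_length=tokenizer.model_max_length,
        truncation=True,
        stride=4,
        return_overflowing_tokens=True,
    ),
    batched=True,
    remove_columns=data.column_names,
)
```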
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5997/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5997/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3046/comments
https://api.github.com/repos/huggingface/datasets/issues/3046/events
https://github.com/huggingface/datasets/pull/3046
1,021,021,368
PR_kwDODunzps4s8MjS
3,046
Fix MedDialog metadata JSON
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-10-08T12:04:40Z"
"2021-10-11T07:46:43Z"
"2021-10-11T07:46:42Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3046.diff", "html_url": "https://github.com/huggingface/datasets/pull/3046", "merged_at": "2021-10-11T07:46:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/3046.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3046" }
Fix #2969.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3046/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3046/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5010/comments
https://api.github.com/repos/huggingface/datasets/issues/5010/events
https://github.com/huggingface/datasets/pull/5010
1,382,308,799
PR_kwDODunzps4_bB3q
5,010
Add deprecation warning to multilingual_librispeech dataset card
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-09-22T11:41:59Z"
"2022-09-23T12:04:37Z"
"2022-09-23T12:02:45Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5010.diff", "html_url": "https://github.com/huggingface/datasets/pull/5010", "merged_at": "2022-09-23T12:02:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/5010.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5010" }
Besides the current deprecation warning in the script of `multilingual_librispeech`, this PR adds a deprecation warning to its dataset card as well. The format of the deprecation warning is aligned with the one in the library documentation when docstrings contain the `<Deprecated/>` tag. Related to: - #4060
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5010/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5010/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6072
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6072/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6072/comments
https://api.github.com/repos/huggingface/datasets/issues/6072/events
https://github.com/huggingface/datasets/pull/6072
1,822,123,560
PR_kwDODunzps5WbWFN
6,072
Fix fsspec storage_options from load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007617 / 0.011353 (-0.003736) | 0.004580 / 0.011008 (-0.006428) | 0.100913 / 0.038508 (0.062405) | 0.087703 / 0.023109 (0.064594) | 0.424159 / 0.275898 (0.148261) | 0.467195 / 0.323480 (0.143715) | 0.006890 / 0.007986 (-0.001096) | 0.003765 / 0.004328 (-0.000564) | 0.077513 / 0.004250 (0.073262) | 0.064889 / 0.037052 (0.027837) | 0.422349 / 0.258489 (0.163860) | 0.477391 / 0.293841 (0.183550) | 0.036025 / 0.128546 (-0.092522) | 0.009939 / 0.075646 (-0.065707) | 0.342409 / 0.419271 (-0.076862) | 0.061568 / 0.043533 (0.018035) | 0.431070 / 0.255139 (0.175931) | 0.462008 / 0.283200 (0.178809) | 0.027480 / 0.141683 (-0.114203) | 1.802271 / 1.452155 (0.350116) | 1.861336 / 1.492716 (0.368620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255806 / 0.018006 (0.237800) | 0.507969 / 0.000490 (0.507479) | 0.010060 / 0.000200 (0.009860) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032286 / 0.037411 (-0.005125) | 0.104468 / 0.014526 (0.089942) | 0.112707 / 0.176557 (-0.063850) | 0.181285 / 0.737135 (-0.555850) | 0.113180 / 0.296338 (-0.183158) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449265 / 0.215209 (0.234056) | 4.465941 / 2.077655 (2.388287) | 
2.177889 / 1.504120 (0.673769) | 1.969864 / 1.541195 (0.428669) | 2.077502 / 1.468490 (0.609011) | 0.561607 / 4.584777 (-4.023170) | 4.281873 / 3.745712 (0.536161) | 4.975352 / 5.269862 (-0.294510) | 2.907121 / 4.565676 (-1.658555) | 0.070205 / 0.424275 (-0.354070) | 0.009164 / 0.007607 (0.001557) | 0.581921 / 0.226044 (0.355876) | 5.538667 / 2.268929 (3.269739) | 2.798853 / 55.444624 (-52.645771) | 2.314015 / 6.876477 (-4.562462) | 2.584836 / 2.142072 (0.442763) | 0.672333 / 4.805227 (-4.132894) | 0.153828 / 6.500664 (-6.346836) | 0.069757 / 0.075469 (-0.005712) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.559670 / 1.841788 (-0.282118) | 23.994639 / 8.074308 (15.920331) | 16.856160 / 10.191392 (6.664768) | 0.195555 / 0.680424 (-0.484869) | 0.021586 / 0.534201 (-0.512615) | 0.469295 / 0.579283 (-0.109989) | 0.481582 / 0.434364 (0.047218) | 0.588667 / 0.540337 (0.048329) | 0.734347 / 1.386936 (-0.652589) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009614 / 0.011353 (-0.001739) | 0.004616 / 0.011008 (-0.006392) | 0.077223 / 0.038508 (0.038715) | 0.103074 / 0.023109 (0.079965) | 0.447834 / 0.275898 (0.171936) | 0.524696 / 0.323480 (0.201216) | 0.007120 / 0.007986 (-0.000866) | 0.003890 / 0.004328 (-0.000438) | 0.076406 / 0.004250 (0.072156) | 0.073488 / 0.037052 (0.036436) | 0.466221 / 0.258489 (0.207732) | 0.532206 / 0.293841 (0.238365) | 0.037596 / 0.128546 (-0.090950) | 0.010029 / 0.075646 (-0.065617) | 0.084313 / 0.419271 (-0.334959) | 0.060088 / 0.043533 (0.016555) | 0.437792 / 0.255139 (0.182653) | 0.512850 / 0.283200 (0.229650) | 0.032424 / 0.141683 (-0.109259) | 1.762130 / 1.452155 (0.309975) | 1.946097 / 1.492716 (0.453381) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250774 / 0.018006 (0.232768) | 0.506869 / 0.000490 (0.506379) | 0.008232 / 0.000200 (0.008032) | 0.000164 / 0.000054 (0.000110) |\n\n### Benchmark: 
…(tail of the comments field omitted: several near-identical auto-generated CML benchmark reports, each a "Show benchmarks" details block comparing new/old timings for benchmark_array_xd, benchmark_getitem_100B, benchmark_indices_mapping, benchmark_iterating, and benchmark_map_filter under PyArrow==8.0.0 and PyArrow==latest, each ending with a cml.dev watermark image) ]
"2023-07-26T10:44:23Z"
"2023-07-27T12:51:51Z"
"2023-07-27T12:42:57Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6072.diff", "html_url": "https://github.com/huggingface/datasets/pull/6072", "merged_at": "2023-07-27T12:42:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/6072.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6072" }
close https://github.com/huggingface/datasets/issues/6071
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6072/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6072/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5813/comments
https://api.github.com/repos/huggingface/datasets/issues/5813/events
https://github.com/huggingface/datasets/pull/5813
1,691,908,535
PR_kwDODunzps5Pj0_E
5,813
[DO-NOT-MERGE] Debug Windows issue at #3
{ "avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4", "events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}", "followers_url": "https://api.github.com/users/HyukjinKwon/followers", "following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}", "gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HyukjinKwon", "id": 6477701, "login": "HyukjinKwon", "node_id": "MDQ6VXNlcjY0Nzc3MDE=", "organizations_url": "https://api.github.com/users/HyukjinKwon/orgs", "received_events_url": "https://api.github.com/users/HyukjinKwon/received_events", "repos_url": "https://api.github.com/users/HyukjinKwon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions", "type": "User", "url": "https://api.github.com/users/HyukjinKwon" }
[]
closed
false
null
[]
null
[]
"2023-05-02T07:19:34Z"
"2023-05-02T07:21:30Z"
"2023-05-02T07:21:30Z"
NONE
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/5813.diff", "html_url": "https://github.com/huggingface/datasets/pull/5813", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5813.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5813" }
TBD
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5813/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5813/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/411
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/411/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/411/comments
https://api.github.com/repos/huggingface/datasets/issues/411/events
https://github.com/huggingface/datasets/pull/411
659,393,398
MDExOlB1bGxSZXF1ZXN0NDUxMjQxOTQy
411
Sbf
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
[]
"2020-07-17T16:19:45Z"
"2020-07-21T09:13:46Z"
"2020-07-21T09:13:45Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/411.diff", "html_url": "https://github.com/huggingface/datasets/pull/411", "merged_at": "2020-07-21T09:13:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/411.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/411" }
This PR adds the Social Bias Frames dataset (ACL 2020). Dataset homepage: https://homes.cs.washington.edu/~msap/social-bias-frames/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/411/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/411/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1603
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1603/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1603/comments
https://api.github.com/repos/huggingface/datasets/issues/1603/events
https://github.com/huggingface/datasets/pull/1603
770,857,221
MDExOlB1bGxSZXF1ZXN0NTQyNTIwNDkx
1,603
Add retries to HTTP requests
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "merging this one then :) " ]
"2020-12-18T12:41:31Z"
"2020-12-22T15:34:07Z"
"2020-12-22T15:34:07Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1603.diff", "html_url": "https://github.com/huggingface/datasets/pull/1603", "merged_at": "2020-12-22T15:34:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/1603.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1603" }
## What does this PR do? It adds retries to HTTP GET & HEAD requests when they fail with a `ConnectTimeout` exception. The "canonical" way to do this is to use [urllib3's Retry class](https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html#urllib3.util.Retry) and wrap it in an [HTTPAdapter](https://requests.readthedocs.io/en/master/api/#requests.adapters.HTTPAdapter). That seems a bit overkill to me, and it forces us to use the `requests.Session` object, so I prefer this simpler implementation (a sketch follows this record). I'm open to remarks and suggestions @lhoestq @yjernite Fixes #1102
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1603/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1603/timeline
null
null
true
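
The PR description above outlines a plain retry loop rather than the `Session`-based approach. What follows is a minimal sketch of that idea, not the PR's actual diff: the helper name, backoff parameters, and defaults are assumptions for illustration.

```python
# A hedged sketch of the "simpler implementation" the PR description outlines.
# Assumption: the real helper in `datasets` may differ in name and defaults.
import time

import requests


def _request_with_retry(
    method: str,
    url: str,
    max_retries: int = 3,
    base_wait_time: float = 0.5,
    max_wait_time: float = 2.0,
    **params,
) -> requests.Response:
    """Send an HTTP GET/HEAD request, retrying on ConnectTimeout with backoff."""
    tries = 0
    while True:
        tries += 1
        try:
            # Only ConnectTimeout triggers a retry; other errors propagate.
            return requests.request(method=method.upper(), url=url, **params)
        except requests.exceptions.ConnectTimeout:
            if tries > max_retries:
                raise
            # Exponential backoff, capped at max_wait_time.
            time.sleep(min(max_wait_time, base_wait_time * 2 ** (tries - 1)))
```

The "canonical" alternative the author rejects would mount `requests.adapters.HTTPAdapter(max_retries=urllib3.util.Retry(...))` on a `requests.Session`, which handles the same failure mode at the transport layer but couples callers to a shared session object.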
https://api.github.com/repos/huggingface/datasets/issues/987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/987/comments
https://api.github.com/repos/huggingface/datasets/issues/987/events
https://github.com/huggingface/datasets/pull/987
755,059,469
MDExOlB1bGxSZXF1ZXN0NTMwODQ0MTQ4
987
Add OPUS DOGC dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
"2020-12-02T08:30:32Z"
"2020-12-04T13:27:41Z"
"2020-12-04T13:27:41Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/987.diff", "html_url": "https://github.com/huggingface/datasets/pull/987", "merged_at": "2020-12-04T13:27:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/987.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/987" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/987/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/987/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2490/comments
https://api.github.com/repos/huggingface/datasets/issues/2490/events
https://github.com/huggingface/datasets/pull/2490
919,571,385
MDExOlB1bGxSZXF1ZXN0NjY4ODc4NDA3
2,490
Allow latest pyarrow version
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
{ "closed_at": "2021-07-09T05:50:07Z", "closed_issues": 12, "created_at": "2021-05-31T16:13:06Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-07-08T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/5", "id": 6808903, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "open_issues": 0, "state": "closed", "title": "1.9", "updated_at": "2021-07-12T14:12:00Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/5" }
[ "i need some help with this" ]
"2021-06-12T14:17:34Z"
"2021-07-06T16:54:52Z"
"2021-06-14T07:53:23Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2490.diff", "html_url": "https://github.com/huggingface/datasets/pull/2490", "merged_at": "2021-06-14T07:53:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/2490.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2490" }
Allow the latest pyarrow version, now that version 4.0.1 fixes the segfault bug introduced in version 4.0.0 (an illustrative version pin follows this record). Close #2489.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2490/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2490/timeline
null
null
true
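
For illustration only, loosening a dependency constraint while excluding a single broken release might look like the snippet below. The exact version bounds are assumptions, not the PR's actual diff.

```python
# Hypothetical excerpt from setup.py: drop the upper bound on pyarrow while
# skipping the 4.0.0 release whose segfault was fixed in 4.0.1.
# Assumption: the lower bound shown is illustrative, not the real constraint.
REQUIRED_PKGS = [
    "pyarrow>=1.0.0,!=4.0.0",
]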
https://api.github.com/repos/huggingface/datasets/issues/4102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4102/comments
https://api.github.com/repos/huggingface/datasets/issues/4102/events
https://github.com/huggingface/datasets/pull/4102
1,193,616,722
PR_kwDODunzps41roGx
4,102
[hub] Fix `api.create_repo` call?
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4102). All of your documentation changes will be reflected on that endpoint.", "Closing in favor of https://github.com/huggingface/datasets/pull/4106" ]
"2022-04-05T19:21:52Z"
"2023-09-24T10:01:14Z"
"2022-04-12T08:41:46Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4102.diff", "html_url": "https://github.com/huggingface/datasets/pull/4102", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4102.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4102" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4102/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4102/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2460
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2460/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2460/comments
https://api.github.com/repos/huggingface/datasets/issues/2460/events
https://github.com/huggingface/datasets/pull/2460
915,268,536
MDExOlB1bGxSZXF1ZXN0NjY1MTAyMjA4
2,460
Revert default in-memory for small datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
{ "closed_at": "2021-06-08T18:51:04Z", "closed_issues": 2, "created_at": "2021-04-20T16:49:16Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-06-08T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/4", "id": 6680642, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/4/labels", "node_id": "MDk6TWlsZXN0b25lNjY4MDY0Mg==", "number": 4, "open_issues": 0, "state": "closed", "title": "1.8", "updated_at": "2021-06-08T18:51:37Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/4" }
[ "Thank you for this welcome change guys!" ]
"2021-06-08T17:14:23Z"
"2021-06-08T18:04:14Z"
"2021-06-08T17:55:43Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2460.diff", "html_url": "https://github.com/huggingface/datasets/pull/2460", "merged_at": "2021-06-08T17:55:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/2460.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2460" }
Close #2458
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2460/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2460/timeline
null
null
true
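
For context, a hedged usage sketch of the behavior this revert touches: keeping a small dataset in RAM becomes an explicit opt-in rather than an automatic size-based default. The `keep_in_memory` flag is the documented opt-in in `datasets`; the dataset name here is just an example.

```python
from datasets import load_dataset

# With the size-based default reverted, datasets are memory-mapped from disk
# unless the caller explicitly asks to keep them in memory.
ds = load_dataset("squad", split="train", keep_in_memory=True)
```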
https://api.github.com/repos/huggingface/datasets/issues/4951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4951/comments
https://api.github.com/repos/huggingface/datasets/issues/4951/events
https://github.com/huggingface/datasets/pull/4951
1,365,954,814
PR_kwDODunzps4-lDqd
4,951
Fix license information in qasc dataset card
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-09-08T10:04:39Z"
"2022-09-08T14:54:47Z"
"2022-09-08T14:52:05Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4951.diff", "html_url": "https://github.com/huggingface/datasets/pull/4951", "merged_at": "2022-09-08T14:52:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/4951.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4951" }
This PR adds the license information to the `qasc` dataset. As reported via GitHub by Tushar Khot, the dataset is licensed under CC BY 4.0: - https://github.com/allenai/qasc/issues/5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4951/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4951/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/762
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/762/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/762/comments
https://api.github.com/repos/huggingface/datasets/issues/762/events
https://github.com/huggingface/datasets/issues/762
730,586,972
MDU6SXNzdWU3MzA1ODY5NzI=
762
[GEM] Add Czech Restaurant data-to-text generation dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
"2020-10-27T16:00:47Z"
"2020-12-03T13:37:44Z"
"2020-12-03T13:37:44Z"
MEMBER
null
null
null
- Paper: https://www.aclweb.org/anthology/W19-8670.pdf - Data: https://github.com/UFAL-DSG/cs_restaurant_dataset - The dataset will likely be part of the GEM benchmark
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/762/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/762/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5826/comments
https://api.github.com/repos/huggingface/datasets/issues/5826/events
https://github.com/huggingface/datasets/pull/5826
1,698,155,751
PR_kwDODunzps5P5FYZ
5,826
Support working_dir in from_spark
{ "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maddiedawson", "id": 106995444, "login": "maddiedawson", "node_id": "U_kgDOBmCe9A", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "repos_url": "https://api.github.com/users/maddiedawson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "type": "User", "url": "https://api.github.com/users/maddiedawson" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Added env var", "@lhoestq would you or another maintainer be able to review please? :)", "I removed the env var", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005771 / 0.011353 (-0.005582) | 0.004086 / 0.011008 (-0.006922) | 0.097170 / 0.038508 (0.058661) | 0.027464 / 0.023109 (0.004355) | 0.305425 / 0.275898 (0.029527) | 0.343869 / 0.323480 (0.020389) | 0.004899 / 0.007986 (-0.003087) | 0.003294 / 0.004328 (-0.001034) | 0.074710 / 0.004250 (0.070459) | 0.034982 / 0.037052 (-0.002070) | 0.306063 / 0.258489 (0.047574) | 0.343115 / 0.293841 (0.049274) | 0.025155 / 0.128546 (-0.103392) | 0.008429 / 0.075646 (-0.067217) | 0.318680 / 0.419271 (-0.100591) | 0.043304 / 0.043533 (-0.000229) | 0.306703 / 0.255139 (0.051564) | 0.335535 / 0.283200 (0.052335) | 0.087428 / 0.141683 (-0.054255) | 1.483769 / 1.452155 (0.031614) | 1.538753 / 1.492716 (0.046037) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203313 / 0.018006 (0.185307) | 0.413864 / 0.000490 (0.413375) | 0.003186 / 0.000200 (0.002986) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022862 / 0.037411 (-0.014550) | 0.097306 / 0.014526 (0.082780) | 0.102823 / 0.176557 (-0.073733) | 0.162803 / 0.737135 (-0.574333) | 0.106311 / 0.296338 (-0.190028) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451710 / 0.215209 (0.236501) | 4.508520 / 2.077655 (2.430865) | 2.181118 / 1.504120 (0.676998) | 1.977607 / 1.541195 (0.436412) | 2.008366 / 1.468490 (0.539876) | 0.565388 / 4.584777 (-4.019389) | 3.439318 / 3.745712 (-0.306394) | 1.747512 / 5.269862 (-3.522349) | 1.102124 / 4.565676 (-3.463553) | 0.069212 / 0.424275 (-0.355063) | 0.011926 / 0.007607 (0.004318) | 0.553414 / 0.226044 (0.327370) | 5.548959 / 2.268929 (3.280031) | 2.628769 / 55.444624 (-52.815856) | 2.301003 / 6.876477 (-4.575473) | 2.341744 / 2.142072 (0.199672) | 0.673092 / 4.805227 (-4.132135) | 0.137722 / 6.500664 (-6.362942) | 0.066909 / 0.075469 (-0.008560) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196854 / 1.841788 (-0.644934) | 13.421776 / 8.074308 (5.347468) | 13.839760 / 10.191392 (3.648368) | 0.140557 / 0.680424 (-0.539867) | 0.016619 / 0.534201 (-0.517582) | 0.357985 / 0.579283 (-0.221298) | 0.387018 / 0.434364 (-0.047346) | 0.452798 / 0.540337 (-0.087540) | 0.542085 / 1.386936 (-0.844851) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005868 / 0.011353 (-0.005484) | 0.004103 / 0.011008 (-0.006905) | 0.076126 / 0.038508 (0.037618) | 0.027744 / 0.023109 (0.004635) | 0.357257 / 0.275898 (0.081359) | 0.387981 / 0.323480 (0.064501) | 0.004807 / 0.007986 (-0.003178) | 0.003337 / 0.004328 (-0.000991) | 0.075486 / 0.004250 (0.071236) | 0.035121 / 0.037052 (-0.001931) | 0.361385 / 0.258489 (0.102896) | 0.399346 / 0.293841 (0.105505) | 0.025263 / 0.128546 (-0.103284) | 0.008571 / 0.075646 (-0.067075) | 0.081815 / 0.419271 (-0.337457) | 0.041114 / 0.043533 (-0.002418) | 0.362840 / 0.255139 (0.107701) | 0.380926 / 0.283200 (0.097727) | 0.092728 / 0.141683 (-0.048955) | 1.517647 / 1.452155 (0.065492) | 1.534914 / 1.492716 (0.042198) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / 
old (diff) | 0.199669 / 0.018006 (0.181663) | 0.399070 / 0.000490 (0.398580) | 0.002014 / 0.000200 (0.001814) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024541 / 0.037411 (-0.012870) | 0.099676 / 0.014526 (0.085151) | 0.106503 / 0.176557 (-0.070054) | 0.153755 / 0.737135 (-0.583380) | 0.108564 / 0.296338 (-0.187775) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443842 / 0.215209 (0.228633) | 4.441158 / 2.077655 (2.363503) | 2.159496 / 1.504120 (0.655376) | 1.955358 / 1.541195 (0.414163) | 1.973864 / 1.468490 (0.505374) | 0.550467 / 4.584777 (-4.034310) | 3.381831 / 3.745712 (-0.363881) | 2.561192 / 5.269862 (-2.708670) | 1.361684 / 4.565676 (-3.203992) | 0.068140 / 0.424275 (-0.356135) | 0.012005 / 0.007607 (0.004398) | 0.551921 / 0.226044 (0.325877) | 5.503591 / 2.268929 (3.234662) | 2.591609 / 55.444624 (-52.853015) | 2.246681 / 6.876477 (-4.629796) | 2.290941 / 2.142072 (0.148868) | 0.655212 / 4.805227 (-4.150015) | 0.136013 / 6.500664 (-6.364651) | 0.066995 / 0.075469 (-0.008474) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300438 / 1.841788 (-0.541350) | 13.866224 / 8.074308 (5.791916) | 13.932624 / 10.191392 (3.741232) | 0.144345 / 0.680424 (-0.536079) | 0.016623 / 0.534201 (-0.517578) | 0.357629 / 0.579283 (-0.221654) | 0.389759 / 0.434364 (-0.044605) | 0.417704 / 0.540337 (-0.122633) | 0.501358 / 1.386936 (-0.885578) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#89f775226321ba94e5bf4670a323c0fb44f5f65c \"CML watermark\")\n", "Thank you!" ]
"2023-05-05T20:22:40Z"
"2023-05-25T17:45:54Z"
"2023-05-25T08:46:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5826.diff", "html_url": "https://github.com/huggingface/datasets/pull/5826", "merged_at": "2023-05-25T08:46:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/5826.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5826" }
Accept `working_dir` as an argument to `Dataset.from_spark`. Setting a non-NFS working directory for Spark workers to materialize to will improve write performance (see the usage sketch after this record).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5826/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5826/timeline
null
null
true
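A minimal usage sketch for the `working_dir` argument described in the PR record above. The Spark session, the example DataFrame, and the local scratch path `/local_disk0/tmp` are all assumptions for illustration, not part of the PR:

```python
# Sketch only: requires pyspark and a running Spark cluster.
from pyspark.sql import SparkSession
from datasets import Dataset

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("hello",), ("world",)], schema=["text"])

# Point the workers at fast local (non-NFS) storage so that
# materializing the dataset does not bottleneck on network I/O.
ds = Dataset.from_spark(df, working_dir="/local_disk0/tmp")
print(ds[0])
```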
https://api.github.com/repos/huggingface/datasets/issues/3313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3313/comments
https://api.github.com/repos/huggingface/datasets/issues/3313/events
https://github.com/huggingface/datasets/issues/3313
1,060,933,392
I_kwDODunzps4_PI8Q
3,313
TriviaQA License Mismatch
{ "avatar_url": "https://avatars.githubusercontent.com/u/16665267?v=4", "events_url": "https://api.github.com/users/akhilkedia/events{/privacy}", "followers_url": "https://api.github.com/users/akhilkedia/followers", "following_url": "https://api.github.com/users/akhilkedia/following{/other_user}", "gists_url": "https://api.github.com/users/akhilkedia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akhilkedia", "id": 16665267, "login": "akhilkedia", "node_id": "MDQ6VXNlcjE2NjY1MjY3", "organizations_url": "https://api.github.com/users/akhilkedia/orgs", "received_events_url": "https://api.github.com/users/akhilkedia/received_events", "repos_url": "https://api.github.com/users/akhilkedia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akhilkedia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akhilkedia/subscriptions", "type": "User", "url": "https://api.github.com/users/akhilkedia" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi ! You're completely right, this must be mentioned in the dataset card.\r\nIf you're interesting in contributing, feel free to open a pull request to mention this in the `trivia_qa` dataset card in the \"Licensing Information\" section at https://github.com/huggingface/datasets/blob/master/datasets/trivia_qa/README.md" ]
"2021-11-23T08:00:15Z"
"2021-11-29T11:24:21Z"
"2021-11-29T11:24:21Z"
NONE
null
null
null
## Describe the bug The TriviaQA webpage at http://nlp.cs.washington.edu/triviaqa/ says they do not own the copyright to the data. However, Huggingface datasets at https://huggingface.co/datasets/trivia_qa mentions that the dataset is released under the Apache License. Is the license information on HuggingFace correct?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3313/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3313/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3289
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3289/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3289/comments
https://api.github.com/repos/huggingface/datasets/issues/3289/events
https://github.com/huggingface/datasets/pull/3289
1,056,323,715
PR_kwDODunzps4uqf79
3,289
Unpin markdown for build_docs now that it's fixed
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-11-17T16:22:53Z"
"2021-11-17T16:23:09Z"
"2021-11-17T16:23:08Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3289.diff", "html_url": "https://github.com/huggingface/datasets/pull/3289", "merged_at": "2021-11-17T16:23:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/3289.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3289" }
`markdown`'s bug has been fixed, so this PR reverts #3286
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3289/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3289/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1791
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1791/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1791/comments
https://api.github.com/repos/huggingface/datasets/issues/1791/events
https://github.com/huggingface/datasets/pull/1791
796,924,519
MDExOlB1bGxSZXF1ZXN0NTY0MDE5OTk3
1,791
Small fix with corrected logging of train vectors
{ "avatar_url": "https://avatars.githubusercontent.com/u/7549587?v=4", "events_url": "https://api.github.com/users/TezRomacH/events{/privacy}", "followers_url": "https://api.github.com/users/TezRomacH/followers", "following_url": "https://api.github.com/users/TezRomacH/following{/other_user}", "gists_url": "https://api.github.com/users/TezRomacH/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TezRomacH", "id": 7549587, "login": "TezRomacH", "node_id": "MDQ6VXNlcjc1NDk1ODc=", "organizations_url": "https://api.github.com/users/TezRomacH/orgs", "received_events_url": "https://api.github.com/users/TezRomacH/received_events", "repos_url": "https://api.github.com/users/TezRomacH/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TezRomacH/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TezRomacH/subscriptions", "type": "User", "url": "https://api.github.com/users/TezRomacH" }
[]
closed
false
null
[]
null
[]
"2021-01-29T14:26:06Z"
"2021-01-29T18:51:10Z"
"2021-01-29T17:05:07Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1791.diff", "html_url": "https://github.com/huggingface/datasets/pull/1791", "merged_at": "2021-01-29T17:05:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/1791.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1791" }
Now you can set `train_size` to the whole dataset size via `train_size = -1`, and the logging then reads not `Training the index with the first -1 vectors` but (for example) `Training the index with the first 16123 vectors`. The same applies when `train_size` exceeds the dataset length. Logging will be correct (see the sketch after this record).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1791/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1791/timeline
null
null
true
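To illustrate the behavior the PR body above describes, here is a minimal sketch of corrected logging logic. The function name, the logger setup, and the clamping rule are placeholders chosen for the example, not the actual `datasets` code:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def log_training_size(train_size: int, num_vectors: int) -> None:
    # train_size == -1 means "use the whole dataset"; clamp to the real
    # number of vectors so the message never reports -1 or an overshoot.
    effective = num_vectors if train_size < 0 else min(train_size, num_vectors)
    logger.info("Training the index with the first %d vectors", effective)

log_training_size(-1, 16123)     # -> "Training the index with the first 16123 vectors"
log_training_size(20000, 16123)  # -> also capped at 16123
```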
https://api.github.com/repos/huggingface/datasets/issues/826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/826/comments
https://api.github.com/repos/huggingface/datasets/issues/826/events
https://github.com/huggingface/datasets/issues/826
739,976,716
MDU6SXNzdWU3Mzk5NzY3MTY=
826
[GEM] Add E2E dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
"2020-11-10T14:50:40Z"
"2020-12-03T13:37:57Z"
"2020-12-03T13:37:57Z"
MEMBER
null
null
null
## Adding a Dataset - **Name:** E2E NLG dataset (for End-to-end natural language generation) - **Description:** a dataset for training end-to-end, data-driven natural language generation systems in the restaurant domain; the dataset consists of 5,751 dialogue-act Meaning Representations (structured data) and 8.1 reference free-text utterances per dialogue act on average - **Paper:** https://arxiv.org/pdf/1706.09254.pdf https://arxiv.org/abs/1901.07931 - **Data:** http://www.macs.hw.ac.uk/InteractionLab/E2E/#data - **Motivation:** This dataset will likely be included in the GEM shared task. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/826/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/826/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3297/comments
https://api.github.com/repos/huggingface/datasets/issues/3297/events
https://github.com/huggingface/datasets/issues/3297
1,058,263,859
I_kwDODunzps4_E9Mz
3,297
.map() cache is wrongfully reused - only happens when the mapping function is imported
{ "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eladsegal", "id": 13485709, "login": "eladsegal", "node_id": "MDQ6VXNlcjEzNDg1NzA5", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "repos_url": "https://api.github.com/users/eladsegal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "type": "User", "url": "https://api.github.com/users/eladsegal" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi ! Thanks for reporting. Indeed this is a current limitation of the usage we have of `dill` in `datasets`. I'd suggest you use your workaround for now until we find a way to fix this. Maybe functions that are not coming from a module not installed with pip should be dumped completely, rather than only taking their locations into account", "I agree. Sounds like a solution for it would be pretty dirty, even [cloudpickle](https://stackoverflow.com/a/16891169) doesn't help in this case.\r\nIn the meanwhile I think that adding a warning and the workaround somewhere in the documentation can be helpful.", "For anyone interested, I see that with `dill==0.3.6` the workaround I suggested doesn't work anymore.\r\nI opened an issue about it: https://github.com/uqfoundation/dill/issues/572.\r\n\r\n " ]
"2021-11-19T08:18:36Z"
"2023-01-30T12:40:17Z"
null
CONTRIBUTOR
null
null
null
## Describe the bug When `.map` is used with a mapping function that is imported, the cache is reused even if the mapping function has been modified. The reason for this is that `dill` that is used for creating the fingerprint [pickles imported functions by reference](https://stackoverflow.com/a/67851411). I guess it is not a widespread case, but it can still lead to unwanted results unnoticeably. ## Steps to reproduce the bug Create files `a.py` and `b.py`: ```python # a.py from datasets import load_dataset def main(): squad = load_dataset("squad") squad.map(mapping_func, batched=True) def mapping_func(examples): ID_LENGTH = 4 examples["id"] = [id_[:ID_LENGTH] for id_ in examples["id"]] return examples if __name__ == "__main__": main() ``` ```python # b.py from datasets import load_dataset from a import mapping_func def main(): squad = load_dataset("squad") squad.map(mapping_func, batched=True) if __name__ == "__main__": main() ``` Run `python b.py` twice: In the first run you will see tqdm bars showing that the data is processed, and in the second run you will see "Loading cached processed dataset at...". Now change `ID_LENGTH` to another number in order to change the mapping function, and run `python b.py` again. You'll see that `.map` loads from the cache the result of the previous mapping function. ## Expected results Run `python a.py` twice: In the first run you will see tqdm bars showing that the data is processed, and in the second run you will see "Loading cached processed dataset at...". Now change `ID_LENGTH` to another number in order to change the mapping function, and run `python a.py` again. You'll see that the dataset is being processed and that there's no reuse of the previous mapping function result. ## Workaround Put the mapping function inside a dummy class as a static method: ```python # a.py class MappingFuncClass: @staticmethod def mapping_func(examples): ID_LENGTH = 4 examples["id"] = [id_[:ID_LENGTH] for id_ in examples["id"]] return examples ``` ```python # b.py from datasets import load_dataset from a import MappingFuncClass def main(): squad = load_dataset("squad") squad.map(MappingFuncClass.mapping_func, batched=True) if __name__ == "__main__": main() ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 4.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3297/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3297/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/143
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/143/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/143/comments
https://api.github.com/repos/huggingface/datasets/issues/143/events
https://github.com/huggingface/datasets/issues/143
619,457,641
MDU6SXNzdWU2MTk0NTc2NDE=
143
ArrowTypeError in squad metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patil-suraj", "id": 27137566, "login": "patil-suraj", "node_id": "MDQ6VXNlcjI3MTM3NTY2", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "repos_url": "https://api.github.com/users/patil-suraj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "type": "User", "url": "https://api.github.com/users/patil-suraj" }
[ { "color": "25b21e", "default": false, "description": "A bug in a metric script", "id": 2067393914, "name": "metric bug", "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug" } ]
closed
false
null
[]
null
[ "There was an issue in the format, thanks.\r\nNow you can do\r\n```python3\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take first possible answer\r\n for v in squad_dset[\"validation\"]\r\n]\r\nsquad_metric.compute(predictions, squad_dset[\"validation\"])\r\n```\r\n\r\nand the expected format is \r\n```\r\nArgs:\r\n predictions: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair as given in the references (see below)\r\n - 'prediction_text': the text of the answer\r\n references: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair (see above),\r\n - 'answers': a Dict {'text': list of possible texts for the answer, as a list of strings}\r\n```" ]
"2020-05-16T12:06:37Z"
"2020-05-22T13:38:52Z"
"2020-05-22T13:36:48Z"
MEMBER
null
null
null
`squad_metric.compute` is giving the following error ``` ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type ``` This is what my predictions and references look like ``` predictions[0] # {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'} ``` ``` references[0] # {'answers': [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'} ``` These are structured as per the `squad_metric.compute` help string.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/143/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/143/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3241
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3241/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3241/comments
https://api.github.com/repos/huggingface/datasets/issues/3241/events
https://github.com/huggingface/datasets/pull/3241
1,048,461,852
PR_kwDODunzps4uRzHa
3,241
Swap descriptions of v1 and raw-v1 configs of WikiText dataset and fix metadata
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-11-09T10:54:15Z"
"2022-02-14T15:46:00Z"
"2021-11-09T13:49:28Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3241.diff", "html_url": "https://github.com/huggingface/datasets/pull/3241", "merged_at": "2021-11-09T13:49:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/3241.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3241" }
Fix #3237, fix #795.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3241/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3241/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3717
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3717/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3717/comments
https://api.github.com/repos/huggingface/datasets/issues/3717/events
https://github.com/huggingface/datasets/issues/3717
1,137,183,015
I_kwDODunzps5DyAkn
3,717
wrong condition in `Features ClassLabel encode_example`
{ "avatar_url": "https://avatars.githubusercontent.com/u/56633664?v=4", "events_url": "https://api.github.com/users/Tudyx/events{/privacy}", "followers_url": "https://api.github.com/users/Tudyx/followers", "following_url": "https://api.github.com/users/Tudyx/following{/other_user}", "gists_url": "https://api.github.com/users/Tudyx/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Tudyx", "id": 56633664, "login": "Tudyx", "node_id": "MDQ6VXNlcjU2NjMzNjY0", "organizations_url": "https://api.github.com/users/Tudyx/orgs", "received_events_url": "https://api.github.com/users/Tudyx/received_events", "repos_url": "https://api.github.com/users/Tudyx/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Tudyx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tudyx/subscriptions", "type": "User", "url": "https://api.github.com/users/Tudyx" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi @Tudyx, \r\n\r\nPlease note that in Python, the boolean NOT operator (`not`) has lower precedence than comparison operators (`<=`, `<`), thus the expression you mention is equivalent to:\r\n```python\r\n not (-1 <= example_data < self.num_classes)\r\n```\r\n\r\nAlso note that as expected, the exception is raised if:\r\n- `example_data < -1`\r\n- or `example_data >= self.num_classes`\r\n\r\nThe raise of the exception is expected when `example_data` equals 4 and `self.num_classes` equals 4 too." ]
"2022-02-14T11:44:35Z"
"2022-02-14T15:09:36Z"
"2022-02-14T15:07:43Z"
NONE
null
null
null
## Describe the bug The `encode_example` function in *features.py* seems to have a wrong condition. ```python if not -1 <= example_data < self.num_classes: raise ValueError(f"Class label {example_data:d} greater than configured num_classes {self.num_classes}") ``` ## Expected results The `not -1 <=` part changes the result of the condition. For instance, if `example_data` equals 4 and `self.num_classes` equals 4 too, `example_data < self.num_classes` gives `False` as expected. But with the `not -1 <=` part added, `not -1 <= example_data < self.num_classes` gives `True` and raises an exception (see the precedence demonstration after this record). ## Environment info - `datasets` version: 1.18.3 - Python version: 3.8.10 - PyArrow version: 7.00
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3717/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3717/timeline
null
completed
false
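A small, self-contained demonstration of the operator precedence explained in the record above: `not` binds more loosely than the chained comparison, so the condition is equivalent to `not (-1 <= example_data < self.num_classes)` and raises exactly for out-of-range labels. The variable `num_classes` stands in for `self.num_classes`:

```python
num_classes = 4  # stands in for self.num_classes

for example_data in (-2, -1, 0, 3, 4):
    in_range = -1 <= example_data < num_classes
    # `not` applies to the whole chained comparison, not to the -1:
    assert (not -1 <= example_data < num_classes) == (not in_range)
    print(example_data, "raises" if not in_range else "ok")
# Output: -2 raises, -1 ok, 0 ok, 3 ok, 4 raises, matching the maintainer's reply.
```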
https://api.github.com/repos/huggingface/datasets/issues/6121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6121/comments
https://api.github.com/repos/huggingface/datasets/issues/6121/events
https://github.com/huggingface/datasets/pull/6121
1,836,761,712
PR_kwDODunzps5XMsWd
6,121
Small typo in the code example for creating an imagefolder dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/19688994?v=4", "events_url": "https://api.github.com/users/WangXin93/events{/privacy}", "followers_url": "https://api.github.com/users/WangXin93/followers", "following_url": "https://api.github.com/users/WangXin93/following{/other_user}", "gists_url": "https://api.github.com/users/WangXin93/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/WangXin93", "id": 19688994, "login": "WangXin93", "node_id": "MDQ6VXNlcjE5Njg4OTk0", "organizations_url": "https://api.github.com/users/WangXin93/orgs", "received_events_url": "https://api.github.com/users/WangXin93/received_events", "repos_url": "https://api.github.com/users/WangXin93/repos", "site_admin": false, "starred_url": "https://api.github.com/users/WangXin93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WangXin93/subscriptions", "type": "User", "url": "https://api.github.com/users/WangXin93" }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nI found a small typo in the code example of create imagefolder dataset. It confused me a little when I first saw it.\r\n\r\nBest Regards.\r\n\r\nXin" ]
"2023-08-04T13:36:59Z"
"2023-08-04T13:45:32Z"
"2023-08-04T13:41:43Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6121.diff", "html_url": "https://github.com/huggingface/datasets/pull/6121", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6121.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6121" }
Fix a typo in the code example for loading an imagefolder dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6121/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6121/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6046/comments
https://api.github.com/repos/huggingface/datasets/issues/6046/events
https://github.com/huggingface/datasets/issues/6046
1,808,154,414
I_kwDODunzps5rxj8u
6,046
Support proxy and user-agent in fsspec calls
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues", "id": 3761482852, "name": "good second issue", "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/95092167?v=4", "events_url": "https://api.github.com/users/zutarich/events{/privacy}", "followers_url": "https://api.github.com/users/zutarich/followers", "following_url": "https://api.github.com/users/zutarich/following{/other_user}", "gists_url": "https://api.github.com/users/zutarich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zutarich", "id": 95092167, "login": "zutarich", "node_id": "U_kgDOBar9xw", "organizations_url": "https://api.github.com/users/zutarich/orgs", "received_events_url": "https://api.github.com/users/zutarich/received_events", "repos_url": "https://api.github.com/users/zutarich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zutarich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zutarich/subscriptions", "type": "User", "url": "https://api.github.com/users/zutarich" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/95092167?v=4", "events_url": "https://api.github.com/users/zutarich/events{/privacy}", "followers_url": "https://api.github.com/users/zutarich/followers", "following_url": "https://api.github.com/users/zutarich/following{/other_user}", "gists_url": "https://api.github.com/users/zutarich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zutarich", "id": 95092167, "login": "zutarich", "node_id": "U_kgDOBar9xw", "organizations_url": "https://api.github.com/users/zutarich/orgs", "received_events_url": "https://api.github.com/users/zutarich/received_events", "repos_url": "https://api.github.com/users/zutarich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zutarich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zutarich/subscriptions", "type": "User", "url": "https://api.github.com/users/zutarich" } ]
null
[ "hii @lhoestq can you assign this issue to me?\r\n", "You can reply \"#self-assign\" to this issue to automatically get assigned to it :)\r\nLet me know if you have any questions or if I can help", "#2289 ", "Actually i am quite new to figure it out how everything goes and done \r\n\r\n> You can reply \"#self-assign\" to this issue to automatically get assigned to it :)\r\n> Let me know if you have any questions or if I can help\r\n\r\nwhen i wrote #self-assign it automatically got converted to some number is it correct or i have done it some wrong way, I am quite new to open source thus wanna try to learn and explore it", "#2289 #self-assign ", "Ah yea github tries to replace the #self-assign with an issue link. I guess you can try to copy-paste instead to see if it works\r\n\r\nAnyway let me assign you manually", "thanks a lot @lhoestq ! though i have a very lil idea of the issue, i am new. as i said before, but gonna try my best shot to do it.\r\ncan you please suggest some tips or anything from your side, how basically we approach it will be really helpfull.\r\nWill try my best!", "The HfFileSystem from the `huggingface_hub` package can already read the HTTP_PROXY and HTTPS_PROXY environment variables. So the remaining thing missing is the `user_agent` that the user may include in a `DownloadConfig` object. The user agent can be used for regular http calls but also calls to the HfFileSystem.\r\n\r\n- for http, the `user_agent` isn't passed from `DownloadConfig` to `get_datasets_user_agent` in `_prepare_single_hop_path_and_storage_options` in `streaming_download_manager.py` so we need to include it\r\n- for HfFileSystem I think it requires a PR in https://github.com/huggingface/huggingface_hub to include it in the `HfFileSystem.__init__`" ]
"2023-07-17T16:39:26Z"
"2023-10-09T13:49:14Z"
null
MEMBER
null
null
null
Since we switched to the new HfFileSystem, we no longer apply the user's proxy and user-agent. Using the HTTP_PROXY and HTTPS_PROXY environment variables still works, though, since we use aiohttp to call the HF Hub (see the sketch after this record). This can be implemented in `_prepare_single_hop_path_and_storage_options`. Ideally, though, the `HfFileSystem` could support passing at least the proxies.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6046/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6046/timeline
null
null
false
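While the feature request above is open, the issue body notes that the proxy environment variables already work because aiohttp is used for the HF Hub calls. A sketch of that workaround, with a placeholder proxy URL:

```python
import os

# Per the issue body, these environment variables are honored
# for the aiohttp-based Hub calls made while streaming.
os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"   # placeholder
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"  # placeholder

from datasets import load_dataset

ds = load_dataset("squad", split="train", streaming=True)
print(next(iter(ds)))
```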
https://api.github.com/repos/huggingface/datasets/issues/5454
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5454/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5454/comments
https://api.github.com/repos/huggingface/datasets/issues/5454/events
https://github.com/huggingface/datasets/issues/5454
1,552,890,419
I_kwDODunzps5cjzoz
5,454
Save and resume the state of a DataLoader
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
open
false
null
[]
null
[ "Something that'd be nice to have is \"manual update of state\". One of the learning from training LLMs is the ability to skip some batches whenever we notice huge spike might be handy.", "Your outline spec is very sound and clear, @lhoestq - thank you!\r\n\r\n@thomasw21, indeed that would be a wonderful extra feature. In Megatron-Deepspeed we manually drained the dataloader for the range we wanted. I wasn't very satisfied with the way we did it, since its behavior would change if you were to do multiple range skips. I think it should remember all the ranges it skipped and not just skip the last range - since otherwise the data is inconsistent (but we probably should discuss this in a separate issue not to derail this much bigger one)." ]
"2023-01-23T10:58:54Z"
"2023-01-24T01:45:48Z"
null
MEMBER
null
null
null
It would be nice, when using `datasets` with a PyTorch DataLoader, to be able to resume training from a DataLoader state (e.g. to resume a training run that crashed). What I have in mind (but lmk if you have other ideas or comments): For map-style datasets, this requires having a PyTorch Sampler state that can be saved and reloaded per node and worker. For iterable datasets, this requires saving the state of the dataset iterator, which includes: - the current shard idx and row position in the current shard - the epoch number - the rng state - the shuffle buffer (one possible shape for this state is sketched after this record) Right now you can already resume the data loading of an iterable dataset by using `IterableDataset.skip`, but it takes a lot of time because it re-iterates over all the past data until it reaches the resuming point. cc @stas00 @sgugger
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/5454/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5454/timeline
null
null
false
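To make the outline in the issue above concrete, here is one possible shape for the iterator state it enumerates. This is an illustrative sketch; the class name and fields are assumptions, not the interface `datasets` ships:

```python
import random
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class IterableDatasetState:
    # Position inside the data source
    shard_idx: int = 0
    row_in_shard: int = 0
    # Reproducibility across epochs and shuffling
    epoch: int = 0
    rng_state: Any = None                      # e.g. random.getstate()
    shuffle_buffer: List[Any] = field(default_factory=list)

    def state_dict(self) -> dict:
        # A plain dict that can be saved alongside the model checkpoint.
        return dict(self.__dict__)

state = IterableDatasetState(shard_idx=3, row_in_shard=120, epoch=1,
                             rng_state=random.getstate())
checkpoint = state.state_dict()  # persist with the training checkpoint
```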
https://api.github.com/repos/huggingface/datasets/issues/5644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5644/comments
https://api.github.com/repos/huggingface/datasets/issues/5644/events
https://github.com/huggingface/datasets/pull/5644
1,626,204,046
PR_kwDODunzps5MJHUi
5,644
Allow direct cast from binary to Audio/Image
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008337 / 0.011353 (-0.003016) | 0.005588 / 0.011008 (-0.005421) | 0.110259 / 0.038508 (0.071751) | 0.038928 / 0.023109 (0.015819) | 0.350441 / 0.275898 (0.074543) | 0.378473 / 0.323480 (0.054993) | 0.006369 / 0.007986 (-0.001616) | 0.005730 / 0.004328 (0.001401) | 0.083042 / 0.004250 (0.078792) | 0.048686 / 0.037052 (0.011634) | 0.367561 / 0.258489 (0.109072) | 0.398073 / 0.293841 (0.104232) | 0.043247 / 0.128546 (-0.085299) | 0.013862 / 0.075646 (-0.061785) | 0.386745 / 0.419271 (-0.032527) | 0.060107 / 0.043533 (0.016574) | 0.345450 / 0.255139 (0.090311) | 0.371269 / 0.283200 (0.088069) | 0.117508 / 0.141683 (-0.024175) | 1.689345 / 1.452155 (0.237191) | 1.777119 / 1.492716 (0.284402) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248248 / 0.018006 (0.230242) | 0.505200 / 0.000490 (0.504710) | 0.015354 / 0.000200 (0.015155) | 0.000794 / 0.000054 (0.000740) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030179 / 0.037411 (-0.007232) | 0.118583 / 0.014526 (0.104057) | 0.131546 / 0.176557 (-0.045010) | 0.196173 / 0.737135 (-0.540962) | 0.140532 / 0.296338 (-0.155807) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.470733 / 0.215209 (0.255524) | 4.758868 / 2.077655 (2.681213) | 2.246731 
/ 1.504120 (0.742611) | 1.995232 / 1.541195 (0.454037) | 2.057596 / 1.468490 (0.589106) | 0.819227 / 4.584777 (-3.765550) | 4.472093 / 3.745712 (0.726381) | 2.428154 / 5.269862 (-2.841708) | 1.748023 / 4.565676 (-2.817654) | 0.101965 / 0.424275 (-0.322310) | 0.014706 / 0.007607 (0.007098) | 0.600593 / 0.226044 (0.374548) | 5.869565 / 2.268929 (3.600637) | 2.764890 / 55.444624 (-52.679735) | 2.332112 / 6.876477 (-4.544364) | 2.486190 / 2.142072 (0.344118) | 0.979123 / 4.805227 (-3.826104) | 0.199543 / 6.500664 (-6.301121) | 0.075906 / 0.075469 (0.000436) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.397694 / 1.841788 (-0.444094) | 16.910500 / 8.074308 (8.836192) | 16.174131 / 10.191392 (5.982739) | 0.173975 / 0.680424 (-0.506449) | 0.021403 / 0.534201 (-0.512798) | 0.496187 / 0.579283 (-0.083096) | 0.487369 / 0.434364 (0.053005) | 0.565924 / 0.540337 (0.025587) | 0.684965 / 1.386936 (-0.701971) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008253 / 0.011353 (-0.003100) | 0.005745 / 0.011008 (-0.005263) | 0.085848 / 0.038508 (0.047340) | 0.038753 / 0.023109 (0.015644) | 0.401278 / 0.275898 (0.125379) | 0.433132 / 0.323480 (0.109652) | 0.006112 / 0.007986 (-0.001874) | 0.005973 / 0.004328 (0.001644) | 0.085339 / 0.004250 (0.081088) | 0.053297 / 0.037052 (0.016244) | 0.400265 / 0.258489 (0.141776) | 0.455155 / 0.293841 (0.161314) | 0.043116 / 0.128546 (-0.085430) | 0.013957 / 0.075646 (-0.061689) | 0.099507 / 0.419271 (-0.319764) | 0.058858 / 0.043533 (0.015325) | 0.398030 / 0.255139 (0.142891) | 0.418171 / 0.283200 (0.134971) | 0.114392 / 0.141683 (-0.027291) | 1.683102 / 1.452155 (0.230947) | 1.801427 / 1.492716 (0.308711) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242271 / 0.018006 (0.224265) | 0.494920 / 0.000490 (0.494430) | 0.007328 / 0.000200 (0.007128) | 0.000144 / 0.000054 (0.000090) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034061 / 0.037411 (-0.003351) | 0.146417 / 0.014526 (0.131891) | 0.161079 / 0.176557 (-0.015477) | 0.213999 / 0.737135 (-0.523137) | 0.166704 / 0.296338 (-0.129634) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.491214 / 0.215209 (0.276005) | 4.846946 / 2.077655 (2.769291) | 2.352595 / 1.504120 (0.848475) | 2.114055 / 1.541195 (0.572860) | 2.213537 / 1.468490 (0.745047) | 0.799625 / 4.584777 (-3.785152) | 4.440519 / 3.745712 (0.694807) | 4.476103 / 5.269862 (-0.793758) | 2.249384 / 4.565676 (-2.316292) | 0.098807 / 0.424275 (-0.325468) | 0.014463 / 0.007607 (0.006856) | 0.611793 / 0.226044 (0.385748) | 6.045710 / 2.268929 (3.776782) | 2.865957 / 55.444624 (-52.578667) | 2.454052 / 6.876477 (-4.422425) | 2.606153 / 2.142072 (0.464080) | 0.969057 / 4.805227 (-3.836170) | 0.198499 / 6.500664 (-6.302166) | 0.077012 / 0.075469 (0.001543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.497020 / 1.841788 (-0.344767) | 17.834277 / 8.074308 (9.759969) | 16.413792 / 10.191392 (6.222400) | 0.201979 / 0.680424 (-0.478445) | 0.020627 / 0.534201 (-0.513574) | 0.499767 / 0.579283 (-0.079516) | 0.496982 / 0.434364 (0.062618) | 0.579554 / 0.540337 (0.039216) | 0.693287 / 1.386936 (-0.693649) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a1a3fee942ae159ff6cfe6a23b343605e7e12f55 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007461 / 0.011353 (-0.003892) | 0.005341 / 0.011008 (-0.005668) | 0.099252 / 0.038508 (0.060744) | 0.034723 / 0.023109 (0.011614) | 0.300980 / 0.275898 (0.025082) | 0.353860 / 0.323480 (0.030380) | 0.006100 / 0.007986 (-0.001885) | 0.004149 / 0.004328 (-0.000180) | 0.074765 / 0.004250 (0.070514) | 0.052226 / 0.037052 (0.015174) | 0.305098 / 0.258489 (0.046609) | 0.357445 / 0.293841 (0.063604) | 0.036129 / 0.128546 (-0.092417) | 0.012482 / 0.075646 (-0.063165) | 0.333321 / 0.419271 (-0.085951) | 0.050489 / 0.043533 (0.006956) | 0.294728 / 0.255139 (0.039589) | 0.322722 / 0.283200 (0.039523) | 0.101226 / 0.141683 (-0.040456) | 1.436787 / 1.452155 (-0.015367) | 1.515784 / 1.492716 (0.023068) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291836 / 0.018006 (0.273830) | 0.550735 / 0.000490 (0.550245) | 0.003828 / 0.000200 (0.003628) | 0.000113 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028490 / 0.037411 (-0.008922) | 0.109543 / 0.014526 (0.095017) | 0.119451 / 0.176557 (-0.057105) | 0.176721 / 0.737135 (-0.560415) | 0.126711 / 0.296338 (-0.169628) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418863 / 0.215209 (0.203654) | 4.179167 / 2.077655 (2.101512) | 1.965126 / 1.504120 (0.461006) | 1.775544 / 1.541195 (0.234349) | 1.882667 / 1.468490 (0.414177) | 0.709201 / 4.584777 (-3.875576) | 3.754780 / 3.745712 (0.009068) | 2.175324 / 5.269862 (-3.094538) | 1.477454 / 4.565676 (-3.088223) | 0.085527 / 0.424275 (-0.338748) | 0.012685 / 0.007607 (0.005078) | 0.514276 / 0.226044 (0.288231) | 5.140518 / 2.268929 (2.871589) | 2.436011 / 55.444624 (-53.008614) | 2.114355 / 6.876477 (-4.762122) | 2.278893 / 2.142072 (0.136821) | 0.847825 / 4.805227 (-3.957402) | 0.169579 / 6.500664 (-6.331086) | 0.065306 / 0.075469 (-0.010163) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190376 / 1.841788 (-0.651411) | 14.756581 / 8.074308 (6.682272) | 14.622610 / 10.191392 (4.431218) | 0.168186 / 0.680424 (-0.512238) | 0.017527 / 0.534201 (-0.516674) | 0.427808 / 0.579283 (-0.151475) | 0.437278 / 0.434364 (0.002914) | 0.509242 / 0.540337 
(-0.031095) | 0.602500 / 1.386936 (-0.784436) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007331 / 0.011353 (-0.004022) | 0.005703 / 0.011008 (-0.005305) | 0.074992 / 0.038508 (0.036484) | 0.034069 / 0.023109 (0.010960) | 0.343513 / 0.275898 (0.067615) | 0.369061 / 0.323480 (0.045582) | 0.006034 / 0.007986 (-0.001951) | 0.004344 / 0.004328 (0.000016) | 0.074678 / 0.004250 (0.070428) | 0.052262 / 0.037052 (0.015210) | 0.364758 / 0.258489 (0.106269) | 0.401130 / 0.293841 (0.107289) | 0.037635 / 0.128546 (-0.090912) | 0.012599 / 0.075646 (-0.063047) | 0.086935 / 0.419271 (-0.332337) | 0.058161 / 0.043533 (0.014628) | 0.338727 / 0.255139 (0.083589) | 0.355957 / 0.283200 (0.072757) | 0.111607 / 0.141683 (-0.030076) | 1.454357 / 1.452155 (0.002202) | 1.591529 / 1.492716 (0.098813) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284379 / 0.018006 (0.266373) | 0.550720 / 0.000490 (0.550230) | 0.002868 / 0.000200 (0.002668) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028876 / 0.037411 (-0.008535) | 0.110892 / 0.014526 (0.096366) | 0.122519 / 0.176557 (-0.054038) | 0.169774 / 0.737135 (-0.567361) | 0.129381 / 0.296338 (-0.166957) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429181 / 0.215209 (0.213972) | 4.251016 / 2.077655 (2.173361) | 2.056778 / 1.504120 (0.552658) | 1.860458 / 1.541195 (0.319264) | 1.958923 / 
1.468490 (0.490432) | 0.712667 / 4.584777 (-3.872110) | 3.856910 / 3.745712 (0.111198) | 3.374535 / 5.269862 (-1.895327) | 1.846744 / 4.565676 (-2.718932) | 0.087238 / 0.424275 (-0.337037) | 0.012718 / 0.007607 (0.005111) | 0.524654 / 0.226044 (0.298609) | 5.209756 / 2.268929 (2.940827) | 2.494882 / 55.444624 (-52.949743) | 2.201150 / 6.876477 (-4.675327) | 2.274189 / 2.142072 (0.132117) | 0.844728 / 4.805227 (-3.960499) | 0.167467 / 6.500664 (-6.333197) | 0.064018 / 0.075469 (-0.011451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273284 / 1.841788 (-0.568503) | 15.104413 / 8.074308 (7.030105) | 15.134025 / 10.191392 (4.942633) | 0.147568 / 0.680424 (-0.532856) | 0.017429 / 0.534201 (-0.516772) | 0.422052 / 0.579283 (-0.157231) | 0.425786 / 0.434364 (-0.008578) | 0.491753 / 0.540337 (-0.048584) | 0.585091 / 1.386936 (-0.801845) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f3d26e74898e0a9dc0d78490104e2e173269ef5b \"CML watermark\")\n" ]
"2023-03-15T20:02:54Z"
"2023-03-16T14:20:44Z"
"2023-03-16T14:12:55Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5644.diff", "html_url": "https://github.com/huggingface/datasets/pull/5644", "merged_at": "2023-03-16T14:12:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/5644.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5644" }
To address https://github.com/huggingface/datasets/discussions/5593.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5644/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5644/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2894/comments
https://api.github.com/repos/huggingface/datasets/issues/2894/events
https://github.com/huggingface/datasets/pull/2894
993,375,654
MDExOlB1bGxSZXF1ZXN0NzMxNTcxODc5
2,894
Fix COUNTER dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-09-10T16:07:29Z"
"2021-09-10T16:27:45Z"
"2021-09-10T16:27:44Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2894.diff", "html_url": "https://github.com/huggingface/datasets/pull/2894", "merged_at": "2021-09-10T16:27:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/2894.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2894" }
Fix the filename that was generating a `FileNotFoundError`. Related to #2866. CC: @severo.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2894/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2894/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5300/comments
https://api.github.com/repos/huggingface/datasets/issues/5300/events
https://github.com/huggingface/datasets/pull/5300
1,464,697,136
PR_kwDODunzps5Dt3uK
5,300
Use same `num_proc` for dataset download and generation
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I noticed this bug the other day and was going to look into it! \"Where are these processes coming from?\" ;-)" ]
"2022-11-25T15:37:42Z"
"2022-12-07T12:55:39Z"
"2022-12-07T12:52:51Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5300.diff", "html_url": "https://github.com/huggingface/datasets/pull/5300", "merged_at": "2022-12-07T12:52:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/5300.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5300" }
Use the same `num_proc` value for data download and generation. Additionally, do not set `num_proc` to 16 in `DownloadManager` by default (`num_proc` now has to be specified explicitly).
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5300/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5300/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1917/comments
https://api.github.com/repos/huggingface/datasets/issues/1917/events
https://github.com/huggingface/datasets/issues/1917
812,390,178
MDU6SXNzdWU4MTIzOTAxNzg=
1,917
UnicodeDecodeError: Windows 10 machine
{ "avatar_url": "https://avatars.githubusercontent.com/u/900951?v=4", "events_url": "https://api.github.com/users/yosiasz/events{/privacy}", "followers_url": "https://api.github.com/users/yosiasz/followers", "following_url": "https://api.github.com/users/yosiasz/following{/other_user}", "gists_url": "https://api.github.com/users/yosiasz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yosiasz", "id": 900951, "login": "yosiasz", "node_id": "MDQ6VXNlcjkwMDk1MQ==", "organizations_url": "https://api.github.com/users/yosiasz/orgs", "received_events_url": "https://api.github.com/users/yosiasz/received_events", "repos_url": "https://api.github.com/users/yosiasz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yosiasz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yosiasz/subscriptions", "type": "User", "url": "https://api.github.com/users/yosiasz" }
[]
closed
false
null
[]
null
[ "upgraded to php 3.9.2 and it works!" ]
"2021-02-19T22:13:05Z"
"2021-02-19T22:41:11Z"
"2021-02-19T22:40:28Z"
NONE
null
null
null
Windows 10, Python 3.6.8. When running ``` import datasets oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am") print(oscar_am["train"][0]) ``` I get the following error: ``` File "C:\PYTHON\3.6.8\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 58: character maps to <undefined> ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1917/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1917/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3818
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3818/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3818/comments
https://api.github.com/repos/huggingface/datasets/issues/3818/events
https://github.com/huggingface/datasets/issues/3818
1,158,788,545
I_kwDODunzps5FEbXB
3,818
Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI
{ "avatar_url": "https://avatars.githubusercontent.com/u/6901031?v=4", "events_url": "https://api.github.com/users/lmvasque/events{/privacy}", "followers_url": "https://api.github.com/users/lmvasque/followers", "following_url": "https://api.github.com/users/lmvasque/following{/other_user}", "gists_url": "https://api.github.com/users/lmvasque/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lmvasque", "id": 6901031, "login": "lmvasque", "node_id": "MDQ6VXNlcjY5MDEwMzE=", "organizations_url": "https://api.github.com/users/lmvasque/orgs", "received_events_url": "https://api.github.com/users/lmvasque/received_events", "repos_url": "https://api.github.com/users/lmvasque/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lmvasque/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lmvasque/subscriptions", "type": "User", "url": "https://api.github.com/users/lmvasque" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi, thanks for reporting! We can add a `sources: datasets.Value(\"string\")` feature to the `Features` dict in the `SARI` script to fix this. Would you be interested in submitting a PR?", "Hi Mario,\r\n\r\nThanks for your message. I did try to add `sources` into the `Features` dict using a script for the metric:\r\n```\r\n features=datasets.Features(\r\n {\r\n \"sources\": datasets.Value(\"string\", id=\"sequence\"),\r\n \"predictions\": datasets.Value(\"string\", id=\"sequence\"),\r\n \"references\": datasets.Sequence(datasets.Value(\"string\", id=\"sequence\"), id=\"references\"),\r\n }\r\n ),\r\n```\r\n\r\nBut that only avoids a failure in `encode_batch` in the `add_batch` method:\r\n```\r\n batch = {\"predictions\": predictions, \"references\": references}\r\n batch = self.info.features.encode_batch(batch)\r\n```\r\n\r\nThe real problem is that `add_batch()`, `add()` and `compute()` does not receive a `sources` param:\r\n```\r\ndef add_batch(self, *, predictions=None, references=None):\r\ndef add(self, *, prediction=None, reference=None):\r\ndef compute(self, *, predictions=None, references=None, **kwargs)\r\n```\r\n\r\nAnd then, it fails:\r\n`TypeError: add_batch() got an unexpected keyword argument sources`\r\n\r\nI need this for adding any metric based on SARI or alike, not only for sari.py :)\r\n\r\nLet me know if I understood correctly the proposed solution.\r\n", "The `Metric` class has been modified recently to support this use-case, but the `add_batch` + `compute` pattern still doesn't work correctly. I'll open a PR." ]
"2022-03-03T18:57:54Z"
"2022-03-04T18:04:21Z"
"2022-03-04T18:04:21Z"
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datasets/metric.py) do not work with the [SARI](https://github.com/huggingface/datasets/blob/master/metrics/sari/sari.py) metric. This metric relies not only on the predictions and references, but also on the input. For example, when the `add_batch` method is used, the `compute()` method fails: ``` metric = load_metric("sari") metric.add_batch( predictions=["About 95 you now get in ."], references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]]) metric.compute() > TypeError: _compute() missing 1 required positional argument: 'sources' ``` Therefore, the `compute()` method can only be used standalone: ``` metric = load_metric("sari") result = metric.compute( sources=["About 95 species are currently accepted ."], predictions=["About 95 you now get in ."], references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]]) > {'sari': 26.953601953601954} ``` **Describe the solution you'd like** Support for an additional parameter `sources` in the `add_batch` and `add` methods of the `Metric` class: ``` add_batch(*, sources=None, predictions=None, references=None, **kwargs) add(*, sources=None, predictions=None, references=None, **kwargs) compute() ``` **Describe alternatives you've considered** I've tried to override `add_batch` and `add`; however, these are highly dependent on the `Metric` class. We could also write a simple function that computes the scores for a list of sentences, but then we would lose the functionality of the original [add](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add) and [add_batch](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add_batch) methods. **Additional context** These methods are used in the transformers [PyTorch examples](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization_no_trainer.py).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3818/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3818/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1632/comments
https://api.github.com/repos/huggingface/datasets/issues/1632/events
https://github.com/huggingface/datasets/issues/1632
774,388,625
MDU6SXNzdWU3NzQzODg2MjU=
1,632
SICK dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
"2020-12-24T12:40:14Z"
"2021-02-05T15:49:25Z"
"2021-02-05T15:49:25Z"
CONTRIBUTOR
null
null
null
Hi, it would be great to have this dataset included. I might be missing something, but I could not find it in the list of already included datasets. Thank you. ## Adding a Dataset - **Name:** SICK - **Description:** SICK consists of about 10,000 English sentence pairs that include many examples of lexical, syntactic, and semantic phenomena. - **Paper:** https://www.aclweb.org/anthology/L14-1314/ - **Data:** http://marcobaroni.org/composes/sick.html - **Motivation:** This dataset is well known in the NLP community and is used for recognizing entailment between sentences. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1632/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1632/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1181
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1181/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1181/comments
https://api.github.com/repos/huggingface/datasets/issues/1181/events
https://github.com/huggingface/datasets/pull/1181
757,791,992
MDExOlB1bGxSZXF1ZXN0NTMzMTAwNjYz
1,181
Added emotion detection in Arabic dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/28743265?v=4", "events_url": "https://api.github.com/users/abdulelahsm/events{/privacy}", "followers_url": "https://api.github.com/users/abdulelahsm/followers", "following_url": "https://api.github.com/users/abdulelahsm/following{/other_user}", "gists_url": "https://api.github.com/users/abdulelahsm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abdulelahsm", "id": 28743265, "login": "abdulelahsm", "node_id": "MDQ6VXNlcjI4NzQzMjY1", "organizations_url": "https://api.github.com/users/abdulelahsm/orgs", "received_events_url": "https://api.github.com/users/abdulelahsm/received_events", "repos_url": "https://api.github.com/users/abdulelahsm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abdulelahsm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abdulelahsm/subscriptions", "type": "User", "url": "https://api.github.com/users/abdulelahsm" }
[]
closed
false
null
[]
null
[ "Hi @abdulelahsm did you manage to fix your issue ?\r\nFeel free to ping me if you have questions or if you're ready for a review", "@lhoestq fixed it! ready to merge. I hope haha", "merging since the CI is fixed on master" ]
"2020-12-05T22:08:46Z"
"2020-12-21T09:53:51Z"
"2020-12-21T09:53:51Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1181.diff", "html_url": "https://github.com/huggingface/datasets/pull/1181", "merged_at": "2020-12-21T09:53:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/1181.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1181" }
Dataset for emotion detection in Arabic text. More info: https://github.com/AmrMehasseb/Emotional-Tone
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1181/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1181/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6510
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6510/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6510/comments
https://api.github.com/repos/huggingface/datasets/issues/6510/events
https://github.com/huggingface/datasets/pull/6510
2,046,928,742
PR_kwDODunzps5iRyiV
6,510
Replace `list_files_info` with `list_repo_tree` in `push_to_hub`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6510). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "CI errors are unrelated to the changes, so I'm merging.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005161 / 0.011353 (-0.006192) | 0.003494 / 0.011008 (-0.007515) | 0.062601 / 0.038508 (0.024093) | 0.052876 / 0.023109 (0.029767) | 0.255595 / 0.275898 (-0.020303) | 0.283108 / 0.323480 (-0.040371) | 0.003856 / 0.007986 (-0.004130) | 0.002686 / 0.004328 (-0.001642) | 0.048604 / 0.004250 (0.044353) | 0.037886 / 0.037052 (0.000834) | 0.252902 / 0.258489 (-0.005587) | 0.286906 / 0.293841 (-0.006935) | 0.028570 / 0.128546 (-0.099976) | 0.010684 / 0.075646 (-0.064962) | 0.208154 / 0.419271 (-0.211118) | 0.036169 / 0.043533 (-0.007364) | 0.276026 / 0.255139 (0.020887) | 0.272274 / 0.283200 (-0.010925) | 0.017690 / 0.141683 (-0.123993) | 1.202400 / 1.452155 (-0.249755) | 1.231223 / 1.492716 (-0.261494) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095229 / 0.018006 (0.077222) | 0.302205 / 0.000490 (0.301716) | 0.000226 / 0.000200 (0.000026) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018877 / 0.037411 (-0.018534) | 0.062286 / 0.014526 (0.047760) | 0.075191 / 0.176557 (-0.101366) | 0.121419 / 0.737135 (-0.615716) | 0.075641 / 0.296338 (-0.220697) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled 
read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282914 / 0.215209 (0.067705) | 2.769156 / 2.077655 (0.691501) | 1.480219 / 1.504120 (-0.023901) | 1.355742 / 1.541195 (-0.185453) | 1.399740 / 1.468490 (-0.068750) | 0.556365 / 4.584777 (-4.028412) | 2.399679 / 3.745712 (-1.346033) | 2.850510 / 5.269862 (-2.419351) | 1.781428 / 4.565676 (-2.784249) | 0.063045 / 0.424275 (-0.361230) | 0.004931 / 0.007607 (-0.002676) | 0.343743 / 0.226044 (0.117698) | 3.374907 / 2.268929 (1.105978) | 1.857774 / 55.444624 (-53.586851) | 1.577154 / 6.876477 (-5.299323) | 1.626597 / 2.142072 (-0.515475) | 0.653991 / 4.805227 (-4.151236) | 0.121306 / 6.500664 (-6.379358) | 0.042131 / 0.075469 (-0.033339) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948826 / 1.841788 (-0.892962) | 11.922497 / 8.074308 (3.848188) | 10.592334 / 10.191392 (0.400942) | 0.129145 / 0.680424 (-0.551279) | 0.014652 / 0.534201 (-0.519549) | 0.286074 / 0.579283 (-0.293210) | 0.265338 / 0.434364 (-0.169026) | 0.346872 / 0.540337 (-0.193466) | 0.450480 / 1.386936 (-0.936456) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005305 / 0.011353 (-0.006048) | 0.003583 / 0.011008 (-0.007426) | 0.049855 / 0.038508 (0.011347) | 0.052882 / 0.023109 (0.029773) | 0.268429 / 0.275898 (-0.007469) | 0.293375 / 0.323480 (-0.030105) | 0.004052 / 0.007986 (-0.003934) | 0.002685 / 0.004328 (-0.001644) | 0.049206 / 0.004250 (0.044955) | 0.040187 / 0.037052 (0.003135) | 0.270112 / 0.258489 (0.011623) | 0.306380 / 0.293841 (0.012539) | 0.029161 / 0.128546 (-0.099386) | 0.010948 / 0.075646 (-0.064698) | 0.057721 / 0.419271 (-0.361550) | 0.032628 / 0.043533 (-0.010905) | 0.267458 / 0.255139 (0.012319) | 0.291905 / 0.283200 (0.008705) | 0.018096 / 0.141683 (-0.123587) | 1.112744 / 1.452155 (-0.339410) | 1.161962 / 1.492716 (-0.330754) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | 
get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097449 / 0.018006 (0.079443) | 0.304270 / 0.000490 (0.303780) | 0.000235 / 0.000200 (0.000035) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023550 / 0.037411 (-0.013861) | 0.078246 / 0.014526 (0.063720) | 0.091229 / 0.176557 (-0.085327) | 0.130624 / 0.737135 (-0.606511) | 0.092767 / 0.296338 (-0.203571) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284962 / 0.215209 (0.069753) | 2.761090 / 2.077655 (0.683435) | 1.545409 / 1.504120 (0.041289) | 1.424573 / 1.541195 (-0.116622) | 1.438869 / 1.468490 (-0.029621) | 0.571281 / 4.584777 (-4.013496) | 2.419493 / 3.745712 (-1.326219) | 2.802611 / 5.269862 (-2.467251) | 1.749880 / 4.565676 (-2.815796) | 0.062566 / 0.424275 (-0.361709) | 0.005243 / 0.007607 (-0.002364) | 0.344653 / 0.226044 (0.118608) | 3.367488 / 2.268929 (1.098559) | 1.925871 / 55.444624 (-53.518754) | 1.624258 / 6.876477 (-5.252219) | 1.663742 / 2.142072 (-0.478330) | 0.634553 / 4.805227 (-4.170675) | 0.116745 / 6.500664 (-6.383919) | 0.041734 / 0.075469 (-0.033735) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006808 / 1.841788 (-0.834980) | 12.499711 / 8.074308 (4.425403) | 10.956260 / 10.191392 (0.764868) | 0.132393 / 0.680424 (-0.548031) | 0.015924 / 0.534201 (-0.518277) | 0.289837 / 0.579283 (-0.289446) | 0.281565 / 0.434364 (-0.152799) | 0.337393 / 0.540337 (-0.202945) | 0.560385 / 1.386936 (-0.826551) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3f699ab27ef2c0c23dc3a514b5bb155485ff6913 \"CML watermark\")\n" ]
"2023-12-18T15:34:19Z"
"2023-12-18T15:39:22Z"
null
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6510.diff", "html_url": "https://github.com/huggingface/datasets/pull/6510", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6510.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6510" }
Starting from `huggingface_hub` 0.20.0, `list_files_info` will be deprecated in favor of `list_repo_tree` (see https://github.com/huggingface/huggingface_hub/pull/1910)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6510/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6510/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2070/comments
https://api.github.com/repos/huggingface/datasets/issues/2070/events
https://github.com/huggingface/datasets/issues/2070
833,799,035
MDU6SXNzdWU4MzM3OTkwMzU=
2,070
ArrowInvalid issue for squad v2 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/29818977?v=4", "events_url": "https://api.github.com/users/MichaelYxWang/events{/privacy}", "followers_url": "https://api.github.com/users/MichaelYxWang/followers", "following_url": "https://api.github.com/users/MichaelYxWang/following{/other_user}", "gists_url": "https://api.github.com/users/MichaelYxWang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MichaelYxWang", "id": 29818977, "login": "MichaelYxWang", "node_id": "MDQ6VXNlcjI5ODE4OTc3", "organizations_url": "https://api.github.com/users/MichaelYxWang/orgs", "received_events_url": "https://api.github.com/users/MichaelYxWang/received_events", "repos_url": "https://api.github.com/users/MichaelYxWang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MichaelYxWang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichaelYxWang/subscriptions", "type": "User", "url": "https://api.github.com/users/MichaelYxWang" }
[]
closed
false
null
[]
null
[ "Hi ! This error happens when you use `map` in batched mode and then your function doesn't return the same number of values per column.\r\n\r\nIndeed since you're using `map` in batched mode, `prepare_validation_features` must take a batch as input (i.e. a dictionary of multiple rows of the dataset), and return a batch.\r\n\r\nHowever it seems like `tokenized_examples` doesn't have the same number of elements in each field. One field seems to have `1180` elements while `candidate_attention_mask` only has `1178`." ]
"2021-03-17T13:51:49Z"
"2021-08-04T17:57:16Z"
"2021-08-04T17:57:16Z"
NONE
null
null
null
Hello, I am using the official Hugging Face question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb). In the prepare_validation_features function, I made some modifications to tokenize a new set of questions with the original contexts and save them in three different lists called candidate_input_ids, candidate_attention_mask and candidate_token_type_ids. When I try to run the next cell for dataset.map, I get the following error: `ArrowInvalid: Column 1 named candidate_attention_mask expected length 1180 but got length 1178` My code is as follows: ``` def generate_candidate_questions(examples): val_questions = examples["question"] candidate_questions = random.sample(datasets["train"]["question"], len(val_questions)) candidate_questions = [x[:max_length] for x in candidate_questions] return candidate_questions def prepare_validation_features(examples, use_mixing=False): pad_on_right = tokenizer.padding_side == "right" tokenized_examples = tokenizer( examples["question" if pad_on_right else "context"], examples["context" if pad_on_right else "question"], truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) if use_mixing: candidate_questions = generate_candidate_questions(examples) tokenized_candidates = tokenizer( candidate_questions if pad_on_right else examples["context"], examples["context"] if pad_on_right else candidate_questions, truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") tokenized_examples["example_id"] = [] if use_mixing: tokenized_examples["candidate_input_ids"] = tokenized_candidates["input_ids"] tokenized_examples["candidate_attention_mask"] = tokenized_candidates["attention_mask"] tokenized_examples["candidate_token_type_ids"] = tokenized_candidates["token_type_ids"] for i in range(len(tokenized_examples["input_ids"])): sequence_ids = tokenized_examples.sequence_ids(i) context_index = 1 if pad_on_right else 0 sample_index = sample_mapping[i] tokenized_examples["example_id"].append(examples["id"][sample_index]) tokenized_examples["offset_mapping"][i] = [ (o if sequence_ids[k] == context_index else None) for k, o in enumerate(tokenized_examples["offset_mapping"][i]) ] return tokenized_examples validation_features = datasets["validation"].map( lambda xs: prepare_validation_features(xs, True), batched=True, remove_columns=datasets["validation"].column_names ) ``` I guess this might happen because of the batched=True. I see similar issues in this repo related to Arrow table length mismatch errors, but in their cases, the numbers vary a lot. In my case, this error always happens when the expected length and the actual length are very close. Thanks for the help!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2070/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2070/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1821
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1821/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1821/comments
https://api.github.com/repos/huggingface/datasets/issues/1821/events
https://github.com/huggingface/datasets/issues/1821
801,747,647
MDU6SXNzdWU4MDE3NDc2NDc=
1,821
Provide better exception message when one of many files results in an exception
{ "avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4", "events_url": "https://api.github.com/users/david-waterworth/events{/privacy}", "followers_url": "https://api.github.com/users/david-waterworth/followers", "following_url": "https://api.github.com/users/david-waterworth/following{/other_user}", "gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/david-waterworth", "id": 5028974, "login": "david-waterworth", "node_id": "MDQ6VXNlcjUwMjg5NzQ=", "organizations_url": "https://api.github.com/users/david-waterworth/orgs", "received_events_url": "https://api.github.com/users/david-waterworth/received_events", "repos_url": "https://api.github.com/users/david-waterworth/repos", "site_admin": false, "starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions", "type": "User", "url": "https://api.github.com/users/david-waterworth" }
[]
closed
false
null
[]
null
[ "Hi!\r\n\r\nThank you for reporting this issue. I agree that the information about the exception should be more clear and explicit.\r\n\r\nI could take on this issue.\r\n\r\nOn the meantime, as you can see from the exception stack trace, HF Datasets uses pandas to read the CSV files. You can pass arguments to `pandas.read_csv` by passing additional keyword arguments to `load_dataset`. For example, you may find useful this argument:\r\n- `error_bad_lines` : bool, default True\r\n Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. If False, then these “bad lines” will be dropped from the DataFrame that is returned.\r\n\r\nYou could try:\r\n```python\r\ndatasets = load_dataset(\"csv\", data_files=dict(train=train_files, validation=validation_files), error_bad_lines=False)\r\n```\r\n" ]
"2021-02-05T00:49:03Z"
"2021-02-09T17:39:27Z"
"2021-02-09T17:39:27Z"
NONE
null
null
null
I find that when I process many files, i.e. ``` train_files = glob.glob('train*.csv') validation_files = glob.glob('validation*.csv') datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files)) ``` I sometimes encounter an error due to one of the files being malformed (i.e. no data, or a comma in a field that isn't quoted, etc.). For example, this is the tail of an exception which I suspect is due to a stray comma. > File "pandas/_libs/parsers.pyx", line 756, in pandas._libs.parsers.TextReader.read > File "pandas/_libs/parsers.pyx", line 783, in pandas._libs.parsers.TextReader._read_low_memory > File "pandas/_libs/parsers.pyx", line 827, in pandas._libs.parsers.TextReader._read_rows > File "pandas/_libs/parsers.pyx", line 814, in pandas._libs.parsers.TextReader._tokenize_rows > File "pandas/_libs/parsers.pyx", line 1951, in pandas._libs.parsers.raise_parser_error > pandas.errors.ParserError: Error tokenizing data. C error: Expected 2 fields in line 559, saw 3 It would be nice if the exception trace contained the name of the file being processed (I have 250 separate files!)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1821/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1821/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3460
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3460/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3460/comments
https://api.github.com/repos/huggingface/datasets/issues/3460/events
https://github.com/huggingface/datasets/pull/3460
1,085,002,469
PR_kwDODunzps4wFyCf
3,460
Don't encode lists as strings when using `Value("string")`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Should we close this PR?", "since the original issue has to do with metrics that have been moved to `evaludate` I think we can close this one", "_The documentation is not available anymore as the PR was closed or merged._" ]
"2021-12-20T16:50:49Z"
"2023-09-25T10:28:30Z"
"2023-09-25T09:20:28Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3460.diff", "html_url": "https://github.com/huggingface/datasets/pull/3460", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3460.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3460" }
Following https://github.com/huggingface/datasets/pull/3456#event-5792250497 it looks like `datasets` can silently convert lists to strings using `str()`, instead of raising an error. This PR fixes this and should fix the issue with WER showing low values if the input format is not right.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3460/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3460/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1493
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1493/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1493/comments
https://api.github.com/repos/huggingface/datasets/issues/1493/events
https://github.com/huggingface/datasets/pull/1493
762,979,415
MDExOlB1bGxSZXF1ZXN0NTM3NDc0MDc1
1,493
Added RONEC dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/2815308?v=4", "events_url": "https://api.github.com/users/iliemihai/events{/privacy}", "followers_url": "https://api.github.com/users/iliemihai/followers", "following_url": "https://api.github.com/users/iliemihai/following{/other_user}", "gists_url": "https://api.github.com/users/iliemihai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iliemihai", "id": 2815308, "login": "iliemihai", "node_id": "MDQ6VXNlcjI4MTUzMDg=", "organizations_url": "https://api.github.com/users/iliemihai/orgs", "received_events_url": "https://api.github.com/users/iliemihai/received_events", "repos_url": "https://api.github.com/users/iliemihai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iliemihai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliemihai/subscriptions", "type": "User", "url": "https://api.github.com/users/iliemihai" }
[]
closed
false
null
[]
null
[ "Thanks for the PR @iliemihai . \r\n\r\nFew comments - \r\n\r\nCan you run - \r\n`python datasets-cli dummy_data ./datasets/ronec --auto_generate` to generate dummy data.\r\n\r\nAlso, before committing files run : \r\n`make style`\r\n`flake8 datasets`\r\nthen you can add and commit files.", "> Thanks for the PR @iliemihai .\r\n> \r\n> Few comments -\r\n> \r\n> Can you run -\r\n> `python datasets-cli dummy_data ./datasets/ronec --auto_generate` to generate dummy data.\r\n> \r\n> Also, before committing files run :\r\n> `make style`\r\n> `flake8 datasets`\r\n> then you can add and commit files.\r\n\r\nSorry, forgot to generate dummy data. I will do it now :D", "Awesome, good job @iliemihai !\r\nI think the PR is ready to merge.\r\n@lhoestq would you mind double-checking this ?", "Had to regenerate the dummy data since I just found out they were empty files" ]
"2020-12-11T22:14:50Z"
"2020-12-21T14:48:56Z"
"2020-12-21T14:48:56Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1493.diff", "html_url": "https://github.com/huggingface/datasets/pull/1493", "merged_at": "2020-12-21T14:48:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/1493.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1493" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1493/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1493/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4455
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4455/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4455/comments
https://api.github.com/repos/huggingface/datasets/issues/4455/events
https://github.com/huggingface/datasets/pull/4455
1,263,089,067
PR_kwDODunzps45O5F9
4,455
Update data URLs in fever dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-06-07T10:40:54Z"
"2022-06-08T07:24:54Z"
"2022-06-08T07:16:17Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4455.diff", "html_url": "https://github.com/huggingface/datasets/pull/4455", "merged_at": "2022-06-08T07:16:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/4455.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4455" }
As stated in their website, data owners updated their URLs on 28/04/2022. This PR updates the data URLs. Fix #4452.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4455/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4455/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3748/comments
https://api.github.com/repos/huggingface/datasets/issues/3748/events
https://github.com/huggingface/datasets/pull/3748
1,142,128,763
PR_kwDODunzps4zCEyM
3,748
Add tqdm arguments
{ "avatar_url": "https://avatars.githubusercontent.com/u/28087825?v=4", "events_url": "https://api.github.com/users/penguinwang96825/events{/privacy}", "followers_url": "https://api.github.com/users/penguinwang96825/followers", "following_url": "https://api.github.com/users/penguinwang96825/following{/other_user}", "gists_url": "https://api.github.com/users/penguinwang96825/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/penguinwang96825", "id": 28087825, "login": "penguinwang96825", "node_id": "MDQ6VXNlcjI4MDg3ODI1", "organizations_url": "https://api.github.com/users/penguinwang96825/orgs", "received_events_url": "https://api.github.com/users/penguinwang96825/received_events", "repos_url": "https://api.github.com/users/penguinwang96825/repos", "site_admin": false, "starred_url": "https://api.github.com/users/penguinwang96825/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/penguinwang96825/subscriptions", "type": "User", "url": "https://api.github.com/users/penguinwang96825" }
[]
closed
false
null
[]
null
[]
"2022-02-18T00:47:55Z"
"2022-02-18T00:59:15Z"
"2022-02-18T00:59:15Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3748.diff", "html_url": "https://github.com/huggingface/datasets/pull/3748", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3748.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3748" }
In this PR, there are two changes. 1. The progress bar can now be shown by passing the length of the iterator. 2. tqdm_kwargs can be passed in to allow more flexible control over the tqdm library.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3748/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3748/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4719
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4719/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4719/comments
https://api.github.com/repos/huggingface/datasets/issues/4719/events
https://github.com/huggingface/datasets/issues/4719
1,309,854,492
I_kwDODunzps5OEssc
4,719
Issue loading TheNoob3131/mosquito-data dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/53668030?v=4", "events_url": "https://api.github.com/users/thenerd31/events{/privacy}", "followers_url": "https://api.github.com/users/thenerd31/followers", "following_url": "https://api.github.com/users/thenerd31/following{/other_user}", "gists_url": "https://api.github.com/users/thenerd31/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thenerd31", "id": 53668030, "login": "thenerd31", "node_id": "MDQ6VXNlcjUzNjY4MDMw", "organizations_url": "https://api.github.com/users/thenerd31/orgs", "received_events_url": "https://api.github.com/users/thenerd31/received_events", "repos_url": "https://api.github.com/users/thenerd31/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thenerd31/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thenerd31/subscriptions", "type": "User", "url": "https://api.github.com/users/thenerd31" }
[]
closed
false
null
[]
null
[ "I am also getting a ValueError: 'Couldn't cast' at the bottom. Is this because of some delimiter issue? My dataset is on the Huggingface Hub. If you could look at it, that would be greatly appreciated.", "Hi @thenerd31, thanks for reporting.\r\n\r\nPlease note that your issue is not caused by the Hugging Face Datasets library, but it has to do with the specific implementation of your dataset on the Hub.\r\n\r\nTherefore, I'm transferring this discussion to your own dataset Community tab: https://huggingface.co/datasets/TheNoob3131/mosquito-data/discussions/1" ]
"2022-07-19T17:47:37Z"
"2022-07-20T06:46:57Z"
"2022-07-20T06:46:02Z"
NONE
null
null
null
![image](https://user-images.githubusercontent.com/53668030/179815591-d75fa7d3-3122-485f-a852-b06a68909066.png) So my dataset is public in the Huggingface Hub, but when I try to load it using the load_dataset command, it shows that it is downloading the files, but throws a ValueError. When I went to my directory to see if the files were downloaded, the folder was blank. Here is the error below: ValueError Traceback (most recent call last) Input In [8], in <cell line: 3>() 1 from datasets import load_dataset ----> 3 dataset = load_dataset("TheNoob3131/mosquito-data", split="train") File ~\Anaconda3\lib\site-packages\datasets\load.py:1679, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1676 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1678 # Download and prepare data -> 1679 builder_instance.download_and_prepare( 1680 download_config=download_config, 1681 download_mode=download_mode, 1682 ignore_verifications=ignore_verifications, 1683 try_from_hf_gcs=try_from_hf_gcs, 1684 use_auth_token=use_auth_token, 1685 ) 1687 # Build dataset for splits 1688 keep_in_memory = ( 1689 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1690 ) Is the dataset in the wrong format or is there some security permission that I should enable?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4719/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4719/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5235
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5235/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5235/comments
https://api.github.com/repos/huggingface/datasets/issues/5235/events
https://github.com/huggingface/datasets/pull/5235
1,448,052,660
PR_kwDODunzps5C1pjc
5,235
Pin `typer` version in tests to <0.5 to fix Windows CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
null
[]
null
[]
"2022-11-14T13:17:02Z"
"2022-11-14T15:43:01Z"
"2022-11-14T13:41:12Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5235.diff", "html_url": "https://github.com/huggingface/datasets/pull/5235", "merged_at": "2022-11-14T13:41:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/5235.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5235" }
Otherwise `click` fails on Windows: ``` Traceback (most recent call last): File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\spacy\__main__.py", line 4, in <module> setup_cli() File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\spacy\cli\_util.py", line 71, in setup_cli command(prog_name=COMMAND) File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\click\core.py", line 829, in __call__ return self.main(*args, **kwargs) File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\typer\core.py", line 785, in main **extra, File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\typer\core.py", line 190, in _main args = click.utils._expand_args(args) AttributeError: module 'click.utils' has no attribute '_expand_args' ``` See https://github.com/tiangolo/typer/issues/427
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5235/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5235/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6086
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6086/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6086/comments
https://api.github.com/repos/huggingface/datasets/issues/6086/events
https://github.com/huggingface/datasets/issues/6086
1,825,009,268
I_kwDODunzps5sx250
6,086
Support `fsspec` in `Dataset.to_<format>` methods
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" } ]
null
[ "Hi @mariosasko unless someone's already working on it, I guess I can tackle it!", "Hi! Sure, feel free to tackle this.", "#self-assign", "I'm assuming this should just cover `to_csv`, `to_parquet`, and `to_json`, right? As `to_list` and `to_dict` just return Python objects, `to_pandas` returns a `pandas.DataFrame` and `to_sql` just inserts into a SQL DB, is that right?" ]
"2023-07-27T19:08:37Z"
"2023-07-28T15:28:26Z"
null
CONTRIBUTOR
null
null
null
Supporting this should be fairly easy. Requested on the forum [here](https://discuss.huggingface.co/t/how-can-i-convert-a-loaded-dataset-in-to-a-parquet-file-and-save-it-to-the-s3/48353).
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6086/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6086/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4640/comments
https://api.github.com/repos/huggingface/datasets/issues/4640/events
https://github.com/huggingface/datasets/pull/4640
1,295,495,699
PR_kwDODunzps4660rI
4,640
Support all split in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4640). All of your documentation changes will be reflected on that endpoint." ]
"2022-07-06T08:56:38Z"
"2022-07-06T15:19:55Z"
null
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/4640.diff", "html_url": "https://github.com/huggingface/datasets/pull/4640", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4640.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4640" }
Fix #4637.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4640/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4640/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5464/comments
https://api.github.com/repos/huggingface/datasets/issues/5464/events
https://github.com/huggingface/datasets/issues/5464
1,557,462,104
I_kwDODunzps5c1PxY
5,464
NonMatchingChecksumError for hendrycks_test
{ "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sarahwie", "id": 8027676, "login": "sarahwie", "node_id": "MDQ6VXNlcjgwMjc2NzY=", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "repos_url": "https://api.github.com/users/sarahwie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "type": "User", "url": "https://api.github.com/users/sarahwie" }
[]
closed
false
null
[]
null
[ "Thanks for reporting, @sarahwie.\r\n\r\nPlease note this issue was already fixed in `datasets` 2.6.0 version:\r\n- #5040\r\n\r\nIf you update your `datasets` version, you will be able to load the dataset:\r\n```\r\npip install -U datasets\r\n```", "Oops, missed that I needed to upgrade. Thanks!" ]
"2023-01-26T00:43:23Z"
"2023-01-27T05:44:31Z"
"2023-01-26T07:41:58Z"
NONE
null
null
null
### Describe the bug The checksum of the file has likely changed on the remote host. ### Steps to reproduce the bug `dataset = nlp.load_dataset("hendrycks_test", "anatomy")` ### Expected behavior no error thrown ### Environment info - `datasets` version: 2.2.1 - Platform: macOS-13.1-arm64-arm-64bit - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5464/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5464/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6472
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6472/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6472/comments
https://api.github.com/repos/huggingface/datasets/issues/6472/events
https://github.com/huggingface/datasets/issues/6472
2,026,493,439
I_kwDODunzps54ydX_
6,472
CI quality is broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2023-12-05T15:35:34Z"
"2023-12-06T08:17:34Z"
"2023-12-05T18:08:43Z"
MEMBER
null
null
null
See: https://github.com/huggingface/datasets/actions/runs/7100835633/job/19327734359 ``` Would reformat: src/datasets/features/image.py 1 file would be reformatted, 253 files left unchanged ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6472/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6472/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3531
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3531/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3531/comments
https://api.github.com/repos/huggingface/datasets/issues/3531/events
https://github.com/huggingface/datasets/issues/3531
1,094,033,280
I_kwDODunzps5BNZ-A
3,531
Give clearer instructions to add the YAML tags
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2022-01-05T06:44:20Z"
"2022-01-17T15:54:36Z"
"2022-01-17T15:54:36Z"
MEMBER
null
null
null
## Describe the bug As reported by @julien-c, many community datasets contain the line `YAML tags:` at the top of the YAML section in the header of the README file. See e.g.: https://huggingface.co/datasets/bigscience/P3/commit/a03bea08cf4d58f268b469593069af6aeb15de32 Maybe we should give clearer instruction/hints in the README template.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3531/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3531/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4271/comments
https://api.github.com/repos/huggingface/datasets/issues/4271/events
https://github.com/huggingface/datasets/issues/4271
1,224,404,403
I_kwDODunzps5I-u2z
4,271
A typo in docs of datasets.disable_progress_bar
{ "avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4", "events_url": "https://api.github.com/users/jiangwangyi/events{/privacy}", "followers_url": "https://api.github.com/users/jiangwangyi/followers", "following_url": "https://api.github.com/users/jiangwangyi/following{/other_user}", "gists_url": "https://api.github.com/users/jiangwangyi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jiangwangyi", "id": 39762734, "login": "jiangwangyi", "node_id": "MDQ6VXNlcjM5NzYyNzM0", "organizations_url": "https://api.github.com/users/jiangwangyi/orgs", "received_events_url": "https://api.github.com/users/jiangwangyi/received_events", "repos_url": "https://api.github.com/users/jiangwangyi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jiangwangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiangwangyi/subscriptions", "type": "User", "url": "https://api.github.com/users/jiangwangyi" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" } ]
null
[ "Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :)" ]
"2022-05-03T17:44:56Z"
"2022-05-04T06:58:35Z"
"2022-05-04T06:58:35Z"
NONE
null
null
null
## Describe the bug in the docs of V2.1.0 datasets.disable_progress_bar, we should replace "enable" with "disable".
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4271/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4271/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4268/comments
https://api.github.com/repos/huggingface/datasets/issues/4268/events
https://github.com/huggingface/datasets/issues/4268
1,223,331,964
I_kwDODunzps5I6pB8
4,268
error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
{ "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/i-am-neo", "id": 102043285, "login": "i-am-neo", "node_id": "U_kgDOBhUOlQ", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "repos_url": "https://api.github.com/users/i-am-neo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "type": "User", "url": "https://api.github.com/users/i-am-neo" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "It would help a lot to be able to preview the dataset - I'd like to see if the pronunciations are in the dataset, eg. for [\"word\"](https://en.wiktionary.org/wiki/word),\r\n\r\nPronunciation\r\n([Received Pronunciation](https://en.wikipedia.org/wiki/Received_Pronunciation)) [IPA](https://en.wiktionary.org/wiki/Wiktionary:International_Phonetic_Alphabet)([key](https://en.wiktionary.org/wiki/Appendix:English_pronunciation)): /wɜːd/\r\n([General American](https://en.wikipedia.org/wiki/General_American)) [enPR](https://en.wiktionary.org/wiki/Appendix:English_pronunciation): wûrd, [IPA](https://en.wiktionary.org/wiki/Wiktionary:International_Phonetic_Alphabet)([key](https://en.wiktionary.org/wiki/Appendix:English_pronunciation)): /wɝd/", "Hi @i-am-neo, thanks for reporting.\r\n\r\nNormally this dataset should be private and not accessible for public use. @cakiki, @lvwerra, any reason why is it public? I see many other Wikimedia datasets are also public.\r\n\r\nAlso note that last commit \"Add metadata\" (https://huggingface.co/datasets/bigscience-catalogue-lm-data/lm_en_wiktionary_filtered/commit/dc2f458dab50e00f35c94efb3cd4009996858609) introduced buggy data files (`data/file-01.jsonl.gz.lock`, `data/file-01.jsonl.gz.lock.lock`). The same bug appears in other datasets as well.\r\n\r\n@i-am-neo, please note that in the near future we are planning to make public all datasets used for the BigScience project (at least all of them whose license allows to do that). Once public, they will be accessible for all the NLP community.", "Ah this must be a bug introduced at creation time since the repos were created programmatically; I'll go ahead and make them private; sorry about that!", "All datasets are private now. \r\n\r\nRe:that bug I think we're currently avoiding it by avoiding verifications. (i.e. `ignore_verifications=True`)", "Thanks a lot, @cakiki.\r\n\r\n@i-am-neo, I'm closing this issue for now because the dataset is not publicly available yet. Just stay tuned, as we will soon release all the BigScience open-license datasets. ", "Thanks for letting me know, @albertvillanova @cakiki.\r\nAny chance of having a subset alpha version in the meantime? \r\nI only need two dicts out of wiktionary: 1) phoneme(as key): word, and 2) word(as key): its phonemes.\r\n\r\nWould like to use it for a mini-poc [Robust ASR](https://github.com/huggingface/transformers/issues/13162#issuecomment-1096881290) decoding, cc @patrickvonplaten. \r\n\r\n(Patrick, possible to email you so as not to litter github with comments? I have some observations after experiments training hubert on some YT AMI-like data (11.44% wer). Also wonder if a robust ASR is on your/HG's roadmap). Thanks!", "Hey @i-am-neo,\r\n\r\nCool to hear that you're working on Robust ASR! Feel free to drop me a mail :-)", "@i-am-neo This particular subset of the dataset was taken from the [CirrusSearch dumps](https://dumps.wikimedia.org/other/cirrussearch/current/)\r\nYou're specifically after the [enwiktionary-20220425-cirrussearch-content.json.gz](https://dumps.wikimedia.org/other/cirrussearch/current/enwiktionary-20220425-cirrussearch-content.json.gz) file", "thanks @cakiki ! <del>I could access the gz file yesterday (but neglected to tuck it away somewhere safe), and today the link is throwing a 404. Can you help? </del> Never mind, got it!", "thanks @patrickvonplaten. will do - getting my observations together." ]
"2022-05-02T20:34:25Z"
"2022-05-06T15:53:30Z"
"2022-05-03T11:23:48Z"
NONE
null
null
null
## Describe the bug Error generated when attempting to download dataset ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered") ``` ## Expected results A clear and concise description of the expected results. ## Actual results ``` ExpectedMoreDownloadedFiles Traceback (most recent call last) [<ipython-input-62-4ac5cf959477>](https://localhost:8080/#) in <module>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered") 3 frames [/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name) 31 return 32 if len(set(expected_checksums) - set(recorded_checksums)) > 0: ---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) 34 if len(set(recorded_checksums) - set(expected_checksums)) > 0: 35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums))) ExpectedMoreDownloadedFiles: {'/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz', '/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz.lock'} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4268/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4268/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/586
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/586/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/586/comments
https://api.github.com/repos/huggingface/datasets/issues/586/events
https://github.com/huggingface/datasets/pull/586
695,237,999
MDExOlB1bGxSZXF1ZXN0NDgxNTA5MzU1
586
Better message when data files is empty
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-09-07T15:59:57Z"
"2020-09-09T09:00:09Z"
"2020-09-09T09:00:08Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/586.diff", "html_url": "https://github.com/huggingface/datasets/pull/586", "merged_at": "2020-09-09T09:00:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/586.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/586" }
Fix #581
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/586/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/586/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/16
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/16/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/16/comments
https://api.github.com/repos/huggingface/datasets/issues/16/events
https://github.com/huggingface/datasets/pull/16
605,661,462
MDExOlB1bGxSZXF1ZXN0NDA4MDIyMTUz
16
create our own DownloadManager
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Looks great to me! ", "The new download manager is ready. I removed the old folder and I fixed a few remaining dependencies.\r\nI tested it on squad and a few others from the dataset folder and it works fine.\r\n\r\nThe only impact of these changes is that it breaks the `download_and_prepare` script that was used to register the checksums when we create a dataset, as the checksum logic is not implemented.\r\n\r\nLet me know if you have remarks", "Ok merged it (a bit fast for you to update the copyright, now I see that. but it's ok, we'll do a pass on these doc/copyright before releasing anyway)", "Actually two additional things here @lhoestq (I merged too fast sorry, let's make a new PR for additional developments):\r\n- I think we can remove some dependencies now (e.g. `promises`) in setup.py, can you have a look?\r\n- also, I think we can remove the boto3 dependency like here: https://github.com/huggingface/transformers/pull/3968" ]
"2020-04-23T16:08:07Z"
"2021-05-05T18:25:24Z"
"2020-04-25T21:25:10Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/16.diff", "html_url": "https://github.com/huggingface/datasets/pull/16", "merged_at": "2020-04-25T21:25:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/16.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/16" }
I tried to create our own - and way simpler - download manager, by replacing all the complicated stuff with our own `cached_path` solution. With this implementation, I tried `dataset = nlp.load('squad')` and it seems to work fine. For the implementation, what I did exactly: - I copied the old download manager - I removed all the dependencies on the old `download` files - I replaced all the download + extract calls by calls to `cached_path` - I removed unused parameters (extract_dir, compute_stats) (maybe compute_stats could be re-added later if we want to compute stats...) - I left some functions unimplemented for now. We will probably have to implement them because they are used by some datasets scripts (download_kaggle_data, iter_archive) or because we may need them at some point (download_checksums, _record_sizes_checksums) Let me know if you think that this is going in the right direction or if you have remarks. Note: I didn't write any test yet as I wanted to read your remarks first
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/16/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/16/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3128/comments
https://api.github.com/repos/huggingface/datasets/issues/3128/events
https://github.com/huggingface/datasets/issues/3128
1,032,201,870
I_kwDODunzps49hiaO
3,128
Support Audio feature for TAR archives in sequential access
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2021-10-21T08:23:01Z"
"2021-11-17T17:42:07Z"
"2021-11-17T17:42:07Z"
MEMBER
null
null
null
Currently, the Audio feature accesses each audio file by its file path. However, streamed TAR archives do not allow random access to their archived files. Therefore, we should enhance the Audio feature to support sequential access to TAR-archived files.
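For illustration, sequential access means iterating over the archive members in order instead of seeking by file path. A minimal sketch with Python's standard `tarfile` module, assuming a non-seekable byte stream as input (the decoding of the audio bytes is left out):

```python
import io
import tarfile

def iter_audio_files(stream: io.RawIOBase):
    """Yield (filename, raw bytes) pairs from a non-seekable TAR stream."""
    # mode="r|*" opens the archive for sequential reading, which works on
    # streams that do not support random access (e.g. HTTP responses).
    with tarfile.open(fileobj=stream, mode="r|*") as tar:
        for member in tar:
            if member.isfile():
                f = tar.extractfile(member)
                if f is not None:
                    yield member.name, f.read()
```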
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3128/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3128/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6313/comments
https://api.github.com/repos/huggingface/datasets/issues/6313/events
https://github.com/huggingface/datasets/pull/6313
1,951,527,712
PR_kwDODunzps5dPGmL
6,313
Fix commit message formatting in multi-commit uploads
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006760 / 0.011353 (-0.004593) | 0.003918 / 0.011008 (-0.007091) | 0.084016 / 0.038508 (0.045508) | 0.069927 / 0.023109 (0.046818) | 0.307898 / 0.275898 (0.032000) | 0.337453 / 0.323480 (0.013973) | 0.004132 / 0.007986 (-0.003854) | 0.003248 / 0.004328 (-0.001081) | 0.064526 / 0.004250 (0.060275) | 0.056424 / 0.037052 (0.019371) | 0.316313 / 0.258489 (0.057824) | 0.356302 / 0.293841 (0.062461) | 0.030634 / 0.128546 (-0.097912) | 0.008467 / 0.075646 (-0.067180) | 0.286676 / 0.419271 (-0.132595) | 0.051813 / 0.043533 (0.008280) | 0.309874 / 0.255139 (0.054735) | 0.332513 / 0.283200 (0.049313) | 0.023919 / 0.141683 (-0.117764) | 1.509033 / 1.452155 (0.056878) | 1.549636 / 1.492716 (0.056920) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221464 / 0.018006 (0.203458) | 0.447873 / 0.000490 (0.447384) | 0.002408 / 0.000200 (0.002208) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027634 / 0.037411 (-0.009777) | 0.081802 / 0.014526 (0.067276) | 0.781489 / 0.176557 (0.604933) | 0.165184 / 0.737135 (-0.571951) | 0.121526 / 0.296338 (-0.174813) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408215 / 0.215209 (0.193006) | 4.091192 / 2.077655 (2.013538) | 2.062608 
/ 1.504120 (0.558488) | 1.895747 / 1.541195 (0.354552) | 1.873682 / 1.468490 (0.405192) | 0.484184 / 4.584777 (-4.100593) | 3.469096 / 3.745712 (-0.276616) | 3.365325 / 5.269862 (-1.904537) | 2.000333 / 4.565676 (-2.565343) | 0.056661 / 0.424275 (-0.367614) | 0.007100 / 0.007607 (-0.000507) | 0.478587 / 0.226044 (0.252542) | 4.768703 / 2.268929 (2.499774) | 2.472432 / 55.444624 (-52.972192) | 2.133611 / 6.876477 (-4.742865) | 2.154296 / 2.142072 (0.012223) | 0.582293 / 4.805227 (-4.222934) | 0.131932 / 6.500664 (-6.368732) | 0.060259 / 0.075469 (-0.015211) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259167 / 1.841788 (-0.582620) | 18.465604 / 8.074308 (10.391296) | 14.024528 / 10.191392 (3.833136) | 0.162320 / 0.680424 (-0.518104) | 0.018144 / 0.534201 (-0.516057) | 0.389931 / 0.579283 (-0.189352) | 0.396456 / 0.434364 (-0.037908) | 0.454734 / 0.540337 (-0.085603) | 0.636406 / 1.386936 (-0.750530) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006565 / 0.011353 (-0.004788) | 0.004008 / 0.011008 (-0.007000) | 0.064526 / 0.038508 (0.026018) | 0.071963 / 0.023109 (0.048854) | 0.415456 / 0.275898 (0.139557) | 0.441199 / 0.323480 (0.117719) | 0.005619 / 0.007986 (-0.002366) | 0.003261 / 0.004328 (-0.001067) | 0.064817 / 0.004250 (0.060567) | 0.055349 / 0.037052 (0.018296) | 0.425172 / 0.258489 (0.166683) | 0.452629 / 0.293841 (0.158788) | 0.031676 / 0.128546 (-0.096870) | 0.008432 / 0.075646 (-0.067214) | 0.071752 / 0.419271 (-0.347519) | 0.047176 / 0.043533 (0.003643) | 0.408641 / 0.255139 (0.153502) | 0.428579 / 0.283200 (0.145380) | 0.021548 / 0.141683 (-0.120135) | 1.495153 / 1.452155 (0.042999) | 1.557933 / 1.492716 (0.065217) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212749 / 0.018006 (0.194743) | 0.441263 / 0.000490 (0.440773) | 0.005831 / 0.000200 (0.005631) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031844 / 0.037411 (-0.005567) | 0.091590 / 0.014526 (0.077064) | 0.102859 / 0.176557 (-0.073697) | 0.155859 / 0.737135 (-0.581276) | 0.104717 / 0.296338 (-0.191622) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425924 / 0.215209 (0.210715) | 4.292829 / 2.077655 (2.215174) | 2.314350 / 1.504120 (0.810230) | 2.163087 / 1.541195 (0.621892) | 2.217310 / 1.468490 (0.748820) | 0.490889 / 4.584777 (-4.093887) | 3.498287 / 3.745712 (-0.247425) | 3.224980 / 5.269862 (-2.044881) | 1.987739 / 4.565676 (-2.577938) | 0.057486 / 0.424275 (-0.366790) | 0.007199 / 0.007607 (-0.000408) | 0.501194 / 0.226044 (0.275149) | 5.015202 / 2.268929 (2.746273) | 2.816307 / 55.444624 (-52.628318) | 2.474593 / 6.876477 (-4.401884) | 2.649510 / 2.142072 (0.507437) | 0.597167 / 4.805227 (-4.208060) | 0.131199 / 6.500664 (-6.369465) | 0.059532 / 0.075469 (-0.015938) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.384053 / 1.841788 (-0.457734) | 18.964201 / 8.074308 (10.889893) | 14.336209 / 10.191392 (4.144817) | 0.187522 / 0.680424 (-0.492902) | 0.020201 / 0.534201 (-0.514000) | 0.394778 / 0.579283 (-0.184505) | 0.408393 / 0.434364 (-0.025971) | 0.470965 / 0.540337 (-0.069373) | 0.667974 / 1.386936 (-0.718962) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b3333d790800ddaa3bf386ee71dc800258c921c \"CML watermark\")\n" ]
"2023-10-19T07:53:56Z"
"2023-10-20T14:06:13Z"
"2023-10-20T13:57:39Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6313.diff", "html_url": "https://github.com/huggingface/datasets/pull/6313", "merged_at": "2023-10-20T13:57:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/6313.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6313" }
Currently, the commit messages keep accumulating part suffixes: - `Upload dataset (part 00000-of-00002)` - `Upload dataset (part 00000-of-00002) (part 00001-of-00002)` This behavior was introduced in https://github.com/huggingface/datasets/pull/6269 This PR fixes the issue so that each commit carries a single suffix: - `Upload dataset (part 00000-of-00002)` - `Upload dataset (part 00001-of-00002)`
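For illustration, the intended numbering can be expressed as a small formatting helper (a hypothetical sketch, not the code changed in this PR):

```python
def part_commit_message(part_index: int, num_parts: int) -> str:
    """Format the commit message for one shard of a multi-commit upload."""
    return f"Upload dataset (part {part_index:05d}-of-{num_parts:05d})"

# Each commit gets exactly one suffix:
messages = [part_commit_message(i, 2) for i in range(2)]
# ['Upload dataset (part 00000-of-00002)', 'Upload dataset (part 00001-of-00002)']
```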
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6313/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6313/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2319/comments
https://api.github.com/repos/huggingface/datasets/issues/2319/events
https://github.com/huggingface/datasets/issues/2319
876,251,376
MDU6SXNzdWU4NzYyNTEzNzY=
2,319
UnicodeDecodeError for OSCAR (Afrikaans)
{ "avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4", "events_url": "https://api.github.com/users/sgraaf/events{/privacy}", "followers_url": "https://api.github.com/users/sgraaf/followers", "following_url": "https://api.github.com/users/sgraaf/following{/other_user}", "gists_url": "https://api.github.com/users/sgraaf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgraaf", "id": 8904453, "login": "sgraaf", "node_id": "MDQ6VXNlcjg5MDQ0NTM=", "organizations_url": "https://api.github.com/users/sgraaf/orgs", "received_events_url": "https://api.github.com/users/sgraaf/received_events", "repos_url": "https://api.github.com/users/sgraaf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgraaf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgraaf/subscriptions", "type": "User", "url": "https://api.github.com/users/sgraaf" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting, @sgraaf.\r\n\r\nI am going to have a look at it. \r\n\r\nI guess the expected codec is \"UTF-8\". Normally, when no explicitly codec is passed, Python uses one which is platform-dependent. For Linux machines, the default codec is `utf_8`, which is OK. However for Windows machine, the default codec is `cp1252`, which causes the problem.", "Awesome, thank you. 😃 ", "@sgraaf, I have just merged the fix in the master branch.\r\n\r\nYou can either:\r\n- install `datasets` from source code\r\n- wait until we make the next release of `datasets`\r\n- set the `utf-8` codec as your default instead of `cp1252`. This can be done by activating the Python [UTF-8 mode](https://www.python.org/dev/peps/pep-0540) either by passing the command-line option `-X utf8` or by setting the environment variable `PYTHONUTF8=1`." ]
"2021-05-05T09:22:52Z"
"2021-05-05T10:57:31Z"
"2021-05-05T10:50:55Z"
NONE
null
null
null
## Describe the bug When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("oscar", "unshuffled_deduplicated_af") ``` ## Expected results Anything but an error, really. ## Actual results ```python >>> from datasets import load_dataset >>> dataset = load_dataset("oscar", "unshuffled_deduplicated_af") Downloading: 14.7kB [00:00, 4.91MB/s] Downloading: 3.07MB [00:00, 32.6MB/s] Downloading and preparing dataset oscar/unshuffled_deduplicated_af (download: 62.93 MiB, generated: 163.38 MiB, post-processed: Unknown size, total: 226.32 MiB) to C:\Users\sgraaf\.cache\huggingface\datasets\oscar\unshuffled_deduplicated_af\1.0.0\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464... Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81.0/81.0 [00:00<00:00, 40.5kB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 66.0M/66.0M [00:18<00:00, 3.50MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\load.py", line 745, in load_dataset builder_instance.download_and_prepare( File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 574, in download_and_prepare self._download_and_prepare( File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 979, in _prepare_split for key, record in utils.tqdm( File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\std.py", line 1133, in __iter__ for obj in iterable: File "C:\Users\sgraaf\.cache\huggingface\modules\datasets_modules\datasets\oscar\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464\oscar.py", line 359, in _generate_examples for line in f: File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 7454: character maps to <undefined> ``` ## Versions Paste the output of the following code: ```python import datasets import sys import platform print(f""" - Datasets: {datasets.__version__} - Python: {sys.version} - Platform: {platform.platform()} """) ``` - Datasets: 1.6.2 - Python: 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)] - Platform: Windows-10-10.0.19041-SP0
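As the discussion explains, the failure comes from Python's platform-dependent default codec (cp1252 on Windows). A minimal illustration of the failure mode and the fix, using a hypothetical file path:

```python
# On Windows, open() defaults to the locale codec (often cp1252), which
# cannot decode byte 0x9d, hence the UnicodeDecodeError above:
# with open("shard.txt") as f:
#     text = f.read()

# Passing the codec explicitly makes the read platform-independent:
with open("shard.txt", encoding="utf-8") as f:
    text = f.read()
```

Alternatively, enabling Python's UTF-8 mode (`python -X utf8` or `PYTHONUTF8=1`) changes the default codec globally.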
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2319/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2319/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4641/comments
https://api.github.com/repos/huggingface/datasets/issues/4641/events
https://github.com/huggingface/datasets/issues/4641
1,295,633,250
I_kwDODunzps5NOcti
4,641
Dataset Viewer issue for kmfoda/booksum
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting, @lewtun.\r\n\r\nIt works locally in streaming mode:\r\n```\r\n{'bid': 27681,\r\n 'is_aggregate': True,\r\n 'source': 'cliffnotes',\r\n 'chapter_path': 'all_chapterized_books/27681-chapters/chapters_1_to_2.txt',\r\n 'summary_path': 'finished_summaries/cliffnotes/The Last of the Mohicans/section_1_part_0.txt',\r\n 'book_id': 'The Last of the Mohicans.chapters 1-2',\r\n 'summary_id': 'chapters 1-2',\r\n 'content': None,\r\n 'summary': '{\"name\": \"Chapters 1-2\", \"url\": \"https://web.archive.org/web/20201101053205/https://www.cliffsnotes.com/literature/l/the-last-of-the-mohicans/summary-and-analysis/chapters-12\", \"summary\": \"Before any characters appear, the time and geography are made clear. Though it is the last war that England and France waged for a country that neither would retain, the wilderness between the forces still has to be...\r\n```\r\n\r\nI'm forcing the refresh of the preview. ", "The preview appears as expected once the refresh forced.", "Thank you @albertvillanova 🤗 !" ]
"2022-07-06T10:38:16Z"
"2022-07-06T13:25:28Z"
"2022-07-06T11:58:06Z"
MEMBER
null
null
null
### Link https://huggingface.co/datasets/kmfoda/booksum ### Description A [user of AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/9) discovered this dataset cannot be streamed due to: ``` Status code: 400 Exception: ClientResponseError Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/kmfoda/booksum/resolve/47953f583d6967f086cb16a2f4d2346e9834024d/test.csv') ``` I'm not sure why it says "Unauthorized" since it's just a bunch of CSV files in a repo ### Owner No
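For reference, the failing access can be reproduced on the client side by streaming the dataset directly (a sketch; the `test` split is assumed from the URL in the error):

```python
from datasets import load_dataset

# Streaming resolves the CSV files lazily; if a file cannot be fetched,
# this raises an error similar to the one reported above.
ds = load_dataset("kmfoda/booksum", split="test", streaming=True)
print(next(iter(ds)))
```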
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4641/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4641/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/874/comments
https://api.github.com/repos/huggingface/datasets/issues/874/events
https://github.com/huggingface/datasets/issues/874
748,193,140
MDU6SXNzdWU3NDgxOTMxNDA=
874
trec dataset unavailable
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[]
closed
false
null
[]
null
[ "This was fixed in #740 \r\nCould you try to update `datasets` and try again ?", "This has been fixed in datasets 1.1.3" ]
"2020-11-22T08:09:36Z"
"2020-11-27T13:56:42Z"
"2020-11-27T13:56:42Z"
CONTRIBUTOR
null
null
null
Hi, when I try to load the trec dataset I get the errors below; thanks for your help. `datasets.load_dataset("trec", split="train")` ``` File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ```
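Per the resolution in the comments (fixed in `datasets` 1.1.3), upgrading the library and retrying should resolve the error (a sketch):

```python
# pip install -U "datasets>=1.1.3"
import datasets

print(datasets.__version__)  # expect >= 1.1.3
trec = datasets.load_dataset("trec", split="train")
print(trec[0])
```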
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/874/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/874/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4333/comments
https://api.github.com/repos/huggingface/datasets/issues/4333/events
https://github.com/huggingface/datasets/pull/4333
1,234,038,705
PR_kwDODunzps43uSuj
4,333
Adding eval metadata for Banking 77
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
[]
closed
false
null
[]
null
[ "@lhoestq , Circle CI is giving me an error, saying that ['extended'] is a key that shouldn't be in the dataset metadata, but it was there before my modification (so I don't want to remove it)" ]
"2022-05-12T14:05:05Z"
"2022-05-12T21:03:32Z"
"2022-05-12T21:03:31Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4333.diff", "html_url": "https://github.com/huggingface/datasets/pull/4333", "merged_at": "2022-05-12T21:03:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/4333.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4333" }
Adding eval metadata for Banking 77
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4333/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4333/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2116/comments
https://api.github.com/repos/huggingface/datasets/issues/2116/events
https://github.com/huggingface/datasets/issues/2116
841,481,292
MDU6SXNzdWU4NDE0ODEyOTI=
2,116
Creating custom dataset results in error while calling the map() function
{ "avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4", "events_url": "https://api.github.com/users/GeetDsa/events{/privacy}", "followers_url": "https://api.github.com/users/GeetDsa/followers", "following_url": "https://api.github.com/users/GeetDsa/following{/other_user}", "gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/GeetDsa", "id": 13940397, "login": "GeetDsa", "node_id": "MDQ6VXNlcjEzOTQwMzk3", "organizations_url": "https://api.github.com/users/GeetDsa/orgs", "received_events_url": "https://api.github.com/users/GeetDsa/received_events", "repos_url": "https://api.github.com/users/GeetDsa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions", "type": "User", "url": "https://api.github.com/users/GeetDsa" }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthe `_data` attribute is missing due to `MyDataset.__init__` not calling the parent `__init__`. However, I don't think it's a good idea to subclass the `datasets.Dataset` class (e.g. it's kind of dangerous to override `datasets.Dataset.__getitem__`). Instead, it's better to follow the \"association over inheritance\" approach with a simple wrapper class that delegates calls to a wrapped `Dataset` (map, etc.). Btw, the library offers the `datasets.Dataset.from_pandas` class method to directly create a `datasets.Dataset` from the dataframe." ]
"2021-03-26T00:37:46Z"
"2021-03-31T14:30:32Z"
"2021-03-31T14:30:32Z"
NONE
null
null
null
Calling `map()` from the `datasets` library results in an error when defining a custom dataset. Reproducible example (`tokenizer` and the `train` dataframe are assumed to be defined beforehand): ``` import datasets class MyDataset(datasets.Dataset): def __init__(self, sentences): "Initialization" self.samples = sentences def __len__(self): "Denotes the total number of samples" return len(self.samples) def __getitem__(self, index): "Generates one sample of data" # Select sample # Load data and get label samples = self.samples[index] return samples def preprocess_function_train(examples): inputs = examples labels = [example+tokenizer.eos_token for example in examples ] inputs = tokenizer(inputs, max_length=30, padding=True, truncation=True) labels = tokenizer(labels, max_length=30, padding=True, truncation=True) model_inputs = inputs model_inputs["labels"] = labels["input_ids"] print("about to return") return model_inputs ##train["sentence"] is dataframe column train_dataset = MyDataset(train['sentence'].values.tolist()) train_dataset = train_dataset.map( preprocess_function_train, batched = True, batch_size=32 ) ``` Stack trace of error: ``` Traceback (most recent call last): File "dir/train_generate.py", line 362, in <module> main() File "dir/train_generate.py", line 245, in main train_dataset = train_dataset.map( File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1244, in map return self._map_single( File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 149, in wrapper unformatted_columns = set(self.column_names) - set(self._format_columns or []) File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 526, in column_names return self._data.column_names AttributeError: 'MyDataset' object has no attribute '_data' ```
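As the discussion notes, the `_data` attribute is missing because `MyDataset.__init__` never calls the parent `__init__`; the safer patterns are building the dataset from the dataframe directly or wrapping a `Dataset` via composition. A minimal sketch of both:

```python
import datasets
import pandas as pd

train = pd.DataFrame({"sentence": ["hello world", "foo bar"]})

# Option 1: build a Dataset straight from the dataframe.
train_dataset = datasets.Dataset.from_pandas(train)

# Option 2: composition instead of inheritance, delegating calls to a
# wrapped Dataset rather than overriding its internals.
class MyDataset:
    def __init__(self, dataset: datasets.Dataset):
        self.dataset = dataset

    def map(self, *args, **kwargs):
        return self.dataset.map(*args, **kwargs)

wrapped = MyDataset(datasets.Dataset.from_pandas(train))
```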
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2116/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2116/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4281/comments
https://api.github.com/repos/huggingface/datasets/issues/4281/events
https://github.com/huggingface/datasets/pull/4281
1,225,556,939
PR_kwDODunzps43TNBm
4,281
Remove a copy-paste sentence in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The non-passing tests have nothing to do with this PR." ]
"2022-05-04T15:41:55Z"
"2022-05-06T08:38:03Z"
"2022-05-04T18:33:16Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4281.diff", "html_url": "https://github.com/huggingface/datasets/pull/4281", "merged_at": "2022-05-04T18:33:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/4281.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4281" }
Remove the following copy-paste sentence from dataset cards: ``` We show detailed information for up to 5 configurations of the dataset. ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4281/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4281/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3066
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3066/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3066/comments
https://api.github.com/repos/huggingface/datasets/issues/3066/events
https://github.com/huggingface/datasets/pull/3066
1,024,005,311
PR_kwDODunzps4tFObl
3,066
Add iter_archive
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-10-12T16:17:16Z"
"2022-09-21T14:10:10Z"
"2021-10-18T09:12:46Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3066.diff", "html_url": "https://github.com/huggingface/datasets/pull/3066", "merged_at": "2021-10-18T09:12:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/3066.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3066" }
Added the `iter_archive` method for the StreamingDownloadManager. It was already implemented in the regular DownloadManager. Now it can be used to stream from TAR archives as mentioned in https://github.com/huggingface/datasets/issues/2829 I also updated the `food101` dataset as an example. Any image/audio dataset using TAR archives can be updated to use `iter_archive` in order to be streamable :) cc @severo Fix #2829.
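For reference, a typical `_generate_examples` sketch built on `iter_archive` (the archive layout and the label extraction are illustrative, not the actual food101 code):

```python
import os

def _generate_examples(self, archive_iterator):
    # archive_iterator comes from dl_manager.iter_archive(archive_path); it
    # yields (path_inside_archive, file_object) pairs in sequential order,
    # which also works when the TAR archive is being streamed.
    for key, (path, f) in enumerate(archive_iterator):
        if path.endswith(".jpg"):
            label = os.path.basename(os.path.dirname(path))  # folder name as label
            yield key, {"image": {"path": path, "bytes": f.read()}, "label": label}
```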
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3066/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3066/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/498/comments
https://api.github.com/repos/huggingface/datasets/issues/498/events
https://github.com/huggingface/datasets/pull/498
677,597,479
MDExOlB1bGxSZXF1ZXN0NDY2Njg5NTcy
498
dont use beam fs to save info for local cache dir
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-08-12T11:00:00Z"
"2020-08-14T13:17:21Z"
"2020-08-14T13:17:20Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/498.diff", "html_url": "https://github.com/huggingface/datasets/pull/498", "merged_at": "2020-08-14T13:17:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/498.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/498" }
If the cache dir is local, then we shouldn't use Beam's filesystem to save the dataset info. Fix #490
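A sketch of the dispatch this implies: treat paths without a remote URI scheme as local and write them with plain Python I/O (the helper names are hypothetical):

```python
def is_remote_path(path: str) -> bool:
    """Heuristic: Beam filesystems are addressed via URI schemes."""
    return path.startswith(("gs://", "hdfs://", "s3://"))

def save_dataset_info(cache_dir: str, payload: bytes) -> None:
    if is_remote_path(cache_dir):
        ...  # use apache_beam.io.filesystems.FileSystems for remote storage
    else:
        with open(f"{cache_dir}/dataset_info.json", "wb") as f:  # plain local I/O
            f.write(payload)
```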
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/498/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/498/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2132/comments
https://api.github.com/repos/huggingface/datasets/issues/2132/events
https://github.com/huggingface/datasets/issues/2132
843,142,822
MDU6SXNzdWU4NDMxNDI4MjI=
2,132
TydiQA dataset is mixed and is not split per language
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
[]
open
false
null
[]
null
[ "You can filter the languages this way:\r\n```python\r\ntydiqa_en = tydiqa_dataset.filter(lambda x: x[\"language\"] == \"english\")\r\n```\r\n\r\nOtherwise maybe we can have one configuration per language ?\r\nWhat do you think of this for example ?\r\n\r\n```python\r\nload_dataset(\"tydiqa\", \"primary_task.en\")\r\n```", "Hi\nthank you very much for the great response, this will be really wonderful\nto have one configuration per language, as one need the dataset in majority\nof case per language for cross-lingual evaluations.\nThis becomes also then more close to TFDS format, which is separated per\nlanguage https://www.tensorflow.org/datasets/catalog/tydi_qa which will be\nreally awesome to have.\nthanks\n\nOn Mon, Mar 29, 2021 at 6:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> You can filter the languages this way:\n>\n> tydiqa_en = tydiqa_dataset.filter(lambda x: x[\"language\"] == \"english\")\n>\n> Otherwise maybe we can have one configuration per language ?\n> What do you think of this for example ?\n>\n> load_dataset(\"tydiqa\", \"primary_task.en\")\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2132#issuecomment-809516799>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMXPW2PWSQ2RHG73O7TTGCY4LANCNFSM4Z7ER7IA>\n> .\n>\n", "@lhoestq I greatly appreciate any updates on this. thanks a lot" ]
"2021-03-29T08:56:21Z"
"2021-04-04T09:57:15Z"
null
NONE
null
null
null
Hi @lhoestq Currently TydiQA is mixed and users can only access the whole training set of all languages: https://www.tensorflow.org/datasets/catalog/tydi_qa To use this dataset, one needs to train/evaluate on each language separately, and having them mixed makes the dataset hard to use. It would be much more convenient for users to have it split per language, and I appreciate your help on this. Meanwhile, until it is hopefully split per language, I would greatly appreciate being told how I can preprocess the data and get it per language. Thanks a lot
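Until a per-language configuration exists, the workaround suggested in the discussion is to filter on the `language` column (a sketch; the `primary_task` config name is taken from the discussion):

```python
from datasets import load_dataset

tydiqa = load_dataset("tydiqa", "primary_task", split="train")
tydiqa_en = tydiqa.filter(lambda x: x["language"] == "english")
```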
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2132/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2132/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1158
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1158/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1158/comments
https://api.github.com/repos/huggingface/datasets/issues/1158/events
https://github.com/huggingface/datasets/pull/1158
757,658,926
MDExOlB1bGxSZXF1ZXN0NTMzMDAxMjM0
1,158
Add BBC Hindi NLI Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/33565881?v=4", "events_url": "https://api.github.com/users/avinsit123/events{/privacy}", "followers_url": "https://api.github.com/users/avinsit123/followers", "following_url": "https://api.github.com/users/avinsit123/following{/other_user}", "gists_url": "https://api.github.com/users/avinsit123/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/avinsit123", "id": 33565881, "login": "avinsit123", "node_id": "MDQ6VXNlcjMzNTY1ODgx", "organizations_url": "https://api.github.com/users/avinsit123/orgs", "received_events_url": "https://api.github.com/users/avinsit123/received_events", "repos_url": "https://api.github.com/users/avinsit123/repos", "site_admin": false, "starred_url": "https://api.github.com/users/avinsit123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avinsit123/subscriptions", "type": "User", "url": "https://api.github.com/users/avinsit123" }
[]
closed
false
null
[]
null
[ "Hi @avinsit123 !\r\nDid you manage to rename the dataset and apply the suggestion I mentioned for the data fields ?\r\nFeel free to ping me when you're ready for a review :) ", "Hi @avinsit123 ! Have you had a chance to take a look at my suggestions ?\r\nLet me know if you have questions or if I can help", "@lhoestq sorry I completely forgot about this pr. I will complete it ASAP.", "@lhoestq I have fixed the code to resolve all your comments. Pls do check. I also don't seem to know why the CI tests are failing as I ran all the tests in CONTRIBUTING.md on my local pc and they passed.", "@lhoestq thanks for ur patient review :) . I also wish to add similar 3 more NLI hindi datasets. Hope to do within this week.", "@lhoestq would this be merged to master?", "Yes of course ;)\r\nmerging now !" ]
"2020-12-05T11:25:34Z"
"2021-02-05T09:48:31Z"
"2021-02-05T09:48:31Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1158.diff", "html_url": "https://github.com/huggingface/datasets/pull/1158", "merged_at": "2021-02-05T09:48:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/1158.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1158" }
# Dataset Card for BBC Hindi NLI Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - Homepage : https://github.com/midas-research/hindi-nli-data - Paper : "https://www.aclweb.org/anthology/2020.aacl-main.71" - Point of Contact : https://github.com/midas-research/hindi-nli-data ### Dataset Summary - Dataset for Natural Language Inference in Hindi. The BBC Hindi Dataset consists of textual-entailment pairs. - Each row of the dataset is made up of 4 columns - Premise, Hypothesis, Label and Topic. - Context and Hypothesis are written in Hindi while Entailment_Label is in English. - Entailment_label is of 2 types - entailed and not-entailed. - The dataset can be used to train models for Natural Language Inference tasks in Hindi. [More Information Needed] ### Supported Tasks and Leaderboards - Natural Language Inference for Hindi ### Languages The dataset is in Hindi. ## Dataset Structure - Data is structured in TSV format. - Train and test sets are in separate files. ### Data Instances An example of 'train' looks as follows. ``` {'hypothesis': 'यह खबर की सूचना है|', 'label': 'entailed', 'premise': 'गोपनीयता की नीति', 'topic': '1'} ``` ### Data Fields - Each row contains 4 columns - Premise, Hypothesis, Label and Topic. ### Data Splits - Train : 15553 - Valid : 2581 - Test : 2593 ## Dataset Creation - We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available BBC Hindi news text classification datasets in Hindi and pose them as TE problems. - In this recasting process, we build template hypotheses for each class in the label taxonomy. - Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples. - For more information on the recasting process, refer to the paper "https://www.aclweb.org/anthology/2020.aacl-main.71" ### Source Data The source dataset for the recasting process is the BBC Hindi Headlines Dataset (https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1) #### Initial Data Collection and Normalization - The BBC Hindi News Classification Dataset contains 4,335 Hindi news headlines tagged across 14 categories: India, Pakistan, news, International, entertainment, sport, science, China, learning english, social, southasia, business, institutional, multimedia - We processed this dataset to combine two sets of relevant but low-prevalence classes. - Namely, we merged the samples from Pakistan, China, international, and southasia as one class called international. - Likewise, we also merged samples from news, business, social, learning english, and institutional as news.
- Lastly, we also removed the class multimedia because there were very few samples. #### Who are the source language producers? Please refer to this paper: "https://www.aclweb.org/anthology/2020.aacl-main.71" ### Annotations #### Annotation process The annotation process is described in the Dataset Creation section. #### Who are the annotators? Annotation is done automatically. ### Personal and Sensitive Information No personal or sensitive information is mentioned in the dataset. ## Considerations for Using the Data Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Discussion of Biases Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Other Known Limitations No other known limitations. ## Additional Information Please refer to this link: https://github.com/midas-research/hindi-nli-data ### Dataset Curators It is written in the repo https://github.com/avinsit123/hindi-nli-data that: - This corpus can be used freely for research purposes. - The paper listed below provides details of the creation and use of the corpus. If you use the corpus, then please cite the paper. - If interested in commercial use of the corpus, send an email to [email protected]. - If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications. - Rather than redistributing the corpus, please direct interested parties to this page. - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your data for natural language inference. - if interested in a collaborative research project. ### Licensing Information Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi). Please contact the authors for any information on the dataset. ### Citation Information ``` @inproceedings{uppal-etal-2020-two, title = "Two-Step Classification using Recasted Data for Low Resource Settings", author = "Uppal, Shagun and Gupta, Vivek and Swaminathan, Avinash and Zhang, Haimin and Mahata, Debanjan and Gosangi, Rakesh and Shah, Rajiv Ratn and Stent, Amanda", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.aacl-main.71", pages = "706--719", abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets.
Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.", } ```
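For readers who want to try the schema the card describes, here is a minimal loading sketch. The Hub identifier `bbc_hindi_nli` is an assumption on our part; the card above does not state the exact load name, so substitute the real one if it differs.

```python
from datasets import load_dataset

# "bbc_hindi_nli" is a hypothetical identifier -- the card does not give
# the dataset's exact Hub name.
ds = load_dataset("bbc_hindi_nli")

# Each row carries the four columns described in the card.
example = ds["train"][0]
print(example["premise"], example["hypothesis"], example["label"], example["topic"])
```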
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1158/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1158/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3236/comments
https://api.github.com/repos/huggingface/datasets/issues/3236/events
https://github.com/huggingface/datasets/issues/3236
1,048,026,358
I_kwDODunzps4-d5z2
3,236
Loading of datasets changed in #3110 returns no examples
{ "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eladsegal", "id": 13485709, "login": "eladsegal", "node_id": "MDQ6VXNlcjEzNDg1NzA5", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "repos_url": "https://api.github.com/users/eladsegal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "type": "User", "url": "https://api.github.com/users/eladsegal" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "Hi @eladsegal, thanks for reporting.\r\n\r\nI am sorry, but I can't reproduce the bug:\r\n```\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"qasper\")\r\nDownloading: 5.11kB [00:00, ?B/s]\r\nDownloading and preparing dataset qasper/qasper (download: 9.88 MiB, generated: 35.11 MiB, post-processed: Unknown size, total: 44.99 MiB) to .cache\\qasper\\qasper\\0.1.0\\b99154d2a15aa54bfc669f82b2eda715a2e342e81023d39613b0e2920fdb3ad8...\r\nDataset qasper downloaded and prepared to .cache\\qasper\\qasper\\0.1.0\\b99154d2a15aa54bfc669f82b2eda715a2e342e81023d39613b0e2920fdb3ad8. Subsequent calls will reuse this data.\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<?, ?it/s]\r\n\r\nIn [3]: ds\r\nOut[3]:\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'title', 'abstract', 'full_text', 'qas'],\r\n num_rows: 888\r\n })\r\n validation: Dataset({\r\n features: ['id', 'title', 'abstract', 'full_text', 'qas'],\r\n num_rows: 281\r\n })\r\n})\r\n``` \r\n\r\nThis makes me suspect that the origin of the problem might be the cache: I didn't have this dataset in my cache, although I guess you already had it, before the code change introduced by #3110.\r\n\r\n@lhoestq might it be possible that the code change introduced by #3110 makes \"inaccessible\" all previously cached TAR-based datasets?\r\n- Before the caching system downloaded and extracted the tar dataset\r\n- Now it only downloads the tar dataset (no extraction is done)", "I can't reproduce either in my environment (macos, python 3.7).\r\n\r\nIn your case it generates zero examples. This can only happen if the extraction of the TAR archive doesn't output the right filenames. Indeed if the `qasper` script can't find the right file to load, it's currently ignored and it returns zero examples. This case was not even considered when #3110 was developed since we considered the file names to be deterministic - and not depend on your environment.\r\n\r\nTherefore here is my hypothesis:\r\n- either the cache is corrupted somehow with an empty TAR archive\r\n- OR I suspect that the issue comes from python 3.8\r\n", "I just tried again on python 3.8 and I was able to reproduce the issue. Let me work on a fix", "Ok I found the issue. It's not related to python 3.8 in itself though. This issue happens because your local installation of `datasets` is outdated compared to the changes to datasets in #3110\r\n\r\nTo fix this you just have to pull the latest changes from `master` :)\r\n\r\nLet me know if that helps !\r\n\r\n--------------\r\n\r\nHere are more details about my investigation:\r\n\r\nIt's possible to reproduce this issue if you use `datasets<=1.15.1` or before b6469baa22c174b3906c631802a7016fedea6780 and if you load the dataset after revision b6469baa22c174b3906c631802a7016fedea6780. This is because `dl_manager.iter_archive` had issues at that time (and it was not used anywhere anyway).\r\n\r\nIn particular it was returning the absolute path to extracted files instead of the relative path of the file inside the archive. This was an issue because `dl_manager.iter_archive` isn't supposed to extract the TAR archive. 
Instead, it iterates over all the files inside the archive, without creating a directory with the extracted content.\r\n\r\nTherefore if you want to use the datasets on `master`, make sure that you have an up-to-date local installation of `datasets` as well, or you may face incompatibilities like this.", "Thanks!\r\nBut what about code that is already using older version of datasets? \r\nThe reason I encountered this issue was that suddenly one of my repos with version 1.12.1 started getting 0 examples.\r\nI handled it by adding `revision` to `load_dataset`, but I guess it would still be an issue for other users who doesn't know this.", "Hi, in 1.12.1 it uses the dataset scripts from that time, not the one on master.\r\n\r\nIt only uses the datasets from master if you installed `datasets` from source, or if the dataset isn't available in your local version (in this case it shows a warning and it loads from master).\r\n", "OK, I understand the issue a bit better now.\r\nI see I wasn't on 1.12.1, but on 1.12.1.dev0 and since it is a dev version it uses master.\r\nSo users that use an old dev version must specify revision or else they'll encounter this problem.\r\n\r\nBTW, when I opened the issue I installed the latest master version with\r\n```\r\npip install git+git://github.com/huggingface/datasets@master#egg=datasets\r\n```\r\nand also used `download_mode=\"force_redownload\"`, and it still returned 0 examples.\r\nNow I deleted all of the cache and ran the code again, and it worked.\r\nI'm not sure what exactly happened here, but looks like it was due to a mix of an unofficial version and its cache.\r\n\r\nThanks again!" ]
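For context, a minimal sketch of the `dl_manager.iter_archive` pattern discussed in these comments, assuming a hypothetical one-file-per-example layout: `iter_archive` yields `(path, file_object)` pairs for the members of a TAR archive without extracting it, and the paths are relative to the archive root.

```python
# Sketch of a dataset script's _generate_examples after #3110; `archive`
# is the iterable returned by dl_manager.iter_archive(archive_path).
def _generate_examples(self, archive):
    for path, f in archive:
        # `path` is the member's relative path *inside* the TAR archive,
        # not an absolute path to an extracted file on disk.
        if path.endswith(".json"):
            yield path, {"text": f.read().decode("utf-8")}
```

This is why a script written against the old (extracting) behavior can silently yield zero examples: its filename checks never match the new relative paths.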
"2021-11-08T23:29:46Z"
"2021-11-09T16:46:05Z"
"2021-11-09T16:45:47Z"
CONTRIBUTOR
null
null
null
## Describe the bug Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples: ```python DatasetDict({ train: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 0 }) validation: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 0 }) }) ``` ## Steps to reproduce the bug Load any of the datasets that were changed in https://github.com/huggingface/datasets/pull/3110: ```python from datasets import load_dataset load_dataset("qasper") # The problem only started with the commit of #3110 load_dataset("qasper", revision="b6469baa22c174b3906c631802a7016fedea6780") ``` ## Expected results ```python DatasetDict({ train: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 888 }) validation: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 281 }) }) ``` Which can be received when specifying revision of the commit before https://github.com/huggingface/datasets/pull/3110: ```python from datasets import load_dataset load_dataset("qasper", revision="acfe2abda1ca79f0ce5c1896aa83b4b78af76b7d") ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.2.dev0 (master) - Python version: 3.8.10 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3236/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3236/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3740
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3740/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3740/comments
https://api.github.com/repos/huggingface/datasets/issues/3740/events
https://github.com/huggingface/datasets/pull/3740
1,140,720,739
PR_kwDODunzps4y9XAP
3,740
Support streaming for pubmed
{ "avatar_url": "https://avatars.githubusercontent.com/u/77638579?v=4", "events_url": "https://api.github.com/users/abhi-mosaic/events{/privacy}", "followers_url": "https://api.github.com/users/abhi-mosaic/followers", "following_url": "https://api.github.com/users/abhi-mosaic/following{/other_user}", "gists_url": "https://api.github.com/users/abhi-mosaic/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abhi-mosaic", "id": 77638579, "login": "abhi-mosaic", "node_id": "MDQ6VXNlcjc3NjM4NTc5", "organizations_url": "https://api.github.com/users/abhi-mosaic/orgs", "received_events_url": "https://api.github.com/users/abhi-mosaic/received_events", "repos_url": "https://api.github.com/users/abhi-mosaic/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abhi-mosaic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhi-mosaic/subscriptions", "type": "User", "url": "https://api.github.com/users/abhi-mosaic" }
[]
closed
false
null
[]
null
[ "@albertvillanova just FYI, since you were so helpful with the previous pubmed issue :) ", "IIRC streaming from FTP is not fully tested yet, so I'm fine with switching to HTTPS for now, as long as the download speed/availability is great", "@albertvillanova Thanks for pointing me to the `ET` module replacement. It should look a lot cleaner now.\r\n\r\nUnfortunately I tried keeping the `ftp://` protocol but was seeing timeout errors? in streaming mode (below). I think the `https://` performance is not an issue, when I was profiling the `open(..) -> f.read() -> etree.fromstring(xml_str)` codepath, most of the time was spent in the XML parsing rather than the data download.\r\n\r\n\r\nError when using `ftp://`:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/implementations/ftp.py\", line 301, in _fetch_range\r\n self.fs.ftp.retrbinary(\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/ftplib.py\", line 430, in retrbinary\r\n callback(data)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/implementations/ftp.py\", line 293, in callback\r\n raise TransferDone\r\nfsspec.implementations.ftp.TransferDone\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"test_pubmed_streaming.py\", line 9, in <module>\r\n print (next(iter(pubmed_train_streaming)))\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/iterable_dataset.py\", line 365, in __iter__\r\n for key, example in self._iter():\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/iterable_dataset.py\", line 362, in _iter\r\n yield from ex_iterable\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/iterable_dataset.py\", line 79, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/Users/abhinav/.cache/huggingface/modules/datasets_modules/datasets/pubmed/af552ed918e2841e8427203530e3cfed3a8bc3213041d7853bea1ca67eec683d/pubmed.py\", line 362, in _generate_examples\r\n tree = ET.parse(filename)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/streaming.py\", line 65, in wrapper\r\n return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/utils/streaming_download_manager.py\", line 636, in xet_parse\r\n return ET.parse(f, parser=parser)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/xml/etree/ElementTree.py\", line 1202, in parse\r\n tree.parse(source, parser)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/xml/etree/ElementTree.py\", line 595, in parse\r\n self._root = parser._parse_whole(source)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/utils/streaming_download_manager.py\", line 293, in read_with_retries\r\n out = read(*args, **kwargs)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/gzip.py\", line 292, in read\r\n return self._buffer.read(size)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/_compression.py\", line 68, in 
readinto\r\n data = self.read(len(byte_view))\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/gzip.py\", line 479, in read\r\n if not self._read_gzip_header():\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/gzip.py\", line 422, in _read_gzip_header\r\n magic = self._fp.read(2)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/gzip.py\", line 96, in read\r\n self.file.read(size-self._length+read)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/spec.py\", line 1485, in read\r\n out = self.cache._fetch(self.loc, self.loc + length)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/caching.py\", line 153, in _fetch\r\n self.cache = self.fetcher(start, end) # new block replaces old\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/implementations/ftp.py\", line 311, in _fetch_range\r\n self.fs.ftp.getmultiline()\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/ftplib.py\", line 224, in getmultiline\r\n line = self.getline()\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/ftplib.py\", line 206, in getline\r\n line = self.file.readline(self.maxline + 1)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/socket.py\", line 669, in readinto\r\n return self._sock.recv_into(b)\r\nsocket.timeout: timed out\r\n```" ]
"2022-02-17T00:18:22Z"
"2022-02-18T14:42:13Z"
"2022-02-18T14:42:13Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3740.diff", "html_url": "https://github.com/huggingface/datasets/pull/3740", "merged_at": "2022-02-18T14:42:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/3740.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3740" }
This PR makes some minor changes to the `pubmed` dataset to allow for `streaming=True`. Fixes #3739. Basically, I followed the C4 dataset which works in streaming mode as an example, and made the following changes: * Change URL prefix from `ftp://` to `https://` * Explicitly `open` the filename and pass the XML contents to `etree.fromstring(xml_str)` The Github diff tool makes it look like the changes are larger than they are, sorry about that. I tested locally and the `pubmed` dataset now works in both normal and streaming modes. There is some overhead at the start of each shard in streaming mode as building the XML tree online is quite slow (each pubmed .xml.gz file is ~20MB), but the overhead gets amortized over all the samples in the shard. On my laptop with a single CPU worker I am able to stream at about ~600 samples/s.
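A rough sketch of the pattern the PR describes (read the gzipped XML shard through an opened file object and parse it in memory, rather than handing an `ftp://` URL to the parser); the function name and layout are illustrative, not the exact merged code:

```python
import gzip
import xml.etree.ElementTree as ET

def parse_pubmed_shard(path_or_file):
    # In the dataset script, `path_or_file` is whatever the streaming-aware
    # download manager hands back for an https:// URL; gzip.open accepts
    # either a path or a binary file object.
    with gzip.open(path_or_file, "rb") as f:
        xml_bytes = f.read()
    # Parsing the whole shard at once is the slow step mentioned above.
    return ET.fromstring(xml_bytes)
```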
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3740/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3740/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/916/comments
https://api.github.com/repos/huggingface/datasets/issues/916/events
https://github.com/huggingface/datasets/pull/916
753,376,643
MDExOlB1bGxSZXF1ZXN0NTI5NDY3MTkx
916
Add Swedish NER Corpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abhishekkrthakur", "id": 1183441, "login": "abhishekkrthakur", "node_id": "MDQ6VXNlcjExODM0NDE=", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "type": "User", "url": "https://api.github.com/users/abhishekkrthakur" }
[]
closed
false
null
[]
null
[ "Yes the use of configs is optional", "@abhishekkrthakur we want to keep track of the information that is and isn't in the dataset cards so we're asking everyone to use the full template :) If there is some information in there that you really can't find or don't feel qualified to add, you can just leave the `[More Information Needed]` text" ]
"2020-11-30T10:59:51Z"
"2020-12-02T03:10:50Z"
"2020-12-02T03:10:49Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/916.diff", "html_url": "https://github.com/huggingface/datasets/pull/916", "merged_at": "2020-12-02T03:10:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/916.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/916" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/916/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/916/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4049/comments
https://api.github.com/repos/huggingface/datasets/issues/4049/events
https://github.com/huggingface/datasets/pull/4049
1,183,832,893
PR_kwDODunzps41LSjv
4,049
Create metric card for the Code Eval metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "if possible, give relevant names to your Pull requests @sashavor (make it easier to scan the repo activity) Thanks!", "updating them now! thanks for the feedback @julien-c " ]
"2022-03-28T18:34:23Z"
"2022-03-29T13:38:12Z"
"2022-03-29T13:32:50Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4049.diff", "html_url": "https://github.com/huggingface/datasets/pull/4049", "merged_at": "2022-03-29T13:32:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/4049.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4049" }
Creating initial Code Eval metric card
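For reference, the metric this card documents can be exercised roughly as follows, based on its documented interface. Note the explicit opt-in flag: the metric executes model-generated code, which is unsafe on untrusted inputs.

```python
import os
from datasets import load_metric

# Executing untrusted generated code is unsafe, so the metric requires
# this environment flag before it will run anything.
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

code_eval = load_metric("code_eval")
test_cases = ["assert add(2, 3) == 5"]
candidates = [["def add(a, b): return a * b", "def add(a, b): return a + b"]]
pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2])
print(pass_at_k)  # e.g. {'pass@1': 0.5, 'pass@2': 1.0}
```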
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4049/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4049/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3654/comments
https://api.github.com/repos/huggingface/datasets/issues/3654/events
https://github.com/huggingface/datasets/pull/3654
1,119,717,475
PR_kwDODunzps4x2kiX
3,654
Better TQDM output
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "@lhoestq I've created a notebook for you to see the difference: https://colab.research.google.com/drive/1by3EqnoKvC2p-yKW4lPDGOFOZHyGVyeQ?usp=sharing.\r\n\r\nFeel free to suggest better descriptions for the progress bars. \r\n\r\nIf everything looks good, think we can merge." ]
"2022-01-31T17:22:43Z"
"2022-02-03T15:55:34Z"
"2022-02-03T15:55:33Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3654.diff", "html_url": "https://github.com/huggingface/datasets/pull/3654", "merged_at": "2022-02-03T15:55:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/3654.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3654" }
This PR does the following: * if `dataset_infos.json` exists for a dataset, uses `num_examples` to print the total number of examples that needs to be generated (in `builder.py`) * fixes `tqdm` + multiprocessing in Jupyter Notebook/Colab (the issue stems from this commit in the `tqdm` repo: https://github.com/tqdm/tqdm/commit/f7722edecc3010cb35cc1c923ac4850a76336f82) * adds the missing `drop_last_batch` and `with_ranks` params to `DatasetDict.map` * correctly computes the number of iterations in `map` and the CSV/JSON loader when `batched=True` to fix `tqdm` progress bars * removes the `bool(logging.get_verbosity() == logging.NOTSET)` (or simplifies `bool(logging.get_verbosity() == logging.NOTSET) or not utils.is_progress_bar_enabled()` to `not utils.is_progress_bar_enabled()`) condition and uses `utils.is_progress_bar_enabled` to check if `tqdm` output is enabled (this comment from @stas00 explains why the `bool(logging.get_verbosity() == logging.NOTSET)` check is problematic: https://github.com/huggingface/transformers/issues/14889#issue-1087318463) Fix #2630
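As a usage note, after this change tqdm output is governed by the dedicated progress-bar utilities rather than the logging verbosity alone. A sketch follows; the exact import location of these helpers has moved between `datasets` versions, so treat the path as an assumption:

```python
import datasets

# Toggle tqdm output explicitly instead of relying on logging verbosity.
datasets.disable_progress_bar()   # no bars from map()/load_dataset()
# ... quiet work here ...
datasets.enable_progress_bar()    # restore progress bars
```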
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3654/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3654/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1660/comments
https://api.github.com/repos/huggingface/datasets/issues/1660/events
https://github.com/huggingface/datasets/pull/1660
775,831,423
MDExOlB1bGxSZXF1ZXN0NTQ2NDM2MDg1
1,660
add dataset info
{ "avatar_url": "https://avatars.githubusercontent.com/u/24206326?v=4", "events_url": "https://api.github.com/users/harshalmittal4/events{/privacy}", "followers_url": "https://api.github.com/users/harshalmittal4/followers", "following_url": "https://api.github.com/users/harshalmittal4/following{/other_user}", "gists_url": "https://api.github.com/users/harshalmittal4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/harshalmittal4", "id": 24206326, "login": "harshalmittal4", "node_id": "MDQ6VXNlcjI0MjA2MzI2", "organizations_url": "https://api.github.com/users/harshalmittal4/orgs", "received_events_url": "https://api.github.com/users/harshalmittal4/received_events", "repos_url": "https://api.github.com/users/harshalmittal4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/harshalmittal4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harshalmittal4/subscriptions", "type": "User", "url": "https://api.github.com/users/harshalmittal4" }
[]
closed
false
null
[]
null
[]
"2020-12-29T10:58:19Z"
"2020-12-30T17:04:30Z"
"2020-12-30T17:04:30Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1660.diff", "html_url": "https://github.com/huggingface/datasets/pull/1660", "merged_at": "2020-12-30T17:04:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/1660.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1660" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1660/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1660/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3068
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3068/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3068/comments
https://api.github.com/repos/huggingface/datasets/issues/3068/events
https://github.com/huggingface/datasets/pull/3068
1,024,681,264
PR_kwDODunzps4tHhOC
3,068
feat: increase streaming retry config
{ "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/borisdayma", "id": 715491, "login": "borisdayma", "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "repos_url": "https://api.github.com/users/borisdayma/repos", "site_admin": false, "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "type": "User", "url": "https://api.github.com/users/borisdayma" }
[]
closed
false
null
[]
null
[ "@lhoestq I had 2 runs for more than 2 days each, continuously streaming (they were failing before with 3 retries at 1 sec interval).\r\n\r\nThey are running on TPU's (so great internet connection) and only had connection errors a few times each (3 & 4). Each time it worked after only 1 retry.\r\nThe reason for a higher number of retries is for local connections. It would allow for almost 2mn of a wifi/ethernet disconnection. In practice this should not happen very often.\r\n\r\nLet me know if you think it's too much." ]
"2021-10-13T02:00:50Z"
"2021-10-13T09:25:56Z"
"2021-10-13T09:25:54Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3068.diff", "html_url": "https://github.com/huggingface/datasets/pull/3068", "merged_at": "2021-10-13T09:25:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/3068.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3068" }
Increase streaming config parameters: * retry interval set to 5 seconds * max retries set to 20 (so 1mn 40s)
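For intuition, a simplified version of the retry loop these parameters drive: with the new defaults it tolerates roughly 20 retries at 5 s each, i.e. about 1 min 40 s of lost connectivity, before giving up. The constant names here only mirror the PR's config options, not the library's exact identifiers.

```python
import time

MAX_RETRIES = 20        # mirrors the new max-retries streaming default
RETRY_INTERVAL = 5.0    # mirrors the new retry interval, in seconds

def read_with_retries(read, *args, **kwargs):
    # Simplified sketch of the streaming read retry loop.
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return read(*args, **kwargs)
        except ConnectionError:
            if attempt == MAX_RETRIES:
                raise
            time.sleep(RETRY_INTERVAL)
```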
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3068/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3068/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5194
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5194/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5194/comments
https://api.github.com/repos/huggingface/datasets/issues/5194/events
https://github.com/huggingface/datasets/pull/5194
1,434,206,951
PR_kwDODunzps5CHPNY
5,194
Fix docs about dataset_info in YAML
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-11-03T07:10:23Z"
"2022-11-03T13:31:27Z"
"2022-11-03T13:29:21Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5194.diff", "html_url": "https://github.com/huggingface/datasets/pull/5194", "merged_at": "2022-11-03T13:29:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/5194.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5194" }
This PR fixes some misalignment in the docs after we transferred the dataset_info from `dataset_infos.json` to YAML in the dataset card: - #4926 Related to: - #5193
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5194/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5194/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/965/comments
https://api.github.com/repos/huggingface/datasets/issues/965/events
https://github.com/huggingface/datasets/pull/965
754,553,169
MDExOlB1bGxSZXF1ZXN0NTMwNDMzODQ2
965
Add CLINC150 Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4", "events_url": "https://api.github.com/users/sumanthd17/events{/privacy}", "followers_url": "https://api.github.com/users/sumanthd17/followers", "following_url": "https://api.github.com/users/sumanthd17/following{/other_user}", "gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sumanthd17", "id": 28291870, "login": "sumanthd17", "node_id": "MDQ6VXNlcjI4MjkxODcw", "organizations_url": "https://api.github.com/users/sumanthd17/orgs", "received_events_url": "https://api.github.com/users/sumanthd17/received_events", "repos_url": "https://api.github.com/users/sumanthd17/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions", "type": "User", "url": "https://api.github.com/users/sumanthd17" }
[]
closed
false
null
[]
null
[]
"2020-12-01T16:43:00Z"
"2020-12-01T16:51:16Z"
"2020-12-01T16:49:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/965.diff", "html_url": "https://github.com/huggingface/datasets/pull/965", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/965.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/965" }
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/965/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/965/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2780
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2780/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2780/comments
https://api.github.com/repos/huggingface/datasets/issues/2780/events
https://github.com/huggingface/datasets/pull/2780
964,794,764
MDExOlB1bGxSZXF1ZXN0NzA3MTk2NjA3
2,780
VIVOS dataset for Vietnamese ASR
{ "avatar_url": "https://avatars.githubusercontent.com/u/57580923?v=4", "events_url": "https://api.github.com/users/binh234/events{/privacy}", "followers_url": "https://api.github.com/users/binh234/followers", "following_url": "https://api.github.com/users/binh234/following{/other_user}", "gists_url": "https://api.github.com/users/binh234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/binh234", "id": 57580923, "login": "binh234", "node_id": "MDQ6VXNlcjU3NTgwOTIz", "organizations_url": "https://api.github.com/users/binh234/orgs", "received_events_url": "https://api.github.com/users/binh234/received_events", "repos_url": "https://api.github.com/users/binh234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/binh234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/binh234/subscriptions", "type": "User", "url": "https://api.github.com/users/binh234" }
[]
closed
false
null
[]
null
[]
"2021-08-10T09:47:36Z"
"2021-08-12T11:09:30Z"
"2021-08-12T11:09:30Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2780.diff", "html_url": "https://github.com/huggingface/datasets/pull/2780", "merged_at": "2021-08-12T11:09:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/2780.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2780" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2780/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2780/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3368
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3368/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3368/comments
https://api.github.com/repos/huggingface/datasets/issues/3368/events
https://github.com/huggingface/datasets/pull/3368
1,069,403,624
PR_kwDODunzps4vTObo
3,368
Fix dict source_datasets tagset validator
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-12-02T10:52:20Z"
"2021-12-02T15:48:38Z"
"2021-12-02T15:48:37Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3368.diff", "html_url": "https://github.com/huggingface/datasets/pull/3368", "merged_at": "2021-12-02T15:48:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/3368.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3368" }
Currently, the `source_datasets` tag validation does not support passing a dict with configuration keys. This PR: - Extends `tagset_validator` to support regex tags - Uses `tagset_validator` to validate dict `source_datasets`
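A hedged sketch of what such a validator can look like; this illustrates the approach, not the exact implementation merged here. Each reference tag is accepted either as a literal or as a regex, and dict values are flattened so per-config lists are validated too.

```python
import re

def tagset_validator(values, reference_tagset, name, url):
    # Accept either a flat list of tags or a dict mapping config names
    # to lists of tags (e.g. source_datasets per configuration).
    if isinstance(values, dict):
        values = [tag for tags in values.values() for tag in tags]
    for tag in values:
        if not any(tag == ref or re.fullmatch(ref, tag) for ref in reference_tagset):
            return [], f"{tag} is not a registered tag for '{name}', see {url}"
    return values, None
```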
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3368/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3368/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4170
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4170/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4170/comments
https://api.github.com/repos/huggingface/datasets/issues/4170/events
https://github.com/huggingface/datasets/pull/4170
1,204,413,620
PR_kwDODunzps42O2-L
4,170
to_tf_dataset rewrite
{ "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rocketknight1", "id": 12866554, "login": "Rocketknight1", "node_id": "MDQ6VXNlcjEyODY2NTU0", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "type": "User", "url": "https://api.github.com/users/Rocketknight1" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "[Magic is now banned](https://www.youtube.com/watch?v=WIn58XoY728#t=36s) by decree of @sgugger. This is honestly much cleaner, and the functionality will make much more sense in `transformers` anyway!", "@gante I renamed the default collator to `minimal_tf_collate_fn`!", "@lhoestq @sgugger @gante \r\n\r\nI think this should now be ready, it looks good in testing! I'll try a few more notebooks today and tomorrow to be sure before I merge. Key changes are:\r\n\r\n- No column autodetection magic (will make a separate PR to add this as a `transformers` function)\r\n- Drops non-numerical features automatically (this is more of a 'DataLoader' method, we'll have a separate method to expose 'raw' datasets to `tf.data`)\r\n- Better autodetection of numerical features.\r\n- Shouldn't randomly crash mid-function :skull: \r\n\r\nWe definitely have some questions still to resolve about how to handle making a 'DataLoader' dataset versus a 'raw' dataset - see [the Notion doc](https://www.notion.so/huggingface2/Splitting-to_tf_dataset-c2e0773c4bec484384064b30ed634383) if you're interested. Still, since this PR is just fixes/improvements to an existing method which never supported non-numerical features anyway, we can merge it before we've resolved those issues, and then think about how to name and split things afterwards.", "P.S. I'll take out the region comments at the end before I merge, I promise! They're just helpful while I'm editing it", "+1 for the tests\r\n\r\n> Drops non-numerical features automatically\r\n\r\nCan you give more details on how this work and the rationale as well ? This is not explained in the docs\r\n\r\nAlso why are you adding `error_on_missing` and `auto_fix_label_names ` ? The rationale is not clear to me. In particular I think it is sensible enough to expect users to not ask columns that don't exist, and to rename a label column when required.", "@lhoestq I rewrote those parts - they were causing some other issues too! `error_on_missing` and `auto_fix_label_names` have been removed. The new logic is to simply drop (before batch collation) all columns the user doesn't ask for, but not to raise errors if the user asked for columns not in the dataset, as they may be added by the collator. Hopefully this cleans it up and matches the documentation better!", "@lhoestq New tests are now in!", "Seeing some other random tests failing that don't look to be associated with this PR.", "@lhoestq I can't figure out these test failures! They don't seem related to this PR at all, but I rebased to the latest version and they keep happening, even though they're not visible on master.", "Thanks for the ping, will take a look tomorrow :)\r\n\r\nMaybe the rebase didn't go well for the code recently merged about label alignment from https://github.com/huggingface/datasets/pull/4277 ?", "It's very strange! The rebase looks fine to me. I might try to move my changes to a new branch from `master` and see if I can figure out which change causes this problem to appear.", "@lhoestq Got it! It was caused by a name collision - I was importing `typing.Sequence`, but the code also needed `features.Sequence`. The tests from that PR were expecting the latter but got the former, and then crashed.", "@lhoestq Thanks! Also, when you're ready, don't merge it immediately! 
I'd like to do a quick round of manual testing with the very final build once you're happy to make sure it still works in our notebooks and examples.", "@lhoestq Tests look good to me, merging now!" ]
"2022-04-14T11:30:58Z"
"2022-06-06T14:31:12Z"
"2022-06-06T14:22:09Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4170.diff", "html_url": "https://github.com/huggingface/datasets/pull/4170", "merged_at": "2022-06-06T14:22:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/4170.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4170" }
This PR rewrites almost all of `to_tf_dataset()`, which makes it kind of hard to list all the changes, but the most critical ones are: - Much better stability and no more dropping unexpected column names (Sorry @NielsRogge) - Doesn't clobber custom transforms on the data (Sorry @NielsRogge again) - Much better handling of the situation when the `collate_fn` adds columns that aren't in the dataset. - Better inference of shapes and data types - Lots of hacky special-casing code removed - Can return string columns (as `tf.String`) - Most arguments have default values, calling the method should be much simpler - ~~Can accept a `model` argument and only return columns that are valid inputs to that model~~ - Drops the `dummy_labels` argument - this was a workaround for Keras issues that have been resolved by changes in `transformers`. Also remove it from tests and the Overview notebook. I still have a couple of TODOs remaining and some testing to do, so don't merge yet, but it should be mostly ready for review at this point!
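As an illustration of the post-rewrite call, here is a minimal sketch assuming a tokenized GLUE/MRPC dataset and a TF-returning collator from `transformers`; defaults cover most other arguments.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DefaultDataCollator

ds = load_dataset("glue", "mrpc", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
ds = ds.map(lambda x: tokenizer(x["sentence1"], x["sentence2"], truncation=True))

tf_ds = ds.to_tf_dataset(
    columns=["input_ids", "attention_mask", "token_type_ids"],
    label_cols=["label"],
    batch_size=16,
    shuffle=True,
    collate_fn=DefaultDataCollator(return_tensors="tf"),
)
```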
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4170/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4170/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3764/comments
https://api.github.com/repos/huggingface/datasets/issues/3764/events
https://github.com/huggingface/datasets/issues/3764
1,145,107,050
I_kwDODunzps5EQPJq
3,764
!
{ "avatar_url": "https://avatars.githubusercontent.com/u/77545307?v=4", "events_url": "https://api.github.com/users/LesiaFedorenko/events{/privacy}", "followers_url": "https://api.github.com/users/LesiaFedorenko/followers", "following_url": "https://api.github.com/users/LesiaFedorenko/following{/other_user}", "gists_url": "https://api.github.com/users/LesiaFedorenko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LesiaFedorenko", "id": 77545307, "login": "LesiaFedorenko", "node_id": "MDQ6VXNlcjc3NTQ1MzA3", "organizations_url": "https://api.github.com/users/LesiaFedorenko/orgs", "received_events_url": "https://api.github.com/users/LesiaFedorenko/received_events", "repos_url": "https://api.github.com/users/LesiaFedorenko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LesiaFedorenko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LesiaFedorenko/subscriptions", "type": "User", "url": "https://api.github.com/users/LesiaFedorenko" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
null
[]
"2022-02-20T19:05:43Z"
"2022-02-21T08:55:58Z"
"2022-02-21T08:55:58Z"
NONE
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3764/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3764/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3854
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3854/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3854/comments
https://api.github.com/repos/huggingface/datasets/issues/3854/events
https://github.com/huggingface/datasets/issues/3854
1,162,434,199
I_kwDODunzps5FSVaX
3,854
load only England English dataset from common voice english dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/36677001?v=4", "events_url": "https://api.github.com/users/amanjaiswal777/events{/privacy}", "followers_url": "https://api.github.com/users/amanjaiswal777/followers", "following_url": "https://api.github.com/users/amanjaiswal777/following{/other_user}", "gists_url": "https://api.github.com/users/amanjaiswal777/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amanjaiswal777", "id": 36677001, "login": "amanjaiswal777", "node_id": "MDQ6VXNlcjM2Njc3MDAx", "organizations_url": "https://api.github.com/users/amanjaiswal777/orgs", "received_events_url": "https://api.github.com/users/amanjaiswal777/received_events", "repos_url": "https://api.github.com/users/amanjaiswal777/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amanjaiswal777/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amanjaiswal777/subscriptions", "type": "User", "url": "https://api.github.com/users/amanjaiswal777" }
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @amanjaiswal777,\r\n\r\nFirst note that the dataset you are trying to load is deprecated: it was the Common Voice dataset release as of Dec 2020.\r\n\r\nCurrently, Common Voice dataset releases are directly hosted on the Hub, under the Mozilla Foundation organization: https://huggingface.co/mozilla-foundation\r\n\r\nFor example, to get their latest Common Voice relase (8.0):\r\n- Go to the dataset page and request access permission (Mozilla Foundation requires this for people willing to use their datasets): https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0\r\n- Looking at the dataset card, you can check that data instances have, among other fields, the ones you are interested in: \"accent\", \"age\",... \r\n- Then you can load their \"en\" language dataset as usual, besides passing your authentication token (more info on auth token here: https://huggingface.co/docs/hub/security)\r\n ```python\r\n from datasets import load_dataset\r\n ds_en = load_dataset(\"mozilla-foundation/common_voice_8_0\", \"en\", use_auth_token=True)\r\n ```\r\n- Finally, you can filter only the data instances you are interested in (more info on `filter` here: https://huggingface.co/docs/datasets/process#select-and-filter):\r\n ```python\r\n ds_england_en = ds_en.filter(lambda item: item[\"accent\"] == \"England English\")\r\n ```\r\n\r\nFeel free to reopen this issue if you need further assistance." ]
"2022-03-08T09:40:52Z"
"2022-03-09T08:13:33Z"
"2022-03-09T08:13:33Z"
NONE
null
null
null
training_data = load_dataset("common_voice", "en",split='train[:250]+validation[:250]') testing_data = load_dataset("common_voice", "en", split="test[:200]") I'm trying to load only 8% of the English common voice data with accent == "England English." Can somebody assist me with this? **Typical Voice Accent Proportions:** - 24% United States English - 8% England English - 5% India and South Asia (India, Pakistan, Sri Lanka) - 3% Australian English - 3% Canadian English - 2% Scottish English - 1% Irish English - 1% Southern African (South Africa, Zimbabwe, Namibia) - 1% New Zealand English Can we replicate this for Age as well? **Age proportions of the common voice:-** - 24% 19 - 29 - 14% 30 - 39 - 10% 40 - 49 - 6% < 19 - 4% 50 - 59 - 4% 60 - 69 - 1% 70 – 79
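A hedged sketch extending the `filter` approach from the comment above to the age field as well. Common Voice stores age as string buckets (e.g. "teens", "twenties", "thirties"); the exact values should be checked against the actual data.

```python
from datasets import load_dataset

ds_en = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token=True)

# Filter on accent and (optionally) age; the age strings below are
# assumed bucket names -- verify them against the dataset card.
ds_subset = ds_en.filter(
    lambda ex: ex["accent"] == "England English" and ex["age"] in {"twenties", "thirties"}
)
```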
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3854/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3854/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5951/comments
https://api.github.com/repos/huggingface/datasets/issues/5951/events
https://github.com/huggingface/datasets/issues/5951
1,756,363,546
I_kwDODunzps5or_sa
5,951
What is the Right way to use discofuse dataset??
{ "avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4", "events_url": "https://api.github.com/users/akesh1235/events{/privacy}", "followers_url": "https://api.github.com/users/akesh1235/followers", "following_url": "https://api.github.com/users/akesh1235/following{/other_user}", "gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akesh1235", "id": 125154243, "login": "akesh1235", "node_id": "U_kgDOB3Wzww", "organizations_url": "https://api.github.com/users/akesh1235/orgs", "received_events_url": "https://api.github.com/users/akesh1235/received_events", "repos_url": "https://api.github.com/users/akesh1235/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions", "type": "User", "url": "https://api.github.com/users/akesh1235" }
[]
closed
false
null
[]
null
[ "Thanks for opening https://huggingface.co/datasets/discofuse/discussions/3, let's continue the discussion over there if you don't mind", "I have posted there also sir, please check\r\n@lhoestq" ]
"2023-06-14T08:38:39Z"
"2023-06-14T13:25:06Z"
"2023-06-14T12:10:16Z"
NONE
null
null
null
[Click here for Dataset link](https://huggingface.co/datasets/discofuse/viewer/discofuse-wikipedia/train?row=6) **Below is my understanding of the right way to use it. Is it correct :question: :question:** The **columns/features from the `DiscoFuse` dataset** that will be the **input to the `encoder` and `decoder`** are: 1. **coherent_first_sentence** 2. **coherent_second_sentence** 3. **incoherent_first_sentence** 4. **incoherent_second_sentence** The **`encoder` will take these four columns as input and encode them into a sequence of hidden states. The `decoder` will then take these hidden states as input and decode them into a new sentence that fuses the two original sentences together.** The **discourse_type, connective_string, has_coref_type_pronoun, and has_coref_type_nominal columns will not be used as input to the encoder or decoder.** These columns provide additional information about the dataset, but they are not necessary for the task of sentence fusion. Please correct me if I am wrong; otherwise, if this understanding is right, how shall I implement this task practically?
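As a hedged illustration of how this task is usually framed (not an official recipe for this dataset): sentence fusion is typically cast as sequence-to-sequence, with the *incoherent* pair as the encoder input and the *coherent* (fused) text as the decoder target, rather than feeding all four columns to the encoder. A minimal sketch with an assumed T5 checkpoint; note that `coherent_second_sentence` may be empty when the fusion yields a single sentence:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch only: the checkpoint and max lengths are assumptions.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def preprocess(example):
    # Encoder sees the incoherent pair; decoder learns to emit the fused text.
    source = example["incoherent_first_sentence"] + " " + example["incoherent_second_sentence"]
    target = (example["coherent_first_sentence"] + " " + example["coherent_second_sentence"]).strip()
    model_inputs = tokenizer(source, truncation=True, max_length=256)
    model_inputs["labels"] = tokenizer(target, truncation=True, max_length=256)["input_ids"]
    return model_inputs
```

From there a standard seq2seq training loop (e.g. `Seq2SeqTrainer`) over the mapped dataset applies.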
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5951/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5951/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2075/comments
https://api.github.com/repos/huggingface/datasets/issues/2075/events
https://github.com/huggingface/datasets/issues/2075
834,301,246
MDU6SXNzdWU4MzQzMDEyNDY=
2,075
ConnectionError: Couldn't reach common_voice.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/6188893?v=4", "events_url": "https://api.github.com/users/LifaSun/events{/privacy}", "followers_url": "https://api.github.com/users/LifaSun/followers", "following_url": "https://api.github.com/users/LifaSun/following{/other_user}", "gists_url": "https://api.github.com/users/LifaSun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LifaSun", "id": 6188893, "login": "LifaSun", "node_id": "MDQ6VXNlcjYxODg4OTM=", "organizations_url": "https://api.github.com/users/LifaSun/orgs", "received_events_url": "https://api.github.com/users/LifaSun/received_events", "repos_url": "https://api.github.com/users/LifaSun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LifaSun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LifaSun/subscriptions", "type": "User", "url": "https://api.github.com/users/LifaSun" }
[]
closed
false
null
[]
null
[ "Hi @LifaSun, thanks for reporting this issue.\r\n\r\nSometimes, GitHub has some connectivity problems. Could you confirm that the problem persists?", "@albertvillanova Thanks! It works well now. " ]
"2021-03-18T01:19:06Z"
"2021-03-20T10:29:41Z"
"2021-03-20T10:29:41Z"
NONE
null
null
null
When I run: `from datasets import load_dataset, load_metric` `common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation")` `common_voice_test = load_dataset("common_voice", "zh-CN", split="test")` Got: `ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/common_voice/common_voice.py` Version: 1.4.1 Thanks! @lhoestq @LysandreJik @thomwolf
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2075/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2075/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/236/comments
https://api.github.com/repos/huggingface/datasets/issues/236/events
https://github.com/huggingface/datasets/pull/236
631,099,875
MDExOlB1bGxSZXF1ZXN0NDI4MDUwNzI4
236
CompGuessWhat?! dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4", "events_url": "https://api.github.com/users/aleSuglia/events{/privacy}", "followers_url": "https://api.github.com/users/aleSuglia/followers", "following_url": "https://api.github.com/users/aleSuglia/following{/other_user}", "gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aleSuglia", "id": 1479733, "login": "aleSuglia", "node_id": "MDQ6VXNlcjE0Nzk3MzM=", "organizations_url": "https://api.github.com/users/aleSuglia/orgs", "received_events_url": "https://api.github.com/users/aleSuglia/received_events", "repos_url": "https://api.github.com/users/aleSuglia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions", "type": "User", "url": "https://api.github.com/users/aleSuglia" }
[]
closed
false
null
[]
null
[ "Hi @aleSuglia, thanks for this great PR. Indeed you can have both datasets in one file. You need to add a config class which will allows you to specify the different subdataset names and then you will be able to load them as follow.\r\nnlp.load_dataset(\"compguesswhat\", \"compguesswhat-gameplay\") \r\nnlp.load_dataset(\"compguesswhat\", \"compguesswhat-zs-gameplay\").\r\n\r\nMaybe you can refer to this file https://github.com/huggingface/nlp/blob/master/datasets/discofuse/discofuse.py", "@mariamabarham Thanks for your suggestions. I've followed your advice and integrated the additional dataset using another `DatasetConfig` class. It looks like all tests passed. What do you think?", "great @aleSuglia. I requested an additional review from @thomwolf @lhoestq and @patrickvonplaten @jplu . You can merge it after an approval from one of them", "Looks great! Thanks for adding the dummy data :-) ", "Not sure whether it's the most appropriate place but I'll ask another design question. For Vision+Language dataset, is very common to have visual features associated with each example. At the moment, for instance, I'm only integrating the image identifier so that people can later on lookup the image features during training. Do you recommend this approach or do you think it should be done in a different way?\r\n\r\nThank you for your answer!", "Hi @aleSuglia your remark on the visual features is a good point.\r\n\r\nWe haven't started to dive deeply into how CV datasets are usually structured (cc @sgugger)\r\n\r\nDo you have a pointer to how visual features are currently loaded and accessed by people using GuessCompWhat? ", "@thomwolf As far as I know, people using Language+Vision tasks they typically have their reference dataset (either in JSON or JSONL format) and for each example in it they have an identifier that specifies the reference image. Currently, images are represented by either pooling-based visual features (average pooling of ResNet or VGGNet features, see [DeVries et.al, 2017](https://arxiv.org/abs/1611.08481), [Shekhar et.al, 2019](https://www.aclweb.org/anthology/N19-1265.pdf)) where you have a single vector for every image. Another option is to use a set of feature maps for every image extracted from a specific layer of a CNN (see [Xu et.al, 2015](https://arxiv.org/abs/1502.03044)). A more common and recent option, especially with large-scale multi-modal transformers [Li et. al, 2019](https://arxiv.org/abs/1908.03557), is to use FastRCNN features. \r\n\r\nFor all these types of features, people use either HD5F or NumPy compressed representations. In my personal projects, I've ditched altogether HD5F because it doesn't have out-of-the-box support for multi-processing (unless you have an ad-hoc installation of it). I've been successfully using a NumPy compressed file for each image so that I can store any sort of information in it (see [numpy.savez](https://numpy.org/doc/stable/reference/generated/numpy.savez.html)). However, I believe that Apache Arrow would be a really good fit for this type of features. \r\n\r\nLooking forward to hearing your thoughts about it!", "Awesome work on this one thanks :)", "@thomwolf I was thinking that I should create an issue regarding the visual features so that we can keep track of it for future work. I think it would be great to have it in NLP and I'll be happy to contribute. Let me know what you think :) " ]
"2020-06-04T19:45:50Z"
"2020-06-11T09:43:42Z"
"2020-06-11T07:45:21Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/236.diff", "html_url": "https://github.com/huggingface/datasets/pull/236", "merged_at": "2020-06-11T07:45:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/236.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/236" }
Hello, Thanks for the amazing library that you put together. I'm Alessandro Suglia, the first author of CompGuessWhat?!, a recently released dataset for grounded language learning accepted to ACL 2020 ([https://compguesswhat.github.io](https://compguesswhat.github.io)). This pull-request adds the CompGuessWhat?! splits that have been extracted from the original dataset. This is only part of our evaluation framework because there is also an additional split of the dataset that has a completely different set of games. I didn't integrate it yet because I didn't know what would be the best practice in this case. Let me clarify the scenario. In our paper, we have a main dataset (let's call it `compguesswhat-gameplay`) and a zero-shot dataset (let's call it `compguesswhat-zs-gameplay`). In the current code of the pull-request, I have only integrated `compguesswhat-gameplay`. I was thinking that it would be nice to have the `compguesswhat-zs-gameplay` in the same dataset class by simply specifying some particular option to the `nlp.load_dataset()` factory. For instance: ```python cgw = nlp.load_dataset("compguesswhat") cgw_zs = nlp.load_dataset("compguesswhat", zero_shot=True) ``` The other option would be to have a separate dataset class. Any preferences?
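For reference, a hedged sketch of the config-based design suggested in the review comments above; the class and config names are illustrative, and the builder's required `_info`, `_split_generators`, and `_generate_examples` methods are omitted:

```python
import nlp

class CompguesswhatConfig(nlp.BuilderConfig):
    """One BuilderConfig per sub-dataset, selected by name at load time."""

class Compguesswhat(nlp.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        CompguesswhatConfig(name="compguesswhat-gameplay",
                            description="Main gameplay data."),
        CompguesswhatConfig(name="compguesswhat-zs-gameplay",
                            description="Zero-shot gameplay data."),
    ]
    # _info, _split_generators and _generate_examples omitted for brevity.
```

This is what enables calls such as `nlp.load_dataset("compguesswhat", "compguesswhat-zs-gameplay")` shown in the review comment.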
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/236/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/236/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5121/comments
https://api.github.com/repos/huggingface/datasets/issues/5121/events
https://github.com/huggingface/datasets/pull/5121
1,410,681,067
PR_kwDODunzps5A4gUB
5,121
Bugfix ignore function when creating new_fingerprint for caching
{ "avatar_url": "https://avatars.githubusercontent.com/u/34204311?v=4", "events_url": "https://api.github.com/users/Salehbigdeli/events{/privacy}", "followers_url": "https://api.github.com/users/Salehbigdeli/followers", "following_url": "https://api.github.com/users/Salehbigdeli/following{/other_user}", "gists_url": "https://api.github.com/users/Salehbigdeli/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Salehbigdeli", "id": 34204311, "login": "Salehbigdeli", "node_id": "MDQ6VXNlcjM0MjA0MzEx", "organizations_url": "https://api.github.com/users/Salehbigdeli/orgs", "received_events_url": "https://api.github.com/users/Salehbigdeli/received_events", "repos_url": "https://api.github.com/users/Salehbigdeli/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Salehbigdeli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Salehbigdeli/subscriptions", "type": "User", "url": "https://api.github.com/users/Salehbigdeli" }
[]
closed
false
null
[]
null
[ "Adding \"function\" to the kwargs to ignore when computing the fingerprint will break `map` caching. Indeed passing two different function would result in two different datasets that have the same fingerprint - and the cache wouldn't be able to distinguish them.\r\n\r\nE.g this code would reload ds1 from the cache insetad of computing the dataset for ds2\r\n```python\r\nds = Dataset.from_dict({\"a\": [1, 2, 3]})\r\nds1 = ds.map(lambda x: {\"b\": 1})\r\nds2 = ds.map(lambda x: {\"b\": 2})\r\n```" ]
"2022-10-17T00:03:43Z"
"2022-10-17T12:39:36Z"
"2022-10-17T12:39:36Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5121.diff", "html_url": "https://github.com/huggingface/datasets/pull/5121", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5121.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5121" }
maybe fixes: #5109
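A hedged note on the caching point raised in the review comment above: instead of ignoring `function` globally, a caller who knows a mapping function is stable across runs can pin the cache key explicitly through `map`'s `new_fingerprint` argument. Minimal sketch, assuming the caller bumps the key whenever the function changes:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})

# The explicit fingerprint replaces hashing of the (possibly unpicklable) function.
ds1 = ds.map(lambda x: {"b": 1}, new_fingerprint="add-b-v1")
```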
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5121/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5121/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4534
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4534/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4534/comments
https://api.github.com/repos/huggingface/datasets/issues/4534/events
https://github.com/huggingface/datasets/pull/4534
1,277,897,197
PR_kwDODunzps46AFK_
4,534
Add `tldr_news` dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/32683010?v=4", "events_url": "https://api.github.com/users/JulesBelveze/events{/privacy}", "followers_url": "https://api.github.com/users/JulesBelveze/followers", "following_url": "https://api.github.com/users/JulesBelveze/following{/other_user}", "gists_url": "https://api.github.com/users/JulesBelveze/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JulesBelveze", "id": 32683010, "login": "JulesBelveze", "node_id": "MDQ6VXNlcjMyNjgzMDEw", "organizations_url": "https://api.github.com/users/JulesBelveze/orgs", "received_events_url": "https://api.github.com/users/JulesBelveze/received_events", "repos_url": "https://api.github.com/users/JulesBelveze/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JulesBelveze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JulesBelveze/subscriptions", "type": "User", "url": "https://api.github.com/users/JulesBelveze" }
[]
closed
false
null
[]
null
[ "Hey @lhoestq, \r\nSorry for opening a PR, I was following the guide [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)! Thanks for the review anyway, I will follow the instructions you sent 😃 ", "Thanks, we will update the guide ;)" ]
"2022-06-21T05:02:43Z"
"2022-06-23T14:33:54Z"
"2022-06-21T14:21:11Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4534.diff", "html_url": "https://github.com/huggingface/datasets/pull/4534", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4534.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4534" }
This PR aims to add support for a news dataset: `tldr news`. This dataset is based on the daily [tldr tech newsletter](https://tldr.tech/newsletter) and contains a `headline` as well as a `content` field for every piece of news contained in a newsletter.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4534/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4534/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5720/comments
https://api.github.com/repos/huggingface/datasets/issues/5720/events
https://github.com/huggingface/datasets/issues/5720
1,659,610,705
I_kwDODunzps5i66ZR
5,720
Streaming IterableDatasets do not work with torch DataLoaders
{ "avatar_url": "https://avatars.githubusercontent.com/u/29244648?v=4", "events_url": "https://api.github.com/users/jlehrer1/events{/privacy}", "followers_url": "https://api.github.com/users/jlehrer1/followers", "following_url": "https://api.github.com/users/jlehrer1/following{/other_user}", "gists_url": "https://api.github.com/users/jlehrer1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jlehrer1", "id": 29244648, "login": "jlehrer1", "node_id": "MDQ6VXNlcjI5MjQ0NjQ4", "organizations_url": "https://api.github.com/users/jlehrer1/orgs", "received_events_url": "https://api.github.com/users/jlehrer1/received_events", "repos_url": "https://api.github.com/users/jlehrer1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jlehrer1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlehrer1/subscriptions", "type": "User", "url": "https://api.github.com/users/jlehrer1" }
[]
open
false
null
[]
null
[ "Edit: This behavior is true even without `.take/.set`", "I'm experiencing the same problem that @jlehrer1. I was able to reproduce it with a very small example:\r\n\r\n```py\r\nfrom datasets import Dataset, load_dataset, load_dataset_builder\r\nfrom torch.utils.data import DataLoader\r\n\r\n\r\ndef my_gen():\r\n for i in range(1, 4):\r\n yield {\"a\": i}\r\n\r\n# Saving the dataset as a parquet file\r\ndataset = Dataset.from_generator(my_gen)\r\ntrain_path = \"/tmp/test.parquet\"\r\ndataset.to_parquet(train_path)\r\n\r\n# Creating a local dataset from the parquet file\r\ndata_files = {\"train\": [str(train_path)]}\r\nbuilder = load_dataset_builder(\"parquet\", data_files=data_files)\r\nbuilder.download_and_prepare(\"/tmp/test_ds\", file_format=\"parquet\")\r\n\r\n# Loading the dataset from the local directory as streaming\r\ndataset = load_dataset(\"parquet\", data_dir=\"/tmp/test_ds\", split=\"train\", streaming=True)\r\ndataset.with_format(\"torch\")\r\n\r\ndl = DataLoader(dataset, batch_size=2, num_workers=1)\r\nfor row in dl:\r\n print(row)\r\n```\r\n\r\nMy env info:\r\n```\r\ndatasets 2.11.0\r\ntorch 2.0.0\r\ntorchvision 0.15.1\r\nPython 3.9.16\r\n```\r\n\r\nNote that the example above doesn't fail if the number of workers used is `0`", "I cannot reproduce this error, not even with your MRE @ivanprado (your env appears to be the same as Colab's, and your code runs there without issues). ", "@mariosasko you are right, it works on Colab. I digged deeper and found that the problem arises when the multiprocessing method is set to be `spawn`. This code reproduces the problem in Colab:\r\n\r\n```py\r\nfrom datasets import Dataset, load_dataset, load_dataset_builder\r\nfrom torch.utils.data import DataLoader\r\nimport multiprocessing as mp\r\n\r\nmp.set_start_method('spawn')\r\n\r\ndef my_gen():\r\n for i in range(1, 4):\r\n yield {\"a\": i}\r\n\r\n\r\ndef main():\r\n # Saving the dataset as a parquet file\r\n dataset = Dataset.from_generator(my_gen)\r\n train_path = \"/tmp/test.parquet\"\r\n dataset.to_parquet(train_path)\r\n\r\n # Creating a local dataset from the parquet file\r\n data_files = {\"train\": [str(train_path)]}\r\n builder = load_dataset_builder(\"parquet\", data_files=data_files)\r\n builder.download_and_prepare(\"/tmp/test_ds\", file_format=\"parquet\")\r\n\r\n # Loading the dataset from the local directory as streaming\r\n dataset = load_dataset(\"parquet\", data_dir=\"/tmp/test_ds\", split=\"train\", streaming=True)\r\n dataset.with_format(\"torch\")\r\n\r\n dl = DataLoader(dataset, batch_size=2, num_workers=1)\r\n for row in dl:\r\n print(row)\r\n\r\nmain()\r\n```", "So is there a way to fix this by changing the `mp` method? This is blocking any usage of the `datasets` library for me", "@jlehrer1 can you try adding `mp.set_start_method('fork')` at the beginning of your code? Maybe this helps you. Keep us posted. ", "I have a similar issue: \r\n> mp.set_start_method('fork')\r\n\r\n\r\nDidnt work" ]
"2023-04-08T18:45:48Z"
"2023-05-27T12:57:08Z"
null
NONE
null
null
null
### Describe the bug When using streaming datasets set up with train/val split using `.skip()` and `.take()`, the following error occurs when iterating over a torch dataloader: ``` File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 363, in __iter__ self._iterator = self._get_iterator() File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 314, in _get_iterator return _MultiProcessingDataLoaderIter(self) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 927, in __init__ w.start() File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object '_generate_examples_from_tables_wrapper.<locals>.wrapper' ``` To reproduce, run the code ``` from datasets import load_dataset data = load_dataset(args.dataset_name, split="train", streaming=True) train_len = 5000 val_len = 100 train, val = data.take(train_len), data.skip(train_len).take(val_len) traindata = IterableClipDataset(data, context_length=args.max_len, tokenizer=tokenizer, image_key="url", text_key="text") traindata = DataLoader(traindata, batch_size=args.batch_size, num_workers=args.num_workers, persistent_workers=True) ``` Where the class IterableClipDataset is a simple wrapper to cast the dataset to a torch iterabledataset, defined via ``` from torch.utils.data import Dataset, IterableDataset from torchvision.transforms import Compose, Resize, ToTensor from transformers import AutoTokenizer import requests from PIL import Image class IterableClipDataset(IterableDataset): def __init__(self, dataset, context_length: int, image_transform=None, tokenizer=None, image_key="image", text_key="text"): self.dataset = dataset self.context_length = context_length self.image_transform = Compose([Resize((224, 224)), ToTensor()]) if image_transform is None else image_transform self.tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") if tokenizer is None else tokenizer self.image_key = image_key self.text_key = text_key def read_image(self, url: str): try: # Try to read the image image = Image.open(requests.get(url, stream=True).raw) except: image = Image.new("RGB", (224, 224), (0, 0, 0)) return image def process_sample(self, image, text): if isinstance(image, str): image = self.read_image(image) if self.image_transform is not None: image = self.image_transform(image) text = self.tokenizer.encode( text, add_special_tokens=True, max_length=self.context_length, truncation=True, padding="max_length" ) text = torch.tensor(text, dtype=torch.long) return image, text def __iter__(self): for sample in self.dataset: image, text = sample[self.image_key], sample[self.text_key] yield self.process_sample(image, text) ``` ### Steps to reproduce the bug Steps to reproduce 1. Install `datasets`, `torch`, and `PIL` (if you want to reproduce exactly) 2. Run the code above ### Expected behavior Batched data is produced from the dataloader ### Environment info ``` datasets == 2.9.0 python == 3.9.12 torch == 1.11.0 ```
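A hedged workaround sketch distilled from the comments above: the pickling failure is tied to multiprocessing's "spawn" start method, so forcing "fork" (POSIX only) or setting `num_workers=0` sidesteps it. The parquet file path reuses the one built in the reproduction above; whether either option fits a given training setup is an assumption:

```python
import multiprocessing as mp
from datasets import load_dataset
from torch.utils.data import DataLoader

if __name__ == "__main__":
    # "fork" is unavailable on Windows; num_workers=0 is the portable fallback.
    mp.set_start_method("fork", force=True)
    dataset = load_dataset("parquet", data_files="/tmp/test.parquet",
                           split="train", streaming=True).with_format("torch")
    for batch in DataLoader(dataset, batch_size=2, num_workers=1):
        print(batch)
```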
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5720/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5720/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3887/comments
https://api.github.com/repos/huggingface/datasets/issues/3887/events
https://github.com/huggingface/datasets/pull/3887
1,165,380,852
PR_kwDODunzps40PwqT
3,887
ImageFolder improvements
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3887). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-10T15:34:46Z"
"2022-03-11T15:06:11Z"
"2022-03-11T15:06:11Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3887.diff", "html_url": "https://github.com/huggingface/datasets/pull/3887", "merged_at": "2022-03-11T15:06:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/3887.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3887" }
This PR adds the following improvements to the `imagefolder` dataset: * skip the extract step for image files (as discussed in https://github.com/huggingface/datasets/pull/2830#discussion_r816817919) * option to drop labels by setting `drop_labels=True` (useful for image pretraining cc @NielsRogge). This is faster than loading a dataset and removing the `label` column because we don't need to iterate over the files to infer class labels.
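A hedged usage sketch of the `drop_labels` option this PR describes; the data directory is a placeholder:

```python
from datasets import load_dataset

# Label-free loading for image pretraining: no class names are inferred
# from the folder structure, which also skips iterating over the files.
ds = load_dataset("imagefolder", data_dir="path/to/images", drop_labels=True)
```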
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3887/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3887/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3168
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3168/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3168/comments
https://api.github.com/repos/huggingface/datasets/issues/3168/events
https://github.com/huggingface/datasets/issues/3168
1,036,673,263
I_kwDODunzps49ymDv
3,168
OpenSLR/83 is empty
{ "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tyrius02", "id": 4561309, "login": "tyrius02", "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "repos_url": "https://api.github.com/users/tyrius02/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "type": "User", "url": "https://api.github.com/users/tyrius02" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tyrius02", "id": 4561309, "login": "tyrius02", "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "repos_url": "https://api.github.com/users/tyrius02/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "type": "User", "url": "https://api.github.com/users/tyrius02" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tyrius02", "id": 4561309, "login": "tyrius02", "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "repos_url": "https://api.github.com/users/tyrius02/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "type": "User", "url": "https://api.github.com/users/tyrius02" } ]
null
[ "Hi @tyrius02, thanks for reporting. I see you self-assigned this issue: are you working on this?", "@albertvillanova Yes. Figured I introduced the broken config, I should fix it too.\r\n\r\nI've got it working, but I'm struggling with one of the tests. I've started a PR so I/we can work through it.", "Looks like the tests all passed on the PR." ]
"2021-10-26T19:42:21Z"
"2021-10-29T10:04:09Z"
"2021-10-29T10:04:09Z"
CONTRIBUTOR
null
null
null
## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``` ## Expected results ``` DatasetDict({ train: Dataset({ features: ['path', 'audio', 'sentence'], num_rows: 17877 }) }) ``` ## Actual results ``` DatasetDict({ train: Dataset({ features: ['path', 'audio', 'sentence'], num_rows: 0 }) }) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.1.dev0 (master HEAD) - Platform: Ubuntu 20.04 - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3168/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3168/timeline
null
completed
false