| Column | Type | Details |
|---|---|---|
| url | stringlengths | 61-61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75-75 |
| comments_url | stringlengths | 70-70 |
| events_url | stringlengths | 68-68 |
| html_url | stringlengths | 49-51 |
| id | int64 | 1.47B-1.9B |
| node_id | stringlengths | 18-19 |
| number | int64 | 5.31k-6.25k |
| title | stringlengths | 1-290 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | timestamp[s] | |
| updated_at | timestamp[s] | |
| closed_at | timestamp[s] | |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | stringlengths | 3-19.9k |
| reactions | dict | |
| timeline_url | stringlengths | 70-70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 3 values |
| is_pull_request | bool | 2 classes |
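The schema above maps directly onto the features `datasets` exposes after loading. A minimal sketch of working with these columns, assuming a hypothetical Hub repo id (`user/github-issues`) for this dump:

```python
# A minimal sketch, assuming a hypothetical repo id; the column names come
# from the schema table above.
from datasets import load_dataset

ds = load_dataset("user/github-issues", split="train")  # hypothetical repo id
print(ds.features["state"])  # string column with 2 classes ("open" / "closed")

# Select the open pull requests using the flattened boolean column.
open_prs = ds.filter(lambda row: row["is_pull_request"] and row["state"] == "open")
print(open_prs.num_rows)
```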
https://api.github.com/repos/huggingface/datasets/issues/6140
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6140/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6140/comments
https://api.github.com/repos/huggingface/datasets/issues/6140/events
https://github.com/huggingface/datasets/issues/6140
1,845,384,712
I_kwDODunzps5t_lYI
6,140
Misalignment between file format specified in configs metadata YAML and the inferred builder
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
2023-08-10T15:07:34
2023-08-17T20:37:20
2023-08-17T20:37:20
MEMBER
null
null
null
There is a misalignment between the format of the `data_files` specified in the configs metadata YAML (CSV):
```yaml
configs:
- config_name: default
  data_files:
  - split: train
    path: data.csv
```
and the inferred builder (JSON). Note there are multiple JSON files in the repo, but they do not appear in the configs metadata YAML.

See: https://huggingface.co/datasets/freddyaboulton/chatinterface_with_image_csv/discussions/1

CC: @freddyaboulton @polinaeterna
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6140/timeline
null
completed
false
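One way to surface the mismatch described above is to inspect which builder `datasets` actually selects for the repo. A minimal sketch using the public `load_dataset_builder` API; the expected builder class names are illustrative:

```python
# A minimal sketch: check which packaged builder datasets infers for the repo
# from the issue above. The YAML metadata points at data.csv, so one would
# expect a CSV builder; the report says a JSON builder is inferred instead.
from datasets import load_dataset_builder

builder = load_dataset_builder("freddyaboulton/chatinterface_with_image_csv")
print(type(builder).__name__)  # "Csv" expected from the YAML; "Json" per the report
```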
https://api.github.com/repos/huggingface/datasets/issues/6139
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6139/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6139/comments
https://api.github.com/repos/huggingface/datasets/issues/6139/events
https://github.com/huggingface/datasets/issues/6139
1,844,991,583
I_kwDODunzps5t-FZf
6,139
Offline dataset viewer
{ "login": "yuvalkirstain", "id": 57996478, "node_id": "MDQ6VXNlcjU3OTk2NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuvalkirstain", "html_url": "https://github.com/yuvalkirstain", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi, thanks for the suggestion. It's not possible at the moment. The viewer is part of the Hub codebase and only works on public datasets. Also, it relies on [Datasets Server](https://github.com/huggingface/datasets-server/), which prepares the data and provides an API to access the rows, size, etc.\r\n\r\nIf you're interested in hosting your data as a private dataset on the Hub, you might want to look at https://github.com/huggingface/datasets-server/issues/39.", "Hi, we are building an offline dataset viewer: https://github.com/Renumics/spotlight\r\nIt supports many HF datasets, but currently you have to use it via Pandas:\r\ndf=ds.to_pandas()\r\nspotlight.show(df)\r\n\r\nWould love to hear from you if that works for your use case. If not, feel free to open an issue on the repo: https://github.com/Renumics/spotlight/issues", "@ssuwelack thank you! I will definitely try it out." ]
2023-08-10T11:30:00
2023-08-26T19:30:40
null
NONE
null
null
null
### Feature request

The dataset viewer feature is very nice. It enables the user to easily view the dataset. However, when working for private companies we cannot always upload the dataset to the hub. Is there a way to create a dataset viewer offline? I.e., to run code that opens some kind of HTML page or similar that makes it easy to view the dataset.

### Motivation

I want to easily view my dataset even when it is hosted locally.

### Your contribution

N.A.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6139/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6139/timeline
null
null
false
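The workaround suggested in the comments (Renumics Spotlight) goes through pandas. A minimal sketch of that flow, assuming `renumics-spotlight` is installed; the dataset name is a placeholder:

```python
# A minimal sketch of the offline-viewing workaround from the comments:
# convert the Dataset to pandas, then open Renumics Spotlight locally.
from datasets import load_dataset
from renumics import spotlight  # pip install renumics-spotlight

ds = load_dataset("imdb", split="train")  # placeholder; any local dataset works
df = ds.to_pandas()
spotlight.show(df)  # opens a browser-based viewer served locally
```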
https://api.github.com/repos/huggingface/datasets/issues/6138
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6138/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6138/comments
https://api.github.com/repos/huggingface/datasets/issues/6138/events
https://github.com/huggingface/datasets/pull/6138
1,844,952,496
PR_kwDODunzps5XoH2V
6,138
Ignore CI lint rule violation in Pickler.memoize
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006536 / 0.011353 (-0.004817) | 0.003890 / 0.011008 (-0.007118) | 0.084044 / 0.038508 (0.045536) | 0.071893 / 0.023109 (0.048784) | 0.346926 / 0.275898 (0.071028) | 0.397487 / 0.323480 (0.074007) | 0.004065 / 0.007986 (-0.003921) | 0.003218 / 0.004328 (-0.001111) | 0.064670 / 0.004250 (0.060420) | 0.052414 / 0.037052 (0.015362) | 0.355413 / 0.258489 (0.096924) | 0.398894 / 0.293841 (0.105053) | 0.030763 / 0.128546 (-0.097783) | 0.008590 / 0.075646 (-0.067056) | 0.286857 / 0.419271 (-0.132415) | 0.051126 / 0.043533 (0.007593) | 0.346125 / 0.255139 (0.090986) | 0.395673 / 0.283200 (0.112474) | 0.025766 / 0.141683 (-0.115917) | 1.466238 / 1.452155 (0.014084) | 1.543117 / 1.492716 (0.050400) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213210 / 0.018006 (0.195204) | 0.451981 / 0.000490 (0.451491) | 0.003784 / 0.000200 (0.003585) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027756 / 0.037411 (-0.009655) | 0.082446 / 0.014526 (0.067920) | 0.095414 / 0.176557 (-0.081142) | 0.151812 / 0.737135 (-0.585323) | 0.096296 / 0.296338 (-0.200042) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383729 / 0.215209 (0.168520) | 3.835126 / 2.077655 (1.757471) | 1.891972 / 1.504120 (0.387852) | 1.719934 / 1.541195 (0.178739) | 1.899980 / 1.468490 
(0.431490) | 0.488741 / 4.584777 (-4.096036) | 3.634120 / 3.745712 (-0.111592) | 3.243314 / 5.269862 (-2.026547) | 2.028382 / 4.565676 (-2.537294) | 0.057355 / 0.424275 (-0.366920) | 0.007717 / 0.007607 (0.000110) | 0.459835 / 0.226044 (0.233790) | 4.591793 / 2.268929 (2.322864) | 2.346861 / 55.444624 (-53.097764) | 2.067357 / 6.876477 (-4.809120) | 2.254954 / 2.142072 (0.112882) | 0.587016 / 4.805227 (-4.218211) | 0.133918 / 6.500664 (-6.366746) | 0.060311 / 0.075469 (-0.015158) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250016 / 1.841788 (-0.591772) | 19.674333 / 8.074308 (11.600025) | 14.522764 / 10.191392 (4.331372) | 0.145741 / 0.680424 (-0.534683) | 0.018593 / 0.534201 (-0.515608) | 0.392833 / 0.579283 (-0.186450) | 0.408194 / 0.434364 (-0.026170) | 0.455164 / 0.540337 (-0.085174) | 0.622722 / 1.386936 (-0.764214) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006583 / 0.011353 (-0.004770) | 0.004008 / 0.011008 (-0.007000) | 0.064688 / 0.038508 (0.026180) | 0.074969 / 0.023109 (0.051860) | 0.360504 / 0.275898 (0.084606) | 0.396926 / 0.323480 (0.073446) | 0.005190 / 0.007986 (-0.002796) | 0.003363 / 0.004328 (-0.000966) | 0.064372 / 0.004250 (0.060122) | 0.054428 / 0.037052 (0.017376) | 0.361204 / 0.258489 (0.102715) | 0.400917 / 0.293841 (0.107077) | 0.031117 / 0.128546 (-0.097429) | 0.008406 / 0.075646 (-0.067241) | 0.069655 / 0.419271 (-0.349617) | 0.048582 / 0.043533 (0.005049) | 0.365396 / 0.255139 (0.110257) | 0.381344 / 0.283200 (0.098145) | 0.023809 / 0.141683 (-0.117874) | 1.472926 / 1.452155 (0.020772) | 1.547298 / 1.492716 (0.054582) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.276912 / 0.018006 (0.258906) | 0.449096 / 0.000490 (0.448607) | 0.018921 / 0.000200 (0.018721) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030237 / 0.037411 (-0.007174) | 0.088610 / 0.014526 (0.074084) | 0.101529 / 0.176557 (-0.075027) | 0.154070 / 0.737135 (-0.583065) | 0.103471 / 0.296338 (-0.192867) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416047 / 0.215209 (0.200838) | 4.152374 / 2.077655 (2.074719) | 2.111181 / 1.504120 (0.607061) | 1.943582 / 1.541195 (0.402387) | 2.031729 / 1.468490 (0.563239) | 0.486740 / 4.584777 (-4.098037) | 3.631547 / 3.745712 (-0.114165) | 3.251202 / 5.269862 (-2.018660) | 2.041272 / 4.565676 (-2.524405) | 0.057287 / 0.424275 (-0.366988) | 0.007303 / 0.007607 (-0.000304) | 0.491027 / 0.226044 (0.264982) | 4.906757 / 2.268929 (2.637829) | 2.581694 / 55.444624 (-52.862931) | 2.250996 / 6.876477 (-4.625481) | 2.441771 / 2.142072 (0.299698) | 0.600714 / 4.805227 (-4.204514) | 0.133233 / 6.500664 (-6.367431) | 0.060856 / 0.075469 (-0.014613) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340062 / 1.841788 (-0.501725) | 19.973899 / 8.074308 (11.899591) | 14.347381 / 10.191392 (4.155989) | 0.166651 / 0.680424 (-0.513773) | 0.018691 / 0.534201 (-0.515510) | 0.393580 / 0.579283 (-0.185703) | 0.409425 / 0.434364 (-0.024939) | 0.474409 / 0.540337 (-0.065929) | 0.649423 / 1.386936 (-0.737514) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c5da68102297c3639207a7901952d2765a4cdb8b \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006593 / 0.011353 (-0.004760) | 0.004123 / 0.011008 (-0.006885) | 0.084424 / 0.038508 (0.045916) | 0.076867 / 0.023109 (0.053758) | 0.309149 / 0.275898 (0.033251) | 0.348572 / 0.323480 (0.025092) | 0.005463 / 0.007986 (-0.002523) | 0.003440 / 0.004328 (-0.000889) | 0.064604 / 0.004250 (0.060353) | 0.053920 / 0.037052 (0.016868) | 0.345221 / 0.258489 (0.086732) | 0.363209 / 0.293841 (0.069368) | 0.031209 / 0.128546 (-0.097337) | 0.008690 / 0.075646 (-0.066956) | 0.288851 / 0.419271 (-0.130421) | 0.052239 / 0.043533 (0.008707) | 0.308643 / 0.255139 (0.053504) | 0.346407 / 0.283200 (0.063207) | 0.023935 / 0.141683 (-0.117748) | 1.469207 / 1.452155 (0.017052) | 1.532855 / 1.492716 (0.040138) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290885 / 0.018006 (0.272879) | 0.580561 / 0.000490 (0.580071) | 0.004698 / 0.000200 (0.004498) | 0.000286 / 0.000054 (0.000231) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028015 / 0.037411 (-0.009396) | 0.081172 / 0.014526 (0.066646) | 0.096822 / 0.176557 (-0.079735) | 0.151355 / 0.737135 (-0.585781) | 0.098017 / 0.296338 (-0.198321) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384069 / 0.215209 (0.168859) | 3.828635 / 2.077655 (1.750980) | 1.829311 / 1.504120 (0.325192) | 1.672520 / 1.541195 (0.131325) | 1.743944 / 1.468490 (0.275453) | 0.481594 / 4.584777 (-4.103183) | 3.556204 / 3.745712 (-0.189509) | 3.279499 / 5.269862 (-1.990363) | 2.033243 / 4.565676 (-2.532434) | 0.056525 / 0.424275 (-0.367750) | 0.007717 / 0.007607 (0.000109) | 0.466815 / 0.226044 (0.240771) | 4.657022 / 2.268929 (2.388094) | 2.438600 / 55.444624 (-53.006024) | 2.097999 / 6.876477 (-4.778478) | 2.263122 / 2.142072 (0.121049) | 0.636001 / 4.805227 (-4.169226) | 0.147727 / 6.500664 (-6.352937) | 0.059293 / 0.075469 (-0.016176) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243111 / 1.841788 (-0.598677) | 19.558379 / 8.074308 (11.484071) | 14.141017 / 10.191392 (3.949625) | 0.169840 / 0.680424 (-0.510583) | 0.017912 / 0.534201 (-0.516289) | 0.391325 / 0.579283 (-0.187958) | 0.417169 / 0.434364 (-0.017195) | 0.457129 / 0.540337 (-0.083209) | 0.629907 / 
1.386936 (-0.757029) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006687 / 0.011353 (-0.004666) | 0.004165 / 0.011008 (-0.006844) | 0.064738 / 0.038508 (0.026230) | 0.077286 / 0.023109 (0.054177) | 0.364236 / 0.275898 (0.088338) | 0.393228 / 0.323480 (0.069748) | 0.005451 / 0.007986 (-0.002535) | 0.003547 / 0.004328 (-0.000781) | 0.065761 / 0.004250 (0.061510) | 0.056526 / 0.037052 (0.019474) | 0.365523 / 0.258489 (0.107034) | 0.403331 / 0.293841 (0.109490) | 0.030900 / 0.128546 (-0.097646) | 0.008757 / 0.075646 (-0.066889) | 0.070961 / 0.419271 (-0.348311) | 0.048394 / 0.043533 (0.004861) | 0.365908 / 0.255139 (0.110769) | 0.381197 / 0.283200 (0.097998) | 0.022940 / 0.141683 (-0.118743) | 1.487909 / 1.452155 (0.035754) | 1.532931 / 1.492716 (0.040215) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.317506 / 0.018006 (0.299500) | 0.513391 / 0.000490 (0.512902) | 0.005464 / 0.000200 (0.005264) | 0.000214 / 0.000054 (0.000159) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032289 / 0.037411 (-0.005122) | 0.090157 / 0.014526 (0.075631) | 0.103514 / 0.176557 (-0.073043) | 0.158236 / 0.737135 (-0.578899) | 0.106554 / 0.296338 (-0.189784) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406455 / 0.215209 (0.191246) | 4.061563 / 2.077655 (1.983908) | 2.082201 / 1.504120 (0.578081) | 1.914433 / 1.541195 (0.373238) | 2.039342 / 1.468490 (0.570852) | 
0.478444 / 4.584777 (-4.106333) | 3.599755 / 3.745712 (-0.145957) | 3.294453 / 5.269862 (-1.975409) | 2.028519 / 4.565676 (-2.537158) | 0.056118 / 0.424275 (-0.368157) | 0.007325 / 0.007607 (-0.000282) | 0.493177 / 0.226044 (0.267132) | 4.926218 / 2.268929 (2.657289) | 2.605033 / 55.444624 (-52.839591) | 2.239933 / 6.876477 (-4.636544) | 2.454210 / 2.142072 (0.312137) | 0.571905 / 4.805227 (-4.233322) | 0.133251 / 6.500664 (-6.367413) | 0.062422 / 0.075469 (-0.013047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352752 / 1.841788 (-0.489036) | 20.265109 / 8.074308 (12.190801) | 14.293064 / 10.191392 (4.101672) | 0.169267 / 0.680424 (-0.511157) | 0.018607 / 0.534201 (-0.515594) | 0.393655 / 0.579283 (-0.185628) | 0.402132 / 0.434364 (-0.032232) | 0.477566 / 0.540337 (-0.062772) | 0.651773 / 1.386936 (-0.735163) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#80023f36b2b6678347979421ef973d8969d31306 \"CML watermark\")\n" ]
2023-08-10T11:03:15
2023-08-10T11:31:45
2023-08-10T11:22:56
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6138", "html_url": "https://github.com/huggingface/datasets/pull/6138", "diff_url": "https://github.com/huggingface/datasets/pull/6138.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6138.patch", "merged_at": "2023-08-10T11:22:56" }
This PR ignores the violation of the lint rule E721 in `Pickler.memoize`.

The lint rule violation was introduced in this PR:
- #3182

@lhoestq is there a reason you did not use `isinstance` instead?

As a hotfix, we just ignore the violation of the lint rule.

Fix #6136.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6138/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6138/timeline
null
null
true
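For context, a minimal illustration of ruff's E721 rule and the `noqa` hotfix the PR applies; this is not the actual `Pickler.memoize` code:

```python
# Not the actual Pickler.memoize code; just an illustration of ruff's E721.
x = [1, 2, 3]

if type(x) == list:  # flagged: E721 Do not compare types, use `isinstance()`
    ...

if isinstance(x, list):  # the suggested fix, but it also matches subclasses of list
    ...

# The hotfix keeps the exact-type comparison and silences the rule instead:
if type(x) == list:  # noqa: E721
    ...
```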
https://api.github.com/repos/huggingface/datasets/issues/6137
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6137/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6137/comments
https://api.github.com/repos/huggingface/datasets/issues/6137/events
https://github.com/huggingface/datasets/issues/6137
1,844,952,312
I_kwDODunzps5t97z4
6,137
(`from_spark()`) Unable to connect HDFS in pyspark YARN setting
{ "login": "kyoungrok0517", "id": 1051900, "node_id": "MDQ6VXNlcjEwNTE5MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/1051900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kyoungrok0517", "html_url": "https://github.com/kyoungrok0517", "followers_url": "https://api.github.com/users/kyoungrok0517/followers", "following_url": "https://api.github.com/users/kyoungrok0517/following{/other_user}", "gists_url": "https://api.github.com/users/kyoungrok0517/gists{/gist_id}", "starred_url": "https://api.github.com/users/kyoungrok0517/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kyoungrok0517/subscriptions", "organizations_url": "https://api.github.com/users/kyoungrok0517/orgs", "repos_url": "https://api.github.com/users/kyoungrok0517/repos", "events_url": "https://api.github.com/users/kyoungrok0517/events{/privacy}", "received_events_url": "https://api.github.com/users/kyoungrok0517/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-10T11:03:08
2023-08-10T11:03:08
null
NONE
null
null
null
### Describe the bug

Related issue: https://github.com/apache/arrow/issues/37057#issue-1841013613

---

Hello. I'm trying to interact with HDFS storage from the driver and workers of a pyspark YARN cluster. Specifically, I'm using **huggingface's `datasets`** ([link](https://github.com/huggingface/datasets)) library, which relies on pyarrow to communicate with HDFS. `from_spark()` ([link](https://huggingface.co/docs/datasets/use_with_spark#load-from-spark)) is what I'm invoking in my script. Below is the error I'm encountering; note that I've masked sensitive paths. My code is sent to worker containers (docker) from the driver container and then executed. I confirmed that in both the driver and worker images I can connect to HDFS using pyarrow, since the envs and required jars are properly set, but strangely that becomes impossible when the same image runs as a remote worker process.

These are some peculiarities of my environment that might have caused this issue:

* **The cluster requires kerberos authentication**
  * But I think the error message implies that's not the problem in this case
* **The user that runs the worker process is different from the one that built the docker image**
  * To avoid permission-related issues I made all directories that are accessed from the script accessible to everyone
* **The pyspark part of my code has no problem interacting with HDFS**
  * Even pyarrow doesn't have a problem when I run the code in an interactive session of the same docker images (driver, worker)
  * The problem occurs only when it runs as the cluster's worker runtime

Hope I can get some help. Thanks.

```bash
2023-08-08 18:51:19,638 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-08-08 18:51:20,280 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
23/08/08 18:51:22 WARN TaskSetManager: Lost task 0.0 in stage 142.0 (TID 9732) (ac3bax2062.bdp.bdata.ai executor 1): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000003/pyspark.zip/pyspark/worker.py", line 830, in main process() File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000003/pyspark.zip/pyspark/worker.py", line 820, in process out_iter = func(split_index, iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func File "/root/spark/python/pyspark/rdd.py", line 828, in func File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe open(probe_file, "a") File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper return function(*args, download_config=download_config, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open out = open_files( ^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files fs, fs_token, paths = get_fs_token_paths( ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem return cls(**storage_options) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__ fs = HadoopFileSystem( ^^^^^^^^^^^^^^^^^ File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__ File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: HDFS connection failed at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) at scala.collection.TraversableOnce.to(TraversableOnce.scala:366) at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364) at 
org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358) at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345) at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019) at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) at org.apache.spark.scheduler.Task.run(Task.scala:139) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 23/08/08 18:51:24 WARN TaskSetManager: Lost task 0.1 in stage 142.0 (TID 9733) (ac3iax2079.bdp.bdata.ai executor 2): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000005/pyspark.zip/pyspark/worker.py", line 830, in main process() File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000005/pyspark.zip/pyspark/worker.py", line 820, in process out_iter = func(split_index, iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func File "/root/spark/python/pyspark/rdd.py", line 828, in func File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe open(probe_file, "a") File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper return function(*args, download_config=download_config, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open out = open_files( ^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files fs, fs_token, paths = get_fs_token_paths( ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem return cls(**storage_options) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__ fs = HadoopFileSystem( ^^^^^^^^^^^^^^^^^ File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__ File 
"pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: HDFS connection failed at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) at scala.collection.TraversableOnce.to(TraversableOnce.scala:366) at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364) at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358) at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345) at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019) at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) at org.apache.spark.scheduler.Task.run(Task.scala:139) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 23/08/08 18:51:38 WARN TaskSetManager: Lost task 0.2 in stage 142.0 (TID 9734) (<MASKED> executor 4): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000008/pyspark.zip/pyspark/worker.py", line 830, in main process() File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000008/pyspark.zip/pyspark/worker.py", line 820, in process out_iter = func(split_index, iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func File "/root/spark/python/pyspark/rdd.py", line 828, in func File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe open(probe_file, "a") File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper return function(*args, download_config=download_config, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open out = open_files( ^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files fs, fs_token, paths = get_fs_token_paths( ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem return cls(**storage_options) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__ fs = HadoopFileSystem( ^^^^^^^^^^^^^^^^^ File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__ File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: HDFS connection failed at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) at scala.collection.TraversableOnce.to(TraversableOnce.scala:366) at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364) at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358) at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345) at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019) at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) at org.apache.spark.scheduler.Task.run(Task.scala:139) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529) at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)
```

### Steps to reproduce the bug

Use the `from_spark()` function in a pyspark YARN setting. I set `cache_dir` to an HDFS path.

### Expected behavior

Works as described in the documentation.

### Environment info

- `datasets` version: 2.14.4
- Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.17
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6137/timeline
null
null
false
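A minimal sketch of the failing setup described above, with placeholder HDFS paths; `Dataset.from_spark` and its `cache_dir` parameter are documented in the page the reporter links:

```python
# A minimal sketch of the reported setup; HDFS paths are placeholders.
from datasets import Dataset
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark_df = spark.read.parquet("hdfs:///path/to/input")  # placeholder path

# from_spark() has Spark workers write the Arrow cache, so every YARN worker
# container must be able to open cache_dir through fsspec/pyarrow -- that is
# the step that raises "OSError: HDFS connection failed" in the log above.
ds = Dataset.from_spark(spark_df, cache_dir="hdfs:///path/to/cache")
```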
https://api.github.com/repos/huggingface/datasets/issues/6136
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6136/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6136/comments
https://api.github.com/repos/huggingface/datasets/issues/6136/events
https://github.com/huggingface/datasets/issues/6136
1,844,887,866
I_kwDODunzps5t9sE6
6,136
CI check_code_quality error: E721 Do not compare types, use `isinstance()`
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2023-08-10T10:19:50
2023-08-10T11:22:58
2023-08-10T11:22:58
MEMBER
null
null
null
After the latest release of `ruff` (https://pypi.org/project/ruff/0.0.284/), we get the following CI error:
```
src/datasets/utils/py_utils.py:689:12: E721 Do not compare types, use `isinstance()`
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6136/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6135/comments
https://api.github.com/repos/huggingface/datasets/issues/6135/events
https://github.com/huggingface/datasets/pull/6135
1,844,870,943
PR_kwDODunzps5Xn2AT
6,135
Remove unused allowed_extensions param
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009055 / 0.011353 (-0.002298) | 0.008835 / 0.011008 (-0.002173) | 0.117048 / 0.038508 (0.078540) | 0.096268 / 0.023109 (0.073159) | 0.474678 / 0.275898 (0.198780) | 0.550509 / 0.323480 (0.227029) | 0.005552 / 0.007986 (-0.002434) | 0.004315 / 0.004328 (-0.000013) | 0.094336 / 0.004250 (0.090086) | 0.061945 / 0.037052 (0.024892) | 0.461422 / 0.258489 (0.202933) | 0.521271 / 0.293841 (0.227430) | 0.049116 / 0.128546 (-0.079430) | 0.015007 / 0.075646 (-0.060639) | 0.414351 / 0.419271 (-0.004920) | 0.137520 / 0.043533 (0.093987) | 0.465627 / 0.255139 (0.210488) | 0.537244 / 0.283200 (0.254044) | 0.068577 / 0.141683 (-0.073106) | 1.921373 / 1.452155 (0.469219) | 2.506653 / 1.492716 (1.013937) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273970 / 0.018006 (0.255963) | 0.750295 / 0.000490 (0.749805) | 0.004241 / 0.000200 (0.004041) | 0.000128 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033793 / 0.037411 (-0.003618) | 0.105562 / 0.014526 (0.091037) | 0.131771 / 0.176557 (-0.044786) | 0.196890 / 0.737135 (-0.540245) | 0.119842 / 0.296338 (-0.176496) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.634881 / 0.215209 (0.419672) | 6.069221 / 2.077655 (3.991566) | 
2.678765 / 1.504120 (1.174646) | 2.460309 / 1.541195 (0.919114) | 2.517579 / 1.468490 (1.049089) | 0.869558 / 4.584777 (-3.715219) | 5.407686 / 3.745712 (1.661974) | 4.920687 / 5.269862 (-0.349175) | 3.130066 / 4.565676 (-1.435611) | 0.100337 / 0.424275 (-0.323938) | 0.009615 / 0.007607 (0.002008) | 0.745275 / 0.226044 (0.519231) | 7.577890 / 2.268929 (5.308962) | 3.607887 / 55.444624 (-51.836738) | 2.922211 / 6.876477 (-3.954266) | 3.205592 / 2.142072 (1.063519) | 1.052298 / 4.805227 (-3.752929) | 0.218798 / 6.500664 (-6.281866) | 0.082137 / 0.075469 (0.006667) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.696551 / 1.841788 (-0.145237) | 24.946074 / 8.074308 (16.871766) | 23.114202 / 10.191392 (12.922810) | 0.220498 / 0.680424 (-0.459925) | 0.029388 / 0.534201 (-0.504813) | 0.494721 / 0.579283 (-0.084562) | 0.603085 / 0.434364 (0.168722) | 0.573093 / 0.540337 (0.032756) | 0.784937 / 1.386936 (-0.601999) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009642 / 0.011353 (-0.001711) | 0.007551 / 0.011008 (-0.003457) | 0.085224 / 0.038508 (0.046716) | 0.099493 / 0.023109 (0.076384) | 0.503824 / 0.275898 (0.227926) | 0.546583 / 0.323480 (0.223103) | 0.006385 / 0.007986 (-0.001601) | 0.004751 / 0.004328 (0.000423) | 0.084699 / 0.004250 (0.080449) | 0.067875 / 0.037052 (0.030823) | 0.485313 / 0.258489 (0.226824) | 0.535808 / 0.293841 (0.241967) | 0.049935 / 0.128546 (-0.078611) | 0.014427 / 0.075646 (-0.061219) | 0.095531 / 0.419271 (-0.323741) | 0.068487 / 0.043533 (0.024954) | 0.502204 / 0.255139 (0.247065) | 0.514393 / 0.283200 (0.231193) | 0.037350 / 0.141683 (-0.104333) | 1.849380 / 1.452155 (0.397226) | 1.920151 / 1.492716 (0.427434) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298363 / 0.018006 (0.280357) | 0.651555 / 0.000490 (0.651065) | 0.005910 / 0.000200 (0.005710) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039170 / 0.037411 (0.001758) | 0.106436 / 0.014526 (0.091910) | 0.129880 / 0.176557 (-0.046677) | 0.185401 / 0.737135 (-0.551734) | 0.125732 / 0.296338 (-0.170607) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.643248 / 0.215209 (0.428039) | 6.374807 / 2.077655 (4.297152) | 3.057296 / 1.504120 (1.553176) | 2.779534 / 1.541195 (1.238340) | 2.790165 / 1.468490 (1.321675) | 0.841580 / 4.584777 (-3.743197) | 5.371478 / 3.745712 (1.625766) | 4.973251 / 5.269862 (-0.296610) | 3.235817 / 4.565676 (-1.329860) | 0.097276 / 0.424275 (-0.326999) | 0.008840 / 0.007607 (0.001233) | 0.728678 / 0.226044 (0.502634) | 7.526382 / 2.268929 (5.257454) | 3.792550 / 55.444624 (-51.652074) | 3.439134 / 6.876477 (-3.437342) | 3.466626 / 2.142072 (1.324553) | 1.035894 / 4.805227 (-3.769333) | 0.211670 / 6.500664 (-6.288994) | 0.087596 / 0.075469 (0.012127) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.782755 / 1.841788 (-0.059033) | 25.704407 / 8.074308 (17.630099) | 23.799672 / 10.191392 (13.608280) | 0.233952 / 0.680424 (-0.446472) | 0.030810 / 0.534201 (-0.503391) | 0.505857 / 0.579283 (-0.073426) | 0.629331 / 0.434364 (0.194967) | 0.608530 / 0.540337 (0.068192) | 0.813688 / 1.386936 (-0.573248) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ed4d6bb5f1331576c41b04acd9872a5349a0915c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006401 / 0.011353 (-0.004952) | 0.003916 / 0.011008 (-0.007092) | 0.083976 / 0.038508 (0.045468) | 0.072583 / 0.023109 (0.049474) | 0.322747 / 0.275898 (0.046849) | 0.345159 / 0.323480 (0.021679) | 0.005366 / 0.007986 (-0.002620) | 0.003399 / 0.004328 (-0.000930) | 0.064232 / 0.004250 (0.059982) | 0.053313 / 0.037052 (0.016261) | 0.353127 / 0.258489 (0.094638) | 0.361398 / 0.293841 (0.067557) | 0.030604 / 0.128546 (-0.097942) | 0.008615 / 0.075646 (-0.067031) | 0.285806 / 0.419271 (-0.133466) | 0.050887 / 0.043533 (0.007354) | 0.312293 / 0.255139 (0.057154) | 0.349716 / 0.283200 (0.066516) | 0.024546 / 0.141683 (-0.117137) | 1.472318 / 1.452155 (0.020163) | 1.536063 / 1.492716 (0.043347) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280012 / 0.018006 (0.262006) | 0.593574 / 0.000490 (0.593085) | 0.004083 / 0.000200 (0.003883) | 0.000195 / 0.000054 (0.000141) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027715 / 0.037411 (-0.009696) | 0.081392 / 0.014526 (0.066866) | 0.096445 / 0.176557 (-0.080112) | 0.152131 / 0.737135 (-0.585004) | 0.094825 / 0.296338 (-0.201514) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.380749 / 0.215209 (0.165540) | 3.806994 / 2.077655 (1.729339) | 1.842544 / 1.504120 (0.338424) | 1.682829 / 1.541195 (0.141635) | 1.701679 / 1.468490 (0.233189) | 0.484830 / 4.584777 (-4.099947) | 3.517359 / 3.745712 (-0.228353) | 3.231211 / 5.269862 (-2.038651) | 2.029371 / 4.565676 (-2.536306) | 0.057199 / 0.424275 (-0.367077) | 0.007653 / 0.007607 (0.000046) | 0.458572 / 0.226044 (0.232528) | 4.579835 / 2.268929 (2.310907) | 2.326467 / 55.444624 (-53.118157) | 1.939646 / 6.876477 (-4.936831) | 2.133150 / 2.142072 (-0.008922) | 0.596251 / 4.805227 (-4.208976) | 0.131979 / 6.500664 (-6.368686) | 0.059226 / 0.075469 (-0.016243) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.234833 / 1.841788 (-0.606955) | 19.475522 / 8.074308 (11.401214) | 14.102760 / 10.191392 (3.911368) | 0.159657 / 0.680424 (-0.520767) | 0.018292 / 0.534201 (-0.515909) | 0.391079 / 0.579283 (-0.188204) | 0.406736 / 0.434364 (-0.027628) | 0.459159 / 0.540337 
(-0.081178) | 0.618159 / 1.386936 (-0.768777) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006592 / 0.011353 (-0.004761) | 0.004052 / 0.011008 (-0.006957) | 0.064536 / 0.038508 (0.026028) | 0.075051 / 0.023109 (0.051942) | 0.379596 / 0.275898 (0.103698) | 0.412413 / 0.323480 (0.088933) | 0.005377 / 0.007986 (-0.002608) | 0.003466 / 0.004328 (-0.000863) | 0.064958 / 0.004250 (0.060708) | 0.055265 / 0.037052 (0.018213) | 0.391505 / 0.258489 (0.133016) | 0.425345 / 0.293841 (0.131504) | 0.030750 / 0.128546 (-0.097796) | 0.008652 / 0.075646 (-0.066994) | 0.072107 / 0.419271 (-0.347165) | 0.048340 / 0.043533 (0.004807) | 0.387714 / 0.255139 (0.132575) | 0.402602 / 0.283200 (0.119402) | 0.023492 / 0.141683 (-0.118191) | 1.528377 / 1.452155 (0.076222) | 1.574827 / 1.492716 (0.082110) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.316999 / 0.018006 (0.298993) | 0.528391 / 0.000490 (0.527901) | 0.005183 / 0.000200 (0.004983) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029670 / 0.037411 (-0.007741) | 0.087130 / 0.014526 (0.072604) | 0.099897 / 0.176557 (-0.076660) | 0.154074 / 0.737135 (-0.583062) | 0.104309 / 0.296338 (-0.192030) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408804 / 0.215209 (0.193595) | 4.072248 / 2.077655 (1.994593) | 2.103333 / 1.504120 (0.599213) | 1.931972 / 1.541195 (0.390777) | 1.980132 
/ 1.468490 (0.511642) | 0.482623 / 4.584777 (-4.102154) | 3.532789 / 3.745712 (-0.212923) | 3.304962 / 5.269862 (-1.964899) | 2.036672 / 4.565676 (-2.529004) | 0.056944 / 0.424275 (-0.367331) | 0.007190 / 0.007607 (-0.000417) | 0.490650 / 0.226044 (0.264606) | 4.903604 / 2.268929 (2.634675) | 2.586247 / 55.444624 (-52.858377) | 2.227631 / 6.876477 (-4.648846) | 2.397286 / 2.142072 (0.255214) | 0.579167 / 4.805227 (-4.226060) | 0.132037 / 6.500664 (-6.368627) | 0.059971 / 0.075469 (-0.015498) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336430 / 1.841788 (-0.505358) | 19.915846 / 8.074308 (11.841538) | 14.102781 / 10.191392 (3.911389) | 0.147956 / 0.680424 (-0.532468) | 0.018192 / 0.534201 (-0.516009) | 0.397949 / 0.579283 (-0.181334) | 0.408529 / 0.434364 (-0.025835) | 0.479382 / 0.540337 (-0.060955) | 0.659735 / 1.386936 (-0.727201) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#98074122449bc031f7269f298f1c55f20e39b975 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005880 / 0.011353 (-0.005473) | 0.003677 / 0.011008 (-0.007332) | 0.080022 / 0.038508 (0.041514) | 0.055554 / 0.023109 (0.032445) | 0.397449 / 0.275898 (0.121551) | 0.428346 / 0.323480 (0.104867) | 0.004613 / 0.007986 (-0.003373) | 0.002873 / 0.004328 (-0.001455) | 0.062226 / 0.004250 (0.057976) | 0.044721 / 0.037052 (0.007669) | 0.404792 / 0.258489 (0.146303) | 0.437467 / 0.293841 (0.143626) | 0.027166 / 0.128546 (-0.101381) | 0.008077 / 0.075646 (-0.067569) | 0.260469 / 0.419271 (-0.158803) | 0.043551 / 0.043533 (0.000018) | 0.401712 / 0.255139 (0.146573) | 0.427294 / 0.283200 (0.144094) | 0.021243 / 0.141683 (-0.120440) | 1.464553 / 1.452155 (0.012398) | 1.507112 / 1.492716 (0.014396) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198415 / 0.018006 (0.180408) | 0.427940 / 0.000490 (0.427450) | 0.004236 / 
0.000200 (0.004036) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023759 / 0.037411 (-0.013652) | 0.073262 / 0.014526 (0.058736) | 0.677113 / 0.176557 (0.500557) | 0.194964 / 0.737135 (-0.542172) | 0.086121 / 0.296338 (-0.210217) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401176 / 0.215209 (0.185967) | 4.028688 / 2.077655 (1.951034) | 2.026804 / 1.504120 (0.522685) | 1.887964 / 1.541195 (0.346770) | 2.008991 / 1.468490 (0.540501) | 0.498847 / 4.584777 (-4.085930) | 3.015920 / 3.745712 (-0.729792) | 2.837019 / 5.269862 (-2.432843) | 1.849976 / 4.565676 (-2.715701) | 0.057545 / 0.424275 (-0.366730) | 0.006645 / 0.007607 (-0.000962) | 0.470225 / 0.226044 (0.244180) | 4.720910 / 2.268929 (2.451982) | 2.473693 / 55.444624 (-52.970931) | 2.177525 / 6.876477 (-4.698952) | 2.374702 / 2.142072 (0.232630) | 0.588253 / 4.805227 (-4.216974) | 0.125512 / 6.500664 (-6.375152) | 0.061247 / 0.075469 (-0.014222) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255829 / 1.841788 (-0.585959) | 18.251689 / 8.074308 (10.177381) | 13.690373 / 10.191392 (3.498981) | 0.146928 / 0.680424 (-0.533496) | 0.016534 / 0.534201 (-0.517667) | 0.335249 / 0.579283 (-0.244034) | 0.338940 / 0.434364 (-0.095424) | 0.382170 / 0.540337 (-0.158168) | 0.529570 / 1.386936 (-0.857366) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005920 / 0.011353 (-0.005433) | 0.003557 / 0.011008 (-0.007451) | 0.062776 / 0.038508 (0.024267) | 0.058473 / 0.023109 (0.035364) | 0.358780 / 0.275898 (0.082882) | 0.394161 / 0.323480 (0.070682) | 0.004636 / 0.007986 (-0.003349) | 0.002865 / 0.004328 (-0.001463) | 0.062033 / 0.004250 (0.057782) | 0.047154 / 0.037052 (0.010101) | 0.367718 / 0.258489 (0.109229) | 0.400814 / 0.293841 (0.106973) | 0.026919 / 0.128546 (-0.101628) | 0.008071 / 0.075646 (-0.067575) | 0.067802 / 0.419271 (-0.351469) | 0.040894 / 0.043533 (-0.002638) | 0.358757 / 0.255139 (0.103618) | 0.384971 / 0.283200 (0.101771) | 0.020019 / 0.141683 (-0.121664) | 1.458578 / 1.452155 (0.006423) | 1.525059 / 1.492716 (0.032342) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207795 / 0.018006 (0.189789) | 0.413201 / 0.000490 (0.412712) | 0.005199 / 0.000200 (0.004999) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025716 / 0.037411 (-0.011696) | 0.078434 / 0.014526 (0.063908) | 0.086920 / 0.176557 (-0.089637) | 0.138327 / 0.737135 (-0.598808) | 0.088120 / 0.296338 (-0.208219) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434344 / 0.215209 (0.219135) | 4.343114 / 2.077655 (2.265459) | 2.384439 / 1.504120 (0.880319) | 2.253929 / 1.541195 (0.712735) | 2.306811 / 1.468490 (0.838321) | 0.497572 / 4.584777 (-4.087205) | 3.028794 / 3.745712 (-0.716919) | 2.833484 / 5.269862 (-2.436377) | 1.878918 / 4.565676 (-2.686759) | 0.057133 / 0.424275 (-0.367143) | 0.006357 / 0.007607 (-0.001251) | 0.508019 / 0.226044 (0.281975) | 5.076935 / 2.268929 (2.808007) | 2.745784 / 55.444624 (-52.698841) | 2.476291 / 6.876477 (-4.400186) | 2.677264 / 2.142072 (0.535191) | 0.587173 / 4.805227 (-4.218054) | 0.126373 / 6.500664 (-6.374291) | 0.062815 / 0.075469 (-0.012654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.355482 / 1.841788 (-0.486305) | 18.818227 / 8.074308 (10.743919) | 13.954289 / 10.191392 (3.762896) | 0.143413 / 0.680424 (-0.537011) | 0.016844 / 0.534201 (-0.517357) | 0.338334 / 0.579283 (-0.240949) | 0.344559 / 0.434364 (-0.089805) | 0.400669 / 0.540337 (-0.139669) | 0.563835 / 1.386936 (-0.823101) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c02a44715c036b5261686669727394b1308a3a4b \"CML watermark\")\n" ]
2023-08-10T10:09:54
2023-08-10T12:08:38
2023-08-10T12:00:02
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6135", "html_url": "https://github.com/huggingface/datasets/pull/6135", "diff_url": "https://github.com/huggingface/datasets/pull/6135.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6135.patch", "merged_at": "2023-08-10T12:00:01" }
This PR removes the unused `allowed_extensions` parameter from `create_builder_configs_from_metadata_configs`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6135/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6134/comments
https://api.github.com/repos/huggingface/datasets/issues/6134/events
https://github.com/huggingface/datasets/issues/6134
1,844,535,142
I_kwDODunzps5t8V9m
6,134
`datasets` cannot be installed alongside `apache-beam`
{ "login": "boyleconnor", "id": 6520892, "node_id": "MDQ6VXNlcjY1MjA4OTI=", "avatar_url": "https://avatars.githubusercontent.com/u/6520892?v=4", "gravatar_id": "", "url": "https://api.github.com/users/boyleconnor", "html_url": "https://github.com/boyleconnor", "followers_url": "https://api.github.com/users/boyleconnor/followers", "following_url": "https://api.github.com/users/boyleconnor/following{/other_user}", "gists_url": "https://api.github.com/users/boyleconnor/gists{/gist_id}", "starred_url": "https://api.github.com/users/boyleconnor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/boyleconnor/subscriptions", "organizations_url": "https://api.github.com/users/boyleconnor/orgs", "repos_url": "https://api.github.com/users/boyleconnor/repos", "events_url": "https://api.github.com/users/boyleconnor/events{/privacy}", "received_events_url": "https://api.github.com/users/boyleconnor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I noticed that this is actually covered by issue #5613, which for some reason I didn't see when I searched the issues in this repo the first time." ]
2023-08-10T06:54:32
2023-09-01T03:19:49
2023-08-10T15:22:10
NONE
null
null
null
### Describe the bug If one installs `apache-beam` alongside `datasets` (which is required for the [wikipedia](https://huggingface.co/datasets/wikipedia#dataset-summary) dataset) in certain environments (such as a Google Colab notebook), they appear to install successfully; however, actually trying to do something such as importing the `load_dataset` method from `datasets` results in a crash. I think the problem is that `apache-beam` version 2.49.0 requires `dill>=0.3.1.1,<0.3.2`, but the latest version of `multiprocess` (0.70.15), on which `datasets` depends, requires `dill>=0.3.7`, so the dependency resolver falls back to an older version of `multiprocess`, which causes `datasets` to crash since it doesn't appear to be compatible with older versions. ### Steps to reproduce the bug See this [Google Colab notebook](https://colab.research.google.com/drive/1PTeGlshamFcJZix_GiS3vMXX_YzAhGv0?usp=sharing) to easily reproduce the bug. In some environments, I have been able to reproduce the bug by running the following in Bash: ```bash $ pip install datasets apache-beam ``` then the following in a Python shell: ```python from datasets import load_dataset ``` Here is my stacktrace from running on Google Colab: <details> <summary>stacktrace</summary> ``` [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 20 __version__ = "2.14.4" 21 ---> 22 from .arrow_dataset import Dataset 23 from .arrow_reader import ReadInstruction 24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 64 65 from . import config ---> 66 from .arrow_reader import ArrowReader 67 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 68 from .data_files import sanitize_patterns [/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module> 28 import pyarrow.parquet as pq 29 ---> 30 from .download.download_config import DownloadConfig 31 from .naming import _split_re, filenames_for_dataset_split 32 from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables [/usr/local/lib/python3.10/dist-packages/datasets/download/__init__.py](https://localhost:8080/#) in <module> 7 8 from .download_config import DownloadConfig ----> 9 from .download_manager import DownloadManager, DownloadMode 10 from .streaming_download_manager import StreamingDownloadManager [/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py](https://localhost:8080/#) in <module> 33 from ..utils.info_utils import get_size_checksum_dict 34 from ..utils.logging import get_logger, is_progress_bar_enabled, tqdm ---> 35 from ..utils.py_utils import NestedDataStructure, map_nested, size_str 36 from .download_config import DownloadConfig 37 [/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <module> 38 import dill 39 import multiprocess ---> 40 import multiprocess.pool 41 import numpy as np 42 from packaging import version [/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in <module> 607 # 608 --> 609 class ThreadPool(Pool): 610 611 from .dummy import Process [/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in ThreadPool() 609 class ThreadPool(Pool): 610 --> 611 from .dummy import Process 612 613 def __init__(self, processes=None, initializer=None, initargs=()): [/usr/local/lib/python3.10/dist-packages/multiprocess/dummy/__init__.py](https://localhost:8080/#) in <module> 85 # 86 ---> 87 class Condition(threading._Condition): 88 # XXX 89 if sys.version_info < (3, 0): AttributeError: module 'threading' has no attribute '_Condition' ``` </details> I've also found that attempting to install `datasets` and `apache-beam` in certain environments (e.g. via pip inside a conda env) simply causes pip to hang indefinitely. ### Expected behavior I would expect to be able to import methods from `datasets` without crashing. I have tested that this is possible as long as I do not attempt to install `apache-beam`. ### Environment info Google Colab
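A quick way to confirm the resolver's choices is to print the installed versions of the packages involved. A minimal diagnostic sketch (assuming Python 3.8+, where `importlib.metadata` is available; not part of the original report):

```python
# Diagnostic sketch: show which versions the pip resolver actually selected.
# The names below are the PyPI distribution names, not import names.
import importlib.metadata as md

for pkg in ("datasets", "apache-beam", "dill", "multiprocess"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```

If `multiprocess` reports a version older than 0.70.15 alongside `dill<0.3.2`, the resolver has likely downgraded it to satisfy `apache-beam`'s pin.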
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6134/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6134/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6133
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6133/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6133/comments
https://api.github.com/repos/huggingface/datasets/issues/6133/events
https://github.com/huggingface/datasets/issues/6133
1,844,511,519
I_kwDODunzps5t8QMf
6,133
Dataset is slower after calling `to_iterable_dataset`
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@lhoestq ", "It's roughly the same code between the two so we can expected roughly the same speed, could you share a benchmark ?" ]
2023-08-10T06:36:23
2023-08-16T09:18:54
null
CONTRIBUTOR
null
null
null
### Describe the bug Can anyone explain why looping over a dataset becomes slower after calling `to_iterable_dataset` to convert it to an `IterableDataset`? ### Steps to reproduce the bug Any dataset after converting to `IterableDataset` ### Expected behavior Maybe it should be faster on big datasets? I have only tested on a small dataset ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17 - Python version: 3.8.15 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
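A minimal timing sketch along these lines would make the comparison concrete (the column name and row count below are arbitrary, not from the original report):

```python
# Benchmark sketch: time a full pass over a Dataset vs. its IterableDataset view.
import time
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100_000))})
iterable_ds = ds.to_iterable_dataset()

start = time.perf_counter()
for _ in ds:
    pass
print(f"Dataset:         {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
for _ in iterable_ds:
    pass
print(f"IterableDataset: {time.perf_counter() - start:.3f}s")
```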
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6133/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6132/comments
https://api.github.com/repos/huggingface/datasets/issues/6132/events
https://github.com/huggingface/datasets/issues/6132
1,843,491,020
I_kwDODunzps5t4XDM
6,132
to_iterable_dataset is missing from the documentation
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Fixed with PR" ]
2023-08-09T15:15:03
2023-08-16T04:43:36
2023-08-16T04:43:29
CONTRIBUTOR
null
null
null
### Describe the bug `to_iterable_dataset` is missing from the documentation ### Steps to reproduce the bug `to_iterable_dataset` is missing from the documentation ### Expected behavior documentation enhancement ### Environment info unrelated
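For context, a short usage sketch of the method the docs should cover (toy data; the column name is made up):

```python
# to_iterable_dataset converts a map-style Dataset into an IterableDataset.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
iterable_ds = ds.to_iterable_dataset(num_shards=1)  # num_shards controls sharding for parallel loading
for example in iterable_ds:
    print(example)  # {"text": "a"}, {"text": "b"}, {"text": "c"}
```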
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6132/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6130
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6130/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6130/comments
https://api.github.com/repos/huggingface/datasets/issues/6130/events
https://github.com/huggingface/datasets/issues/6130
1,843,158,846
I_kwDODunzps5t3F8-
6,130
default config name doesn't work when config kwargs are specified.
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@lhoestq ", "What should be the behavior in this case ? Should it override the default config with the added parameter ?", "I know why it should be treated as a new config if overriding parameters are passed. But in some case, I just pass in some common fields like `data_dir`.\r\n\r\nFor example, I want to extend the FolderBasedBuilder as a multi-config version, the `data_dir` or `data_files` are always passed by user and should not be considered as overriding the default config. In current state, I cannot leverage the feature of default config since passing `data_dir` will disable the default config.", "Thinking more about it I think the current behavior is the right one.\r\n\r\nProvided parameters should be passed to instantiate a new BuilderConfig.\r\n\r\nWhat's the error you're getting ?", "For example, this works to use default config with name '_all_':\r\n```python\r\ndatasets.load_dataset(\"indonesian-nlp/librivox-indonesia\", split=\"train\")\r\n```\r\nwhile this failed to use default config\r\n```python\r\ndatasets.load_dataset(\"indonesian-nlp/librivox-indonesia\", split=\"train\", data_dir='.')\r\n```\r\nAfter manually specifying it, it works again.\r\n```python\r\ndatasets.load_dataset(\"indonesian-nlp/librivox-indonesia\", \"_all_\", split=\"train\", data_dir='.')\r\n```", "@lhoestq ", "It should work if you explicitly ask for the config you want to override\r\n\r\n```python\r\nload_dataset('/dataset/with/multiple/config', 'name_of_the_default_config', some_field_in_config='some')\r\n```\r\n\r\nAlternatively you can have a BuilderConfig class that when instantiated returns a config with the right default values. In this case this code would instantiate this config with the default values except for the parameter to override:\r\n\r\n```python\r\nload_dataset('/dataset/with/multiple/config', some_field_in_config='some')\r\n```", "@lhoestq Yes. But it doesn't work for me.\r\n\r\nHere's my dataset for example.\r\n```\r\nlass MyDatasetConfig(datasets.BuilderConfig):\r\n def __init__(self, name: str, version: str, **kwargs):\r\n self.option1 = kwargs.pop(\"option1\", False)\r\n self.option2 = kwargs.pop(\"option2\", 5)\r\n\r\n super().__init__(\r\n name=name,\r\n version=datasets.Version(version),\r\n **kwargs)\r\n\r\n\r\nclass MyDataset(datasets.GeneratorBasedBuilder):\r\n DEFAULT_CONFIG_NAME = \"v1\"\r\n\r\n BUILDER_CONFIGS = [\r\n UnifiedTtsDatasetConfig(\r\n name=\"v1\",\r\n version=\"1.0.0\",\r\n description=\"Initial version of the dataset\"\r\n ),\r\n ]\r\n\r\n def _info(self) -> DatasetInfo:\r\n _ = self.option1\r\n ....\r\n```\r\n\r\nHere it's okay to use `load_dataset('my_dataset.py')` for loading the default config `v1`.\r\n\r\nBut if I want to override the default values in config with `load_dataset('my_dataset.py', option2=3)`, it failed to find my default config `v1.\r\n\r\nUnless I use `load_dataset('my_dataset.py', 'v1', option2=3)`\r\n\r\nSo according to your advice, how can I modify my dataset to be able to override default config without manually specifying it.", "What's the error ? 
It should try to instantiate `MyDatasetConfig` with `option2=3`", "@lhoestq The error is\r\n```\r\ndef _info(self) -> DatasetInfo:\r\n _ = self.option1 <-\r\n ....\r\nAttributeError: 'BuilderConfig' object has no attribute 'option1'\r\n```\r\nwhich seems to find another unknown config.\r\n\r\nYou can try this line `datasets.load_dataset(\"indonesian-nlp/librivox-indonesia\", split=\"train\", data_dir='.')`, it's a multi-config dataset on HF hub and the error is the same.\r\n\r\nMy insights:\r\nhttps://github.com/huggingface/datasets/blob/12cfc1196e62847e2e8239fbd727a02cbc86ddec/src/datasets/builder.py#L518\r\nif `config_kwargs` is provided here, the if branch is skipped.", "I see, you just have to set this class attribute to your builder class :)\r\n\r\n```python\r\nBUILDER_CONFIG_CLASS = MyDatasetConfig\r\n```", "So what does this attribute do? In most cases it's not used and the [documents for multi-config dataset](https://huggingface.co/docs/datasets/main/en/image_dataset#multiple-configurations) never mentioned that.", "It tells which builder config class to instantiate if additional config parameters are passed to load_dataset", "@lhoestq maybe we can enhance the document to say something about the common attributes of `DatasetBuilder`", "Ah indeed it's missing in the docs, thanks for reporting. I'm opening a PR" ]
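Putting the resolution from this thread together, a condensed sketch of a builder that accepts overriding kwargs without naming the config (class and option names mirror the example above; the generator bodies are placeholder stubs, not a definitive implementation):

```python
import datasets

class MyDatasetConfig(datasets.BuilderConfig):
    def __init__(self, option1=False, option2=5, **kwargs):
        super().__init__(**kwargs)
        self.option1 = option1
        self.option2 = option2

class MyDataset(datasets.GeneratorBasedBuilder):
    # Key line: tells load_dataset which config class to instantiate
    # when only config_kwargs (e.g. option2=3) are passed.
    BUILDER_CONFIG_CLASS = MyDatasetConfig
    DEFAULT_CONFIG_NAME = "v1"
    BUILDER_CONFIGS = [
        MyDatasetConfig(name="v1", version=datasets.Version("1.0.0")),
    ]

    def _info(self):
        return datasets.DatasetInfo(description=f"option1={self.config.option1}")

    def _split_generators(self, dl_manager):
        return []  # placeholder stub

    def _generate_examples(self):
        yield from ()  # placeholder stub
```

With this in place, `load_dataset('my_dataset.py', option2=3)` should build a `MyDatasetConfig` with `option2=3` and the remaining defaults.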
2023-08-09T12:43:15
2023-08-22T10:03:41
null
CONTRIBUTOR
null
null
null
### Describe the bug https://github.com/huggingface/datasets/blob/12cfc1196e62847e2e8239fbd727a02cbc86ddec/src/datasets/builder.py#L518-L522 If `config_name` is `None`, `DEFAULT_CONFIG_NAME` should be selected. But once users pass `config_kwargs` to their customized `BuilderConfig`, the logic is ignored, and the dataset cannot select the default config from multiple configs. ### Steps to reproduce the bug ```python import datasets datasets.load_dataset('/dataset/with/multiple/config') # Ok datasets.load_dataset('/dataset/with/multiple/config', some_field_in_config='some') # Err ``` ### Expected behavior Default config behavior should be consistent. ### Environment info - `datasets` version: 2.14.3 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17 - Python version: 3.8.15 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6130/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6130/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6129/comments
https://api.github.com/repos/huggingface/datasets/issues/6129/events
https://github.com/huggingface/datasets/pull/6129
1,841,563,517
PR_kwDODunzps5Xcmuw
6,129
Release 2.14.4
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006053 / 0.011353 (-0.005299) | 0.003532 / 0.011008 (-0.007476) | 0.081930 / 0.038508 (0.043422) | 0.059043 / 0.023109 (0.035934) | 0.322785 / 0.275898 (0.046887) | 0.378158 / 0.323480 (0.054678) | 0.004709 / 0.007986 (-0.003277) | 0.002907 / 0.004328 (-0.001421) | 0.061516 / 0.004250 (0.057266) | 0.047209 / 0.037052 (0.010157) | 0.346885 / 0.258489 (0.088396) | 0.381011 / 0.293841 (0.087170) | 0.027491 / 0.128546 (-0.101055) | 0.008014 / 0.075646 (-0.067632) | 0.260663 / 0.419271 (-0.158608) | 0.045427 / 0.043533 (0.001894) | 0.315277 / 0.255139 (0.060138) | 0.377902 / 0.283200 (0.094703) | 0.021371 / 0.141683 (-0.120311) | 1.416350 / 1.452155 (-0.035804) | 1.483345 / 1.492716 (-0.009372) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203660 / 0.018006 (0.185654) | 0.569081 / 0.000490 (0.568591) | 0.002742 / 0.000200 (0.002542) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023456 / 0.037411 (-0.013955) | 0.073954 / 0.014526 (0.059428) | 0.082991 / 0.176557 (-0.093566) | 0.144781 / 0.737135 (-0.592354) | 0.083346 / 0.296338 (-0.212992) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.391542 / 0.215209 (0.176333) | 3.909505 / 2.077655 (1.831850) | 
1.862234 / 1.504120 (0.358114) | 1.676076 / 1.541195 (0.134881) | 1.727595 / 1.468490 (0.259105) | 0.501769 / 4.584777 (-4.083008) | 3.083697 / 3.745712 (-0.662016) | 2.819751 / 5.269862 (-2.450111) | 1.867265 / 4.565676 (-2.698411) | 0.057575 / 0.424275 (-0.366700) | 0.006478 / 0.007607 (-0.001129) | 0.466684 / 0.226044 (0.240640) | 4.657982 / 2.268929 (2.389054) | 2.347052 / 55.444624 (-53.097573) | 1.964688 / 6.876477 (-4.911789) | 2.077821 / 2.142072 (-0.064252) | 0.590591 / 4.805227 (-4.214636) | 0.124585 / 6.500664 (-6.376079) | 0.059468 / 0.075469 (-0.016001) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223484 / 1.841788 (-0.618304) | 18.104638 / 8.074308 (10.030330) | 13.755126 / 10.191392 (3.563734) | 0.143158 / 0.680424 (-0.537266) | 0.017147 / 0.534201 (-0.517054) | 0.337427 / 0.579283 (-0.241856) | 0.352270 / 0.434364 (-0.082094) | 0.383718 / 0.540337 (-0.156619) | 0.534973 / 1.386936 (-0.851963) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006039 / 0.011353 (-0.005314) | 0.003735 / 0.011008 (-0.007274) | 0.061954 / 0.038508 (0.023446) | 0.061786 / 0.023109 (0.038677) | 0.429420 / 0.275898 (0.153522) | 0.457629 / 0.323480 (0.134149) | 0.004748 / 0.007986 (-0.003237) | 0.002843 / 0.004328 (-0.001485) | 0.061811 / 0.004250 (0.057560) | 0.048740 / 0.037052 (0.011687) | 0.430066 / 0.258489 (0.171577) | 0.465971 / 0.293841 (0.172130) | 0.027577 / 0.128546 (-0.100969) | 0.007981 / 0.075646 (-0.067665) | 0.067580 / 0.419271 (-0.351692) | 0.042058 / 0.043533 (-0.001475) | 0.428412 / 0.255139 (0.173273) | 0.451054 / 0.283200 (0.167855) | 0.020850 / 0.141683 (-0.120833) | 1.453907 / 1.452155 (0.001752) | 1.509914 / 1.492716 (0.017197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237713 / 0.018006 (0.219707) | 0.418064 / 0.000490 (0.417575) | 0.006411 / 0.000200 (0.006211) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024950 / 0.037411 (-0.012462) | 0.076806 / 0.014526 (0.062281) | 0.085237 / 0.176557 (-0.091320) | 0.137940 / 0.737135 (-0.599196) | 0.086266 / 0.296338 (-0.210072) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418666 / 0.215209 (0.203457) | 4.160547 / 2.077655 (2.082893) | 2.135671 / 1.504120 (0.631551) | 1.964985 / 1.541195 (0.423790) | 2.009447 / 1.468490 (0.540957) | 0.501377 / 4.584777 (-4.083400) | 3.064293 / 3.745712 (-0.681419) | 2.827153 / 5.269862 (-2.442709) | 1.854698 / 4.565676 (-2.710978) | 0.057662 / 0.424275 (-0.366613) | 0.006829 / 0.007607 (-0.000778) | 0.496730 / 0.226044 (0.270686) | 4.964663 / 2.268929 (2.695735) | 2.583133 / 55.444624 (-52.861491) | 2.329700 / 6.876477 (-4.546776) | 2.415521 / 2.142072 (0.273449) | 0.591973 / 4.805227 (-4.213255) | 0.126801 / 6.500664 (-6.373863) | 0.062811 / 0.075469 (-0.012659) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.348575 / 1.841788 (-0.493212) | 18.282861 / 8.074308 (10.208553) | 13.734056 / 10.191392 (3.542664) | 0.154987 / 0.680424 (-0.525437) | 0.016996 / 0.534201 (-0.517205) | 0.335264 / 0.579283 (-0.244019) | 0.356907 / 0.434364 (-0.077456) | 0.399185 / 0.540337 (-0.141152) | 0.540209 / 1.386936 (-0.846727) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#887bef1217e0f4441d57bf0f4d1e806df12f2c50 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006768 / 0.011353 (-0.004585) | 0.004250 / 0.011008 (-0.006758) | 0.086780 / 0.038508 (0.048272) | 0.080872 / 0.023109 (0.057762) | 0.309281 / 0.275898 (0.033383) | 0.352293 / 0.323480 (0.028814) | 0.005604 / 0.007986 (-0.002382) | 0.003544 / 0.004328 (-0.000784) | 0.066910 / 0.004250 (0.062659) | 0.055568 / 0.037052 (0.018516) | 0.314931 / 0.258489 (0.056442) | 0.366026 / 0.293841 (0.072185) | 0.031247 / 0.128546 (-0.097300) | 0.008860 / 0.075646 (-0.066786) | 0.293210 / 0.419271 (-0.126061) | 0.052868 / 0.043533 (0.009335) | 0.316769 / 0.255139 (0.061630) | 0.352128 / 0.283200 (0.068929) | 0.025492 / 0.141683 (-0.116190) | 1.478379 / 1.452155 (0.026224) | 1.573652 / 1.492716 (0.080936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294975 / 0.018006 (0.276968) | 0.615093 / 0.000490 (0.614603) | 0.004279 / 0.000200 (0.004079) | 0.000102 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031557 / 0.037411 (-0.005855) | 0.085026 / 0.014526 (0.070500) | 0.101221 / 0.176557 (-0.075336) | 0.157432 / 0.737135 (-0.579703) | 0.102350 / 0.296338 (-0.193988) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384158 / 0.215209 (0.168949) | 3.826656 / 2.077655 (1.749001) | 1.873510 / 1.504120 (0.369390) | 1.721913 / 1.541195 (0.180718) | 1.848779 / 1.468490 (0.380289) | 0.485128 / 4.584777 (-4.099649) | 3.656660 / 3.745712 (-0.089052) | 3.441964 / 5.269862 (-1.827898) | 2.150611 / 4.565676 (-2.415066) | 0.056869 / 0.424275 (-0.367406) | 0.007382 / 0.007607 (-0.000225) | 0.458751 / 0.226044 (0.232707) | 4.585028 / 2.268929 (2.316099) | 2.439538 / 55.444624 (-53.005086) | 2.116959 / 6.876477 (-4.759518) | 2.459220 / 2.142072 (0.317147) | 0.580907 / 4.805227 (-4.224321) | 0.134502 / 6.500664 (-6.366162) | 0.062528 / 0.075469 (-0.012941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251006 / 1.841788 (-0.590782) | 20.755849 / 8.074308 (12.681541) | 14.456950 / 10.191392 (4.265558) | 0.167074 / 0.680424 (-0.513350) | 0.018482 / 0.534201 (-0.515719) | 0.395867 / 0.579283 (-0.183416) | 0.415620 / 0.434364 (-0.018744) | 0.462247 / 0.540337 
(-0.078090) | 0.645762 / 1.386936 (-0.741174) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007050 / 0.011353 (-0.004303) | 0.004421 / 0.011008 (-0.006587) | 0.065312 / 0.038508 (0.026804) | 0.089790 / 0.023109 (0.066681) | 0.366318 / 0.275898 (0.090420) | 0.403542 / 0.323480 (0.080062) | 0.005695 / 0.007986 (-0.002290) | 0.003642 / 0.004328 (-0.000687) | 0.064540 / 0.004250 (0.060289) | 0.060933 / 0.037052 (0.023881) | 0.369004 / 0.258489 (0.110515) | 0.408056 / 0.293841 (0.114215) | 0.032124 / 0.128546 (-0.096422) | 0.008960 / 0.075646 (-0.066686) | 0.071267 / 0.419271 (-0.348005) | 0.049745 / 0.043533 (0.006212) | 0.367203 / 0.255139 (0.112064) | 0.383009 / 0.283200 (0.099809) | 0.025330 / 0.141683 (-0.116353) | 1.518290 / 1.452155 (0.066135) | 1.581738 / 1.492716 (0.089022) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.338281 / 0.018006 (0.320275) | 0.538195 / 0.000490 (0.537706) | 0.008498 / 0.000200 (0.008298) | 0.000121 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033279 / 0.037411 (-0.004133) | 0.093233 / 0.014526 (0.078707) | 0.106019 / 0.176557 (-0.070538) | 0.161262 / 0.737135 (-0.575874) | 0.109935 / 0.296338 (-0.186404) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411563 / 0.215209 (0.196354) | 4.102149 / 2.077655 (2.024495) | 2.108513 / 1.504120 (0.604393) | 1.945344 / 1.541195 (0.404150) | 2.066964 
/ 1.468490 (0.598474) | 0.482771 / 4.584777 (-4.102006) | 3.659160 / 3.745712 (-0.086552) | 3.420833 / 5.269862 (-1.849029) | 2.147276 / 4.565676 (-2.418400) | 0.056957 / 0.424275 (-0.367318) | 0.007898 / 0.007607 (0.000290) | 0.482401 / 0.226044 (0.256357) | 4.821044 / 2.268929 (2.552115) | 2.567993 / 55.444624 (-52.876631) | 2.336165 / 6.876477 (-4.540312) | 2.545066 / 2.142072 (0.402994) | 0.580888 / 4.805227 (-4.224339) | 0.134092 / 6.500664 (-6.366572) | 0.062681 / 0.075469 (-0.012788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.379124 / 1.841788 (-0.462664) | 21.627949 / 8.074308 (13.553641) | 15.064818 / 10.191392 (4.873426) | 0.169707 / 0.680424 (-0.510716) | 0.018671 / 0.534201 (-0.515530) | 0.400496 / 0.579283 (-0.178787) | 0.415542 / 0.434364 (-0.018822) | 0.484351 / 0.540337 (-0.055986) | 0.646046 / 1.386936 (-0.740890) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007113 / 0.011353 (-0.004240) | 0.004436 / 0.011008 (-0.006572) | 0.087422 / 0.038508 (0.048914) | 0.085996 / 0.023109 (0.062887) | 0.311772 / 0.275898 (0.035873) | 0.353281 / 0.323480 (0.029801) | 0.004562 / 0.007986 (-0.003423) | 0.003840 / 0.004328 (-0.000488) | 0.066500 / 0.004250 (0.062250) | 0.061293 / 0.037052 (0.024241) | 0.328840 / 0.258489 (0.070351) | 0.365587 / 0.293841 (0.071746) | 0.031802 / 0.128546 (-0.096744) | 0.008881 / 0.075646 (-0.066765) | 0.289671 / 0.419271 (-0.129601) | 0.053348 / 0.043533 (0.009816) | 0.307822 / 0.255139 (0.052683) | 0.342559 / 0.283200 (0.059360) | 0.025760 / 0.141683 (-0.115923) | 1.509944 / 1.452155 (0.057789) | 1.556634 / 1.492716 (0.063918) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282036 / 0.018006 (0.264029) | 0.608350 / 0.000490 (0.607860) | 0.004843 / 
0.000200 (0.004643) | 0.000108 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029810 / 0.037411 (-0.007601) | 0.086215 / 0.014526 (0.071689) | 0.102200 / 0.176557 (-0.074356) | 0.158051 / 0.737135 (-0.579084) | 0.103083 / 0.296338 (-0.193255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392119 / 0.215209 (0.176910) | 3.895796 / 2.077655 (1.818141) | 1.921118 / 1.504120 (0.416998) | 1.754271 / 1.541195 (0.213076) | 1.880991 / 1.468490 (0.412501) | 0.481158 / 4.584777 (-4.103618) | 3.609210 / 3.745712 (-0.136502) | 3.412018 / 5.269862 (-1.857843) | 2.131710 / 4.565676 (-2.433967) | 0.057122 / 0.424275 (-0.367153) | 0.007444 / 0.007607 (-0.000163) | 0.468880 / 0.226044 (0.242835) | 4.682441 / 2.268929 (2.413512) | 2.505613 / 55.444624 (-52.939012) | 2.149655 / 6.876477 (-4.726822) | 2.465904 / 2.142072 (0.323832) | 0.578877 / 4.805227 (-4.226350) | 0.133504 / 6.500664 (-6.367160) | 0.061422 / 0.075469 (-0.014047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.269395 / 1.841788 (-0.572393) | 21.107558 / 8.074308 (13.033250) | 15.318502 / 10.191392 (5.127110) | 0.165273 / 0.680424 (-0.515151) | 0.018783 / 0.534201 (-0.515418) | 0.396259 / 0.579283 (-0.183024) | 0.412907 / 0.434364 (-0.021457) | 0.465723 / 0.540337 (-0.074615) | 0.638414 / 1.386936 (-0.748522) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007083 / 0.011353 (-0.004270) | 0.004216 / 0.011008 (-0.006793) | 0.065362 / 0.038508 (0.026854) | 0.095454 / 0.023109 (0.072345) | 0.364220 / 0.275898 (0.088322) | 0.417650 / 0.323480 (0.094170) | 0.006114 / 0.007986 (-0.001872) | 0.003577 / 0.004328 (-0.000751) | 0.064830 / 0.004250 (0.060579) | 0.062535 / 0.037052 (0.025483) | 0.381844 / 0.258489 (0.123355) | 0.418996 / 0.293841 (0.125155) | 0.031386 / 0.128546 (-0.097160) | 0.008913 / 0.075646 (-0.066733) | 0.070860 / 0.419271 (-0.348411) | 0.049132 / 0.043533 (0.005599) | 0.360406 / 0.255139 (0.105267) | 0.392407 / 0.283200 (0.109207) | 0.024611 / 0.141683 (-0.117072) | 1.509051 / 1.452155 (0.056896) | 1.570288 / 1.492716 (0.077572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.368611 / 0.018006 (0.350605) | 0.537587 / 0.000490 (0.537098) | 0.028056 / 0.000200 (0.027856) | 0.000317 / 0.000054 (0.000262) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031570 / 0.037411 (-0.005841) | 0.088985 / 0.014526 (0.074460) | 0.105268 / 0.176557 (-0.071288) | 0.156724 / 0.737135 (-0.580412) | 0.105266 / 0.296338 (-0.191073) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413861 / 0.215209 (0.198652) | 4.127001 / 2.077655 (2.049347) | 2.112114 / 1.504120 (0.607994) | 1.945200 / 1.541195 (0.404005) | 2.083031 / 1.468490 (0.614540) | 0.488086 / 4.584777 (-4.096691) | 3.565584 / 3.745712 (-0.180128) | 3.380782 / 5.269862 (-1.889079) | 2.103481 / 4.565676 (-2.462195) | 0.058203 / 0.424275 (-0.366072) | 0.007996 / 0.007607 (0.000389) | 0.487986 / 0.226044 (0.261941) | 4.871023 / 2.268929 (2.602095) | 2.584632 / 55.444624 (-52.859992) | 2.240103 / 6.876477 (-4.636374) | 2.555165 / 2.142072 (0.413092) | 0.591950 / 4.805227 (-4.213278) | 0.134919 / 6.500664 (-6.365745) | 0.062868 / 0.075469 (-0.012601) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369731 / 1.841788 (-0.472057) | 21.497888 / 8.074308 (13.423580) | 14.555054 / 10.191392 (4.363662) | 0.168768 / 0.680424 (-0.511656) | 0.018837 / 0.534201 (-0.515364) | 0.394512 / 0.579283 (-0.184771) | 0.405459 / 0.434364 (-0.028905) | 0.475479 / 0.540337 (-0.064858) | 0.631994 / 1.386936 (-0.754942) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009072 / 0.011353 (-0.002280) | 0.004894 / 0.011008 (-0.006114) | 0.108790 / 0.038508 (0.070282) | 0.081783 / 0.023109 (0.058674) | 0.381963 / 0.275898 (0.106064) | 0.450700 / 0.323480 (0.127220) | 0.006961 / 0.007986 (-0.001025) | 0.004035 / 0.004328 (-0.000293) | 0.081420 / 0.004250 (0.077169) | 0.058029 / 0.037052 (0.020976) | 0.437453 / 0.258489 (0.178964) | 0.472607 / 0.293841 (0.178766) | 0.048663 / 0.128546 (-0.079884) | 0.013512 / 0.075646 (-0.062134) | 0.406009 / 0.419271 (-0.013262) | 0.067616 / 0.043533 (0.024084) | 0.383641 / 0.255139 (0.128502) | 0.456734 / 0.283200 (0.173534) | 0.033391 / 0.141683 (-0.108292) | 1.753529 / 1.452155 (0.301375) | 1.859831 / 1.492716 (0.367115) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215128 / 0.018006 (0.197122) | 0.538261 / 0.000490 (0.537771) | 0.005430 / 0.000200 (0.005230) | 0.000124 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032664 / 0.037411 (-0.004748) | 0.093465 / 0.014526 (0.078939) | 0.106637 / 0.176557 (-0.069919) | 0.173642 / 0.737135 (-0.563494) | 0.113944 / 0.296338 (-0.182394) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629212 / 0.215209 
(0.414003) | 6.116729 / 2.077655 (4.039075) | 2.818000 / 1.504120 (1.313880) | 2.515317 / 1.541195 (0.974122) | 2.466588 / 1.468490 (0.998098) | 0.850815 / 4.584777 (-3.733962) | 5.051292 / 3.745712 (1.305579) | 4.472138 / 5.269862 (-0.797724) | 2.968317 / 4.565676 (-1.597360) | 0.100173 / 0.424275 (-0.324102) | 0.008407 / 0.007607 (0.000800) | 0.743972 / 0.226044 (0.517928) | 7.397619 / 2.268929 (5.128690) | 3.596681 / 55.444624 (-51.847943) | 2.854674 / 6.876477 (-4.021803) | 3.114274 / 2.142072 (0.972201) | 1.064879 / 4.805227 (-3.740348) | 0.215981 / 6.500664 (-6.284683) | 0.078159 / 0.075469 (0.002690) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.543291 / 1.841788 (-0.298497) | 23.244641 / 8.074308 (15.170333) | 20.784610 / 10.191392 (10.593218) | 0.222002 / 0.680424 (-0.458422) | 0.028584 / 0.534201 (-0.505617) | 0.478563 / 0.579283 (-0.100720) | 0.556101 / 0.434364 (0.121737) | 0.547446 / 0.540337 (0.007109) | 0.764318 / 1.386936 (-0.622618) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008651 / 0.011353 (-0.002702) | 0.004925 / 0.011008 (-0.006083) | 0.078995 / 0.038508 (0.040487) | 0.092878 / 0.023109 (0.069769) | 0.485615 / 0.275898 (0.209717) | 0.532157 / 0.323480 (0.208677) | 0.008228 / 0.007986 (0.000243) | 0.004777 / 0.004328 (0.000449) | 0.076892 / 0.004250 (0.072642) | 0.066905 / 0.037052 (0.029853) | 0.465497 / 0.258489 (0.207008) | 0.520153 / 0.293841 (0.226312) | 0.047357 / 0.128546 (-0.081189) | 0.016870 / 0.075646 (-0.058776) | 0.090481 / 0.419271 (-0.328791) | 0.060774 / 0.043533 (0.017241) | 0.474368 / 0.255139 (0.219229) | 0.503981 / 0.283200 (0.220781) | 0.036025 / 0.141683 (-0.105658) | 1.769939 / 1.452155 (0.317784) | 1.851518 / 1.492716 (0.358802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265947 / 0.018006 (0.247941) | 0.532317 / 0.000490 (0.531828) | 0.004997 / 0.000200 (0.004797) | 0.000130 / 0.000054 
(0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034112 / 0.037411 (-0.003299) | 0.102290 / 0.014526 (0.087764) | 0.109989 / 0.176557 (-0.066567) | 0.182813 / 0.737135 (-0.554323) | 0.111774 / 0.296338 (-0.184565) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584893 / 0.215209 (0.369684) | 6.138505 / 2.077655 (4.060850) | 2.925761 / 1.504120 (1.421641) | 2.607320 / 1.541195 (1.066125) | 2.655827 / 1.468490 (1.187337) | 0.871140 / 4.584777 (-3.713637) | 5.051171 / 3.745712 (1.305459) | 4.708008 / 5.269862 (-0.561854) | 3.027485 / 4.565676 (-1.538191) | 0.100970 / 0.424275 (-0.323305) | 0.009640 / 0.007607 (0.002033) | 0.747818 / 0.226044 (0.521774) | 7.539930 / 2.268929 (5.271001) | 3.611693 / 55.444624 (-51.832931) | 2.924087 / 6.876477 (-3.952390) | 3.141993 / 2.142072 (0.999920) | 1.062921 / 4.805227 (-3.742306) | 0.213185 / 6.500664 (-6.287479) | 0.077146 / 0.075469 (0.001677) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.669182 / 1.841788 (-0.172606) | 23.810242 / 8.074308 (15.735934) | 21.220649 / 10.191392 (11.029257) | 0.212639 / 0.680424 (-0.467785) | 0.026705 / 0.534201 (-0.507496) | 0.469231 / 0.579283 (-0.110053) | 0.551672 / 0.434364 (0.117308) | 0.575043 / 0.540337 (0.034706) | 0.767511 / 1.386936 (-0.619425) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n" ]
2023-08-08T15:43:56
2023-08-08T16:08:22
2023-08-08T15:49:06
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6129", "html_url": "https://github.com/huggingface/datasets/pull/6129", "diff_url": "https://github.com/huggingface/datasets/pull/6129.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6129.patch", "merged_at": "2023-08-08T15:49:06" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6129/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6128/comments
https://api.github.com/repos/huggingface/datasets/issues/6128/events
https://github.com/huggingface/datasets/issues/6128
1,841,545,493
I_kwDODunzps5tw8EV
6,128
IndexError: Invalid key: 88 is out of bounds for size 0
{ "login": "TomasAndersonFang", "id": 38727343, "node_id": "MDQ6VXNlcjM4NzI3MzQz", "avatar_url": "https://avatars.githubusercontent.com/u/38727343?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TomasAndersonFang", "html_url": "https://github.com/TomasAndersonFang", "followers_url": "https://api.github.com/users/TomasAndersonFang/followers", "following_url": "https://api.github.com/users/TomasAndersonFang/following{/other_user}", "gists_url": "https://api.github.com/users/TomasAndersonFang/gists{/gist_id}", "starred_url": "https://api.github.com/users/TomasAndersonFang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TomasAndersonFang/subscriptions", "organizations_url": "https://api.github.com/users/TomasAndersonFang/orgs", "repos_url": "https://api.github.com/users/TomasAndersonFang/repos", "events_url": "https://api.github.com/users/TomasAndersonFang/events{/privacy}", "received_events_url": "https://api.github.com/users/TomasAndersonFang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @TomasAndersonFang,\r\n\r\nHave you tried instead to use `torch_compile` in `transformers.TrainingArguments`? https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.torch_compile", "> \r\n\r\nI tried this and got the following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 324, in _compile\r\n out_code = transform_code_object(code, transform)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py\", line 445, in transform_code_object\r\n transformations(instructions, code_options)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 311, in transform\r\n tracer.run()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 1726, in run\r\n super().run()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 576, in run\r\n and self.step()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 540, in step\r\n getattr(self, inst.opname)(inst)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 1030, in LOAD_ATTR\r\n result = BuiltinVariable(getattr).call_function(\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py\", line 566, in call_function\r\n result = handler(tx, *args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py\", line 931, in call_getattr\r\n return obj.var_getattr(tx, name).add_options(options)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py\", line 124, in var_getattr\r\n subobj = inspect.getattr_static(base, name)\r\n File \"/apps/Arch/software/Python/3.10.8-GCCcore-12.2.0/lib/python3.10/inspect.py\", line 1777, in getattr_static\r\n raise AttributeError(attr)\r\nAttributeError: config\r\n\r\nfrom user code:\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/peft/peft_model.py\", line 909, in forward\r\n if self.base_model.config.model_type == \"mpt\":\r\n\r\nSet torch._dynamo.config.verbose=True for more information\r\n\r\n\r\nYou can suppress this exception and fall back to eager by setting:\r\n torch._dynamo.config.suppress_errors = True\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/llm-copt/fine-tune/falcon/falcon_sft.py\", line 228, in <module>\r\n main()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/llm-copt/fine-tune/falcon/falcon_sft.py\", line 221, in main\r\n trainer.train()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 1539, in train\r\n return inner_training_loop(\r\n File 
\"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 1809, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 2654, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 2679, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 82, in forward\r\n return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 209, in _fn\r\n return fn(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 581, in forward\r\n return model_forward(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 569, in __call__\r\n return convert_to_fp32(self.model_forward(*args, **kwargs))\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/amp/autocast_mode.py\", line 14, in decorate_autocast\r\n return func(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 337, in catch_errors\r\n return callback(frame, cache_size, hooks)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 404, in _convert_frame\r\n result = inner_convert(frame, cache_size, hooks)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 104, in _fn\r\n return fn(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 262, in _convert_frame_assert\r\n return _compile(\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/utils.py\", line 163, in time_wrapper\r\n r = func(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 394, in _compile\r\n raise InternalTorchDynamoError() from e\r\ntorch._dynamo.exc.InternalTorchDynamoError\r\n```", "Hi @TomasAndersonFang,\r\n\r\nI guess in this case it may be an issue with `transformers` (or `PyTorch`). I would recommend you open an issue on their repo.", "@albertvillanova Thanks for your recommendation. I'll do it" ]
2023-08-08T15:32:08
2023-08-11T13:35:09
2023-08-11T13:35:09
NONE
null
null
null
### Describe the bug

This bug occurs when I use `torch.compile(model)` in my code, which seems to raise an error in the `datasets` lib.

### Steps to reproduce the bug

I use the following code to fine-tune Falcon on my private dataset.

```python
import transformers
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    AutoConfig,
    DataCollatorForSeq2Seq,
    Trainer,
    Seq2SeqTrainer,
    HfArgumentParser,
    Seq2SeqTrainingArguments,
    BitsAndBytesConfig,
)
from peft import (
    LoraConfig,
    get_peft_model,
    get_peft_model_state_dict,
    prepare_model_for_int8_training,
    set_peft_model_state_dict,
)
import torch
import os
import evaluate
import functools
from datasets import load_dataset
import bitsandbytes as bnb
import logging
import json
import copy
from typing import Dict, Optional, Sequence
from dataclasses import dataclass, field

# Lora settings
LORA_R = 8
LORA_ALPHA = 16
LORA_DROPOUT = 0.05
LORA_TARGET_MODULES = ["query_key_value"]


@dataclass
class ModelArguments:
    model_name_or_path: Optional[str] = field(default="Salesforce/codegen2-7B")


@dataclass
class DataArguments:
    data_path: str = field(default=None, metadata={"help": "Path to the training data."})
    train_file: str = field(default=None, metadata={"help": "Path to the training data file."})
    eval_file: str = field(default=None, metadata={"help": "Path to the evaluation data."})
    cache_path: str = field(default=None, metadata={"help": "Path to the cache directory."})
    num_proc: int = field(default=4, metadata={"help": "Number of processes to use for data preprocessing."})


@dataclass
class TrainingArguments(transformers.TrainingArguments):
    # cache_dir: Optional[str] = field(default=None)
    optim: str = field(default="adamw_torch")
    model_max_length: int = field(
        default=512,
        metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."},
    )
    is_lora: bool = field(default=True, metadata={"help": "Whether to use LORA."})


def tokenize(text, tokenizer, max_seq_len=512, add_eos_token=True):
    result = tokenizer(
        text,
        truncation=True,
        max_length=max_seq_len,
        padding=False,
        return_tensors=None,
    )
    if (
        result["input_ids"][-1] != tokenizer.eos_token_id
        and len(result["input_ids"]) < max_seq_len
        and add_eos_token
    ):
        result["input_ids"].append(tokenizer.eos_token_id)
        result["attention_mask"].append(1)
    if add_eos_token and len(result["input_ids"]) >= max_seq_len:
        result["input_ids"][max_seq_len - 1] = tokenizer.eos_token_id
        result["attention_mask"][max_seq_len - 1] = 1
    result["labels"] = result["input_ids"].copy()
    return result


def main():
    parser = HfArgumentParser((ModelArguments, DataArguments, TrainingArguments))
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    config = AutoConfig.from_pretrained(
        model_args.model_name_or_path,
        cache_dir=data_args.cache_path,
        trust_remote_code=True,
    )

    if training_args.is_lora:
        model = AutoModelForCausalLM.from_pretrained(
            model_args.model_name_or_path,
            cache_dir=data_args.cache_path,
            torch_dtype=torch.float16,
            trust_remote_code=True,
            load_in_8bit=True,
            quantization_config=BitsAndBytesConfig(
                load_in_8bit=True,
                llm_int8_threshold=6.0,
            ),
        )
        model = prepare_model_for_int8_training(model)
        config = LoraConfig(
            r=LORA_R,
            lora_alpha=LORA_ALPHA,
            target_modules=LORA_TARGET_MODULES,
            lora_dropout=LORA_DROPOUT,
            bias="none",
            task_type="CAUSAL_LM",
        )
        model = get_peft_model(model, config)
    else:
        model = AutoModelForCausalLM.from_pretrained(
            model_args.model_name_or_path,
            torch_dtype=torch.float16,
            cache_dir=data_args.cache_path,
            trust_remote_code=True,
        )
    model.config.use_cache = False

    def print_trainable_parameters(model):
        """
        Prints the number of trainable parameters in the model.
        """
        trainable_params = 0
        all_param = 0
        for _, param in model.named_parameters():
            all_param += param.numel()
            if param.requires_grad:
                trainable_params += param.numel()
        print(
            f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
        )

    print_trainable_parameters(model)

    tokenizer = AutoTokenizer.from_pretrained(
        model_args.model_name_or_path,
        cache_dir=data_args.cache_path,
        model_max_length=training_args.model_max_length,
        padding_side="left",
        use_fast=True,
        trust_remote_code=True,
    )
    tokenizer.pad_token = tokenizer.eos_token

    # Load dataset
    def generate_and_tokenize_prompt(sample):
        input_text = sample["input"]
        target_text = sample["output"] + tokenizer.eos_token
        full_text = input_text + target_text
        tokenized_full_text = tokenize(full_text, tokenizer, max_seq_len=512)
        tokenized_input_text = tokenize(input_text, tokenizer, max_seq_len=512)
        input_len = len(tokenized_input_text["input_ids"]) - 1  # -1 for eos token
        tokenized_full_text["labels"] = [-100] * input_len + tokenized_full_text["labels"][input_len:]
        return tokenized_full_text

    data_files = {}
    if data_args.train_file is not None:
        data_files["train"] = data_args.train_file
    if data_args.eval_file is not None:
        data_files["eval"] = data_args.eval_file
    dataset = load_dataset(data_args.data_path, data_files=data_files)
    train_dataset = dataset["train"]
    eval_dataset = dataset["eval"]
    train_dataset = train_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc)
    eval_dataset = eval_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc)
    data_collator = DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True)

    # Evaluation metrics
    def compute_metrics(eval_preds, tokenizer):
        metric = evaluate.load('exact_match')
        preds, labels = eval_preds
        # In case the model returns more than the prediction logits
        if isinstance(preds, tuple):
            preds = preds[0]
        decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True, clean_up_tokenization_spaces=False)
        # Replace -100s in the labels as we can't decode them
        labels[labels == -100] = tokenizer.pad_token_id
        decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True, clean_up_tokenization_spaces=False)
        # Some simple post-processing
        decoded_preds = [pred.strip() for pred in decoded_preds]
        decoded_labels = [label.strip() for label in decoded_labels]
        result = metric.compute(predictions=decoded_preds, references=decoded_labels)
        return {'exact_match': result['exact_match']}

    compute_metrics_fn = functools.partial(compute_metrics, tokenizer=tokenizer)

    model = torch.compile(model)

    # Training
    trainer = Trainer(
        model=model,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        args=training_args,
        data_collator=data_collator,
        compute_metrics=compute_metrics_fn,
    )
    trainer.train()
    trainer.save_state()
    trainer.save_model(output_dir=training_args.output_dir)
    tokenizer.save_pretrained(save_directory=training_args.output_dir)


if __name__ == "__main__":
    main()
```

When I didn't use `torch.compile(model)`, my code worked well. But when I added this line to my code, it produced the following error:

```
Traceback (most recent call last):
  File "falcon_sft.py", line 230, in <module>
    main()
  File "falcon_sft.py", line 223, in main
    trainer.train()
  File "python3.10/site-packages/transformers/trainer.py", line 1539, in train
    return inner_training_loop(
  File "python3.10/site-packages/transformers/trainer.py", line 1787, in _inner_training_loop
    for step, inputs in enumerate(epoch_iterator):
  File "python3.10/site-packages/accelerate/data_loader.py", line 384, in __iter__
    current_batch = next(dataloader_iter)
  File "python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
    data = self._next_data()
  File "python3.10/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = self.dataset.__getitems__(possibly_batched_index)
  File "python3.10/site-packages/datasets/arrow_dataset.py", line 2807, in __getitems__
    batch = self.__getitem__(keys)
  File "python3.10/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
    return self._getitem(key)
  File "python3.10/site-packages/datasets/arrow_dataset.py", line 2787, in _getitem
    pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
  File "python3.10/site-packages/datasets/formatting/formatting.py", line 583, in query_table
    _check_valid_index_key(key, size)
  File "python3.10/site-packages/datasets/formatting/formatting.py", line 536, in _check_valid_index_key
    _check_valid_index_key(int(max(key)), size=size)
  File "python3.10/site-packages/datasets/formatting/formatting.py", line 526, in _check_valid_index_key
    raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 88 is out of bounds for size 0
```

So I'm confused about why this error was generated, and how to fix it. Is this error produced by `datasets` or `torch.compile`?

### Expected behavior

I want to use `torch.compile` in my code.

### Environment info

- `datasets` version: 2.14.3
- Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
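A plausible mechanism for the "size 0" error, not confirmed in the thread: `Trainer` prunes dataset columns whose names do not appear in the model's `forward()` signature, and wrapping the model with `torch.compile` hides that signature behind a generic one, so every column gets pruned and the train dataset is left empty. A minimal sketch of the inspection involved (`TinyModel` is a hypothetical stand-in, not the Falcon model from the issue):

```python
import inspect

import torch
import torch.nn as nn


class TinyModel(nn.Module):
    def forward(self, input_ids=None, labels=None):
        return input_ids


model = TinyModel()
# Trainer keeps only dataset columns whose names appear in the model's
# forward() signature; for the plain model that is ['input_ids', 'labels'].
print(list(inspect.signature(model.forward).parameters))

compiled = torch.compile(model)  # requires PyTorch >= 2.0
# The compiled wrapper's forward() exposes a generic (*args, **kwargs)
# signature, so the same inspection matches no dataset column and the
# Trainer can drop every column, leaving a dataset of size 0.
print(list(inspect.signature(compiled.forward).parameters))
```

If this is indeed the cause, passing `remove_unused_columns=False` in `TrainingArguments` should keep the columns intact; that workaround is an assumption and was not tested in the thread.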
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6128/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6127
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6127/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6127/comments
https://api.github.com/repos/huggingface/datasets/issues/6127/events
https://github.com/huggingface/datasets/pull/6127
1,839,746,721
PR_kwDODunzps5XWdP5
6,127
Fix authentication issues
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006103 / 0.011353 (-0.005250) | 0.003588 / 0.011008 (-0.007420) | 0.080335 / 0.038508 (0.041827) | 0.059634 / 0.023109 (0.036525) | 0.356093 / 0.275898 (0.080195) | 0.407376 / 0.323480 (0.083896) | 0.005343 / 0.007986 (-0.002643) | 0.002928 / 0.004328 (-0.001400) | 0.062580 / 0.004250 (0.058330) | 0.047544 / 0.037052 (0.010491) | 0.364305 / 0.258489 (0.105816) | 0.421463 / 0.293841 (0.127623) | 0.027249 / 0.128546 (-0.101298) | 0.008010 / 0.075646 (-0.067636) | 0.262543 / 0.419271 (-0.156728) | 0.044978 / 0.043533 (0.001445) | 0.339344 / 0.255139 (0.084205) | 0.395288 / 0.283200 (0.112088) | 0.021425 / 0.141683 (-0.120258) | 1.439767 / 1.452155 (-0.012387) | 1.498081 / 1.492716 (0.005365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196976 / 0.018006 (0.178970) | 0.435383 / 0.000490 (0.434893) | 0.004559 / 0.000200 (0.004359) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023653 / 0.037411 (-0.013759) | 0.072944 / 0.014526 (0.058418) | 0.083651 / 0.176557 (-0.092906) | 0.144590 / 0.737135 (-0.592545) | 0.084844 / 0.296338 (-0.211494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398752 / 0.215209 (0.183543) | 3.959539 / 2.077655 (1.881884) | 
1.935277 / 1.504120 (0.431157) | 1.751994 / 1.541195 (0.210799) | 1.828386 / 1.468490 (0.359896) | 0.500492 / 4.584777 (-4.084284) | 3.086630 / 3.745712 (-0.659082) | 2.851664 / 5.269862 (-2.418198) | 1.869792 / 4.565676 (-2.695885) | 0.058509 / 0.424275 (-0.365766) | 0.006500 / 0.007607 (-0.001107) | 0.467468 / 0.226044 (0.241424) | 4.686168 / 2.268929 (2.417240) | 2.427632 / 55.444624 (-53.016993) | 2.193194 / 6.876477 (-4.683283) | 2.408574 / 2.142072 (0.266501) | 0.592173 / 4.805227 (-4.213054) | 0.125381 / 6.500664 (-6.375283) | 0.060679 / 0.075469 (-0.014790) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.236066 / 1.841788 (-0.605722) | 18.591689 / 8.074308 (10.517381) | 14.138774 / 10.191392 (3.947382) | 0.147455 / 0.680424 (-0.532968) | 0.016921 / 0.534201 (-0.517280) | 0.328129 / 0.579283 (-0.251154) | 0.348872 / 0.434364 (-0.085491) | 0.380311 / 0.540337 (-0.160026) | 0.532901 / 1.386936 (-0.854035) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005914 / 0.011353 (-0.005438) | 0.003614 / 0.011008 (-0.007394) | 0.062857 / 0.038508 (0.024349) | 0.060633 / 0.023109 (0.037524) | 0.419684 / 0.275898 (0.143786) | 0.449025 / 0.323480 (0.125546) | 0.004595 / 0.007986 (-0.003391) | 0.002861 / 0.004328 (-0.001467) | 0.063253 / 0.004250 (0.059003) | 0.048770 / 0.037052 (0.011718) | 0.419838 / 0.258489 (0.161349) | 0.465183 / 0.293841 (0.171342) | 0.027350 / 0.128546 (-0.101196) | 0.008065 / 0.075646 (-0.067582) | 0.068321 / 0.419271 (-0.350950) | 0.041083 / 0.043533 (-0.002449) | 0.400831 / 0.255139 (0.145692) | 0.449286 / 0.283200 (0.166086) | 0.020472 / 0.141683 (-0.121210) | 1.437215 / 1.452155 (-0.014940) | 1.503679 / 1.492716 (0.010963) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230764 / 0.018006 (0.212758) | 0.420774 / 0.000490 (0.420285) | 0.004012 / 0.000200 (0.003812) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026009 / 0.037411 (-0.011402) | 0.077943 / 0.014526 (0.063417) | 0.087281 / 0.176557 (-0.089276) | 0.139422 / 0.737135 (-0.597713) | 0.089090 / 0.296338 (-0.207248) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417298 / 0.215209 (0.202088) | 4.152303 / 2.077655 (2.074648) | 2.179996 / 1.504120 (0.675877) | 2.020619 / 1.541195 (0.479424) | 2.085241 / 1.468490 (0.616751) | 0.501111 / 4.584777 (-4.083666) | 3.079849 / 3.745712 (-0.665863) | 2.820607 / 5.269862 (-2.449255) | 1.863988 / 4.565676 (-2.701688) | 0.057662 / 0.424275 (-0.366613) | 0.006778 / 0.007607 (-0.000830) | 0.498661 / 0.226044 (0.272616) | 4.986503 / 2.268929 (2.717574) | 2.620676 / 55.444624 (-52.823949) | 2.297546 / 6.876477 (-4.578931) | 2.458148 / 2.142072 (0.316075) | 0.599490 / 4.805227 (-4.205738) | 0.125102 / 6.500664 (-6.375562) | 0.061411 / 0.075469 (-0.014059) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.323816 / 1.841788 (-0.517971) | 18.462614 / 8.074308 (10.388306) | 13.845826 / 10.191392 (3.654434) | 0.146115 / 0.680424 (-0.534309) | 0.016862 / 0.534201 (-0.517339) | 0.335449 / 0.579283 (-0.243834) | 0.343792 / 0.434364 (-0.090572) | 0.394068 / 0.540337 (-0.146269) | 0.536378 / 1.386936 (-0.850558) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#de3f00368c9236e9410821f5fddb95d6069883c1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006825 / 0.011353 (-0.004527) | 0.004005 / 0.011008 (-0.007003) | 0.085504 / 0.038508 (0.046996) | 0.077252 / 0.023109 (0.054143) | 0.351891 / 0.275898 (0.075993) | 0.383404 / 0.323480 (0.059924) | 0.004153 / 0.007986 (-0.003833) | 0.003344 / 0.004328 (-0.000985) | 0.064936 / 0.004250 (0.060685) | 0.057653 / 0.037052 (0.020601) | 0.368155 / 0.258489 (0.109666) | 0.406122 / 0.293841 (0.112282) | 0.032049 / 0.128546 (-0.096497) | 0.008698 / 0.075646 (-0.066949) | 0.292394 / 0.419271 (-0.126878) | 0.053634 / 0.043533 (0.010101) | 0.358273 / 0.255139 (0.103134) | 0.378441 / 0.283200 (0.095242) | 0.026928 / 0.141683 (-0.114755) | 1.458718 / 1.452155 (0.006563) | 1.536231 / 1.492716 (0.043515) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213956 / 0.018006 (0.195950) | 0.458620 / 0.000490 (0.458130) | 0.002718 / 0.000200 (0.002519) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027870 / 0.037411 (-0.009541) | 0.083922 / 0.014526 (0.069396) | 0.152056 / 0.176557 (-0.024501) | 0.151584 / 0.737135 (-0.585552) | 0.095698 / 0.296338 (-0.200641) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407762 / 0.215209 (0.192553) | 4.074324 / 2.077655 (1.996669) | 2.089929 / 1.504120 (0.585809) | 1.920024 / 1.541195 (0.378829) | 2.013410 / 1.468490 (0.544920) | 0.486056 / 4.584777 (-4.098721) | 3.656869 / 3.745712 (-0.088843) | 3.304008 / 5.269862 (-1.965854) | 2.074363 / 4.565676 (-2.491313) | 0.057293 / 0.424275 (-0.366982) | 0.007240 / 0.007607 (-0.000367) | 0.482696 / 0.226044 (0.256652) | 4.833251 / 2.268929 (2.564322) | 2.570391 / 55.444624 (-52.874233) | 2.220619 / 6.876477 (-4.655857) | 2.426316 / 2.142072 (0.284243) | 0.584811 / 4.805227 (-4.220416) | 0.134907 / 6.500664 (-6.365757) | 0.061115 / 0.075469 (-0.014354) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251969 / 1.841788 (-0.589818) | 19.601611 / 8.074308 (11.527303) | 14.190217 / 10.191392 (3.998825) | 0.166296 / 0.680424 (-0.514128) | 0.018334 / 0.534201 (-0.515867) | 0.395172 / 0.579283 (-0.184111) | 0.410440 / 0.434364 (-0.023924) | 0.462263 / 0.540337 
(-0.078074) | 0.645504 / 1.386936 (-0.741432) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006991 / 0.011353 (-0.004362) | 0.004084 / 0.011008 (-0.006924) | 0.065208 / 0.038508 (0.026700) | 0.077809 / 0.023109 (0.054699) | 0.386472 / 0.275898 (0.110574) | 0.418686 / 0.323480 (0.095206) | 0.005346 / 0.007986 (-0.002640) | 0.003416 / 0.004328 (-0.000912) | 0.066209 / 0.004250 (0.061958) | 0.057517 / 0.037052 (0.020465) | 0.407684 / 0.258489 (0.149195) | 0.425438 / 0.293841 (0.131597) | 0.032166 / 0.128546 (-0.096380) | 0.008662 / 0.075646 (-0.066985) | 0.071712 / 0.419271 (-0.347560) | 0.049764 / 0.043533 (0.006231) | 0.394882 / 0.255139 (0.139743) | 0.403589 / 0.283200 (0.120389) | 0.023688 / 0.141683 (-0.117995) | 1.468488 / 1.452155 (0.016334) | 1.533118 / 1.492716 (0.040401) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252949 / 0.018006 (0.234943) | 0.447355 / 0.000490 (0.446865) | 0.011721 / 0.000200 (0.011521) | 0.000107 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031444 / 0.037411 (-0.005968) | 0.089390 / 0.014526 (0.074864) | 0.100103 / 0.176557 (-0.076454) | 0.153301 / 0.737135 (-0.583835) | 0.101336 / 0.296338 (-0.195003) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408574 / 0.215209 (0.193365) | 4.073135 / 2.077655 (1.995480) | 2.086550 / 1.504120 (0.582430) | 1.930651 / 1.541195 (0.389457) | 2.013548 
/ 1.468490 (0.545058) | 0.477235 / 4.584777 (-4.107542) | 3.547545 / 3.745712 (-0.198167) | 3.321957 / 5.269862 (-1.947905) | 2.057705 / 4.565676 (-2.507971) | 0.056730 / 0.424275 (-0.367545) | 0.007882 / 0.007607 (0.000275) | 0.487297 / 0.226044 (0.261253) | 4.874184 / 2.268929 (2.605255) | 2.631129 / 55.444624 (-52.813496) | 2.235755 / 6.876477 (-4.640722) | 2.463329 / 2.142072 (0.321257) | 0.578308 / 4.805227 (-4.226919) | 0.132726 / 6.500664 (-6.367938) | 0.064883 / 0.075469 (-0.010586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.347564 / 1.841788 (-0.494223) | 20.192973 / 8.074308 (12.118665) | 14.563553 / 10.191392 (4.372161) | 0.168244 / 0.680424 (-0.512180) | 0.018638 / 0.534201 (-0.515563) | 0.394789 / 0.579283 (-0.184494) | 0.419677 / 0.434364 (-0.014687) | 0.480274 / 0.540337 (-0.060063) | 0.641204 / 1.386936 (-0.745732) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9c7a0d56b60bf700d6a491fa30eaf66500969315 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005939 / 0.011353 (-0.005413) | 0.003457 / 0.011008 (-0.007551) | 0.079985 / 0.038508 (0.041477) | 0.056492 / 0.023109 (0.033383) | 0.312356 / 0.275898 (0.036458) | 0.354038 / 0.323480 (0.030558) | 0.004551 / 0.007986 (-0.003435) | 0.002828 / 0.004328 (-0.001501) | 0.062369 / 0.004250 (0.058119) | 0.044712 / 0.037052 (0.007660) | 0.318244 / 0.258489 (0.059755) | 0.361977 / 0.293841 (0.068136) | 0.026460 / 0.128546 (-0.102086) | 0.007928 / 0.075646 (-0.067719) | 0.261378 / 0.419271 (-0.157894) | 0.044209 / 0.043533 (0.000676) | 0.313931 / 0.255139 (0.058792) | 0.339553 / 0.283200 (0.056354) | 0.019776 / 0.141683 (-0.121907) | 1.443126 / 1.452155 (-0.009029) | 1.508149 / 1.492716 (0.015432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183801 / 0.018006 (0.165795) | 0.427967 / 0.000490 (0.427477) | 0.002028 / 
0.000200 (0.001828) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023697 / 0.037411 (-0.013715) | 0.072128 / 0.014526 (0.057602) | 0.083701 / 0.176557 (-0.092855) | 0.142821 / 0.737135 (-0.594315) | 0.082276 / 0.296338 (-0.214063) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434427 / 0.215209 (0.219218) | 4.325962 / 2.077655 (2.248308) | 2.277115 / 1.504120 (0.772995) | 2.093736 / 1.541195 (0.552541) | 2.127984 / 1.468490 (0.659494) | 0.502336 / 4.584777 (-4.082441) | 3.023243 / 3.745712 (-0.722469) | 2.805154 / 5.269862 (-2.464708) | 1.821273 / 4.565676 (-2.744403) | 0.057480 / 0.424275 (-0.366795) | 0.006365 / 0.007607 (-0.001242) | 0.508258 / 0.226044 (0.282213) | 5.087950 / 2.268929 (2.819022) | 2.705029 / 55.444624 (-52.739596) | 2.378392 / 6.876477 (-4.498085) | 2.515380 / 2.142072 (0.373307) | 0.589283 / 4.805227 (-4.215944) | 0.125719 / 6.500664 (-6.374945) | 0.061074 / 0.075469 (-0.014395) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221895 / 1.841788 (-0.619893) | 18.025917 / 8.074308 (9.951609) | 13.556901 / 10.191392 (3.365509) | 0.142614 / 0.680424 (-0.537809) | 0.016731 / 0.534201 (-0.517469) | 0.328374 / 0.579283 (-0.250910) | 0.342553 / 0.434364 (-0.091811) | 0.374502 / 0.540337 (-0.165836) | 0.534173 / 1.386936 (-0.852763) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005817 / 0.011353 (-0.005536) | 0.003500 / 0.011008 (-0.007509) | 0.062240 / 0.038508 (0.023732) | 0.058128 / 0.023109 (0.035019) | 0.424014 / 0.275898 (0.148116) | 0.468453 / 0.323480 (0.144973) | 0.004641 / 0.007986 (-0.003345) | 0.002821 / 0.004328 (-0.001508) | 0.062180 / 0.004250 (0.057930) | 0.047578 / 0.037052 (0.010526) | 0.427367 / 0.258489 (0.168878) | 0.467889 / 0.293841 (0.174048) | 0.027144 / 0.128546 (-0.101403) | 0.007969 / 0.075646 (-0.067678) | 0.067764 / 0.419271 (-0.351508) | 0.040719 / 0.043533 (-0.002814) | 0.423663 / 0.255139 (0.168524) | 0.458556 / 0.283200 (0.175356) | 0.019196 / 0.141683 (-0.122487) | 1.471546 / 1.452155 (0.019392) | 1.547541 / 1.492716 (0.054825) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228777 / 0.018006 (0.210770) | 0.406663 / 0.000490 (0.406173) | 0.003688 / 0.000200 (0.003488) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025494 / 0.037411 (-0.011917) | 0.076339 / 0.014526 (0.061814) | 0.084233 / 0.176557 (-0.092324) | 0.136995 / 0.737135 (-0.600140) | 0.085443 / 0.296338 (-0.210895) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420441 / 0.215209 (0.205232) | 4.187018 / 2.077655 (2.109363) | 2.142139 / 1.504120 (0.638019) | 1.974530 / 1.541195 (0.433335) | 2.027321 / 1.468490 (0.558831) | 0.498116 / 4.584777 (-4.086661) | 2.988514 / 3.745712 (-0.757198) | 2.782046 / 5.269862 (-2.487816) | 1.821725 / 4.565676 (-2.743951) | 0.057711 / 0.424275 (-0.366564) | 0.006664 / 0.007607 (-0.000944) | 0.491015 / 0.226044 (0.264971) | 4.921037 / 2.268929 (2.652108) | 2.574964 / 55.444624 (-52.869661) | 2.251703 / 6.876477 (-4.624774) | 2.361154 / 2.142072 (0.219082) | 0.593362 / 4.805227 (-4.211865) | 0.126107 / 6.500664 (-6.374557) | 0.061840 / 0.075469 (-0.013630) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.327459 / 1.841788 (-0.514328) | 18.062960 / 8.074308 (9.988652) | 13.669253 / 10.191392 (3.477861) | 0.130719 / 0.680424 (-0.549705) | 0.016564 / 0.534201 (-0.517637) | 0.335821 / 0.579283 (-0.243462) | 0.341691 / 0.434364 (-0.092673) | 0.392651 / 0.540337 (-0.147686) | 0.529650 / 1.386936 (-0.857286) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c65806b0542996e56825ab46a3ce8f9c07ab0df3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009625 / 0.011353 (-0.001728) | 0.005354 / 0.011008 (-0.005654) | 0.114350 / 0.038508 (0.075842) | 0.086637 / 0.023109 (0.063528) | 0.465381 / 0.275898 (0.189483) | 0.490411 / 0.323480 (0.166931) | 0.006575 / 0.007986 (-0.001411) | 0.004287 / 0.004328 (-0.000041) | 0.093134 / 0.004250 (0.088884) | 0.060209 / 0.037052 (0.023156) | 0.459570 / 0.258489 (0.201080) | 0.523320 / 0.293841 (0.229479) | 0.047943 / 0.128546 (-0.080603) | 0.014764 / 0.075646 (-0.060882) | 0.383887 / 0.419271 (-0.035384) | 0.069864 / 0.043533 (0.026331) | 0.469122 / 0.255139 (0.213983) | 0.509953 / 0.283200 (0.226753) | 0.037800 / 0.141683 (-0.103883) | 1.877589 / 1.452155 (0.425434) | 2.014913 / 1.492716 (0.522197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.309146 / 0.018006 (0.291140) | 0.644390 / 0.000490 (0.643900) | 0.005017 / 0.000200 (0.004817) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032964 / 0.037411 (-0.004447) | 0.103236 / 0.014526 (0.088711) | 0.119950 / 0.176557 (-0.056607) | 0.207674 / 0.737135 (-0.529461) | 0.117278 / 0.296338 (-0.179060) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.605464 / 0.215209 
(0.390255) | 6.027805 / 2.077655 (3.950150) | 2.719725 / 1.504120 (1.215605) | 2.262752 / 1.541195 (0.721558) | 2.330310 / 1.468490 (0.861820) | 0.862537 / 4.584777 (-3.722240) | 5.347080 / 3.745712 (1.601368) | 4.792170 / 5.269862 (-0.477691) | 3.103694 / 4.565676 (-1.461983) | 0.103646 / 0.424275 (-0.320629) | 0.009411 / 0.007607 (0.001804) | 0.743052 / 0.226044 (0.517008) | 7.289684 / 2.268929 (5.020755) | 3.436530 / 55.444624 (-52.008094) | 2.722440 / 6.876477 (-4.154036) | 2.952380 / 2.142072 (0.810308) | 1.047688 / 4.805227 (-3.757539) | 0.212724 / 6.500664 (-6.287940) | 0.081473 / 0.075469 (0.006004) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.714437 / 1.841788 (-0.127351) | 24.384330 / 8.074308 (16.310022) | 22.444162 / 10.191392 (12.252770) | 0.226264 / 0.680424 (-0.454160) | 0.030530 / 0.534201 (-0.503671) | 0.473999 / 0.579283 (-0.105284) | 0.575005 / 0.434364 (0.140641) | 0.542789 / 0.540337 (0.002451) | 0.776079 / 1.386936 (-0.610857) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009366 / 0.011353 (-0.001987) | 0.005239 / 0.011008 (-0.005769) | 0.085116 / 0.038508 (0.046608) | 0.089600 / 0.023109 (0.066491) | 0.485778 / 0.275898 (0.209880) | 0.540054 / 0.323480 (0.216574) | 0.006290 / 0.007986 (-0.001695) | 0.004054 / 0.004328 (-0.000274) | 0.083535 / 0.004250 (0.079284) | 0.067200 / 0.037052 (0.030148) | 0.519520 / 0.258489 (0.261031) | 0.544049 / 0.293841 (0.250208) | 0.054300 / 0.128546 (-0.074246) | 0.013650 / 0.075646 (-0.061996) | 0.102515 / 0.419271 (-0.316757) | 0.063054 / 0.043533 (0.019522) | 0.491724 / 0.255139 (0.236585) | 0.547498 / 0.283200 (0.264298) | 0.039266 / 0.141683 (-0.102416) | 1.801226 / 1.452155 (0.349071) | 1.861778 / 1.492716 (0.369061) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313009 / 0.018006 (0.295003) | 0.587695 / 0.000490 (0.587205) | 0.004972 / 0.000200 (0.004772) | 0.000110 / 0.000054 
(0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029230 / 0.037411 (-0.008181) | 0.091154 / 0.014526 (0.076628) | 0.110505 / 0.176557 (-0.066052) | 0.164204 / 0.737135 (-0.572932) | 0.107812 / 0.296338 (-0.188526) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.610535 / 0.215209 (0.395326) | 6.162517 / 2.077655 (4.084862) | 2.866718 / 1.504120 (1.362598) | 2.542412 / 1.541195 (1.001218) | 2.584136 / 1.468490 (1.115645) | 0.874319 / 4.584777 (-3.710458) | 5.257184 / 3.745712 (1.511472) | 4.705840 / 5.269862 (-0.564022) | 2.971708 / 4.565676 (-1.593969) | 0.099026 / 0.424275 (-0.325249) | 0.009142 / 0.007607 (0.001535) | 0.728660 / 0.226044 (0.502615) | 7.560922 / 2.268929 (5.291994) | 3.439521 / 55.444624 (-52.005103) | 2.854730 / 6.876477 (-4.021746) | 3.088951 / 2.142072 (0.946879) | 0.973621 / 4.805227 (-3.831606) | 0.209792 / 6.500664 (-6.290872) | 0.081107 / 0.075469 (0.005638) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.716809 / 1.841788 (-0.124978) | 24.386927 / 8.074308 (16.312619) | 20.715524 / 10.191392 (10.524131) | 0.260831 / 0.680424 (-0.419592) | 0.030701 / 0.534201 (-0.503500) | 0.490018 / 0.579283 (-0.089265) | 0.590424 / 0.434364 (0.156060) | 0.589942 / 0.540337 (0.049604) | 0.798094 / 1.386936 (-0.588842) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c0a77dc943de68a17f23f141517028c734c78623 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006592 / 0.011353 (-0.004761) | 0.003880 / 0.011008 (-0.007128) | 0.083761 / 0.038508 (0.045253) | 0.075966 / 0.023109 (0.052857) | 0.315291 / 0.275898 (0.039393) | 0.355920 / 0.323480 (0.032440) | 0.004972 / 0.007986 (-0.003014) | 0.003053 / 0.004328 (-0.001275) | 0.063553 / 0.004250 (0.059302) | 0.050794 / 0.037052 (0.013742) | 0.317681 / 0.258489 (0.059192) | 0.361991 / 0.293841 (0.068150) | 0.028119 / 0.128546 (-0.100427) | 0.008203 / 0.075646 (-0.067443) | 0.271756 / 0.419271 (-0.147516) | 0.046701 / 0.043533 (0.003168) | 0.316520 / 0.255139 (0.061381) | 0.350499 / 0.283200 (0.067300) | 0.022399 / 0.141683 (-0.119284) | 1.416017 / 1.452155 (-0.036138) | 1.503087 / 1.492716 (0.010371) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208250 / 0.018006 (0.190244) | 0.470345 / 0.000490 (0.469856) | 0.003687 / 0.000200 (0.003487) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026163 / 0.037411 (-0.011248) | 0.083315 / 0.014526 (0.068789) | 0.088541 / 0.176557 (-0.088015) | 0.150078 / 0.737135 (-0.587057) | 0.088862 / 0.296338 (-0.207476) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404911 / 0.215209 (0.189702) | 4.059257 / 2.077655 (1.981602) | 1.890987 / 1.504120 (0.386867) | 1.726608 / 1.541195 (0.185413) | 1.767479 / 1.468490 (0.298989) | 0.518826 / 4.584777 (-4.065951) | 3.212145 / 3.745712 (-0.533567) | 3.029933 / 5.269862 (-2.239929) | 2.000203 / 4.565676 (-2.565474) | 0.059631 / 0.424275 (-0.364644) | 0.006707 / 0.007607 (-0.000900) | 0.485741 / 0.226044 (0.259697) | 4.871938 / 2.268929 (2.603010) | 2.418856 / 55.444624 (-53.025769) | 2.084847 / 6.876477 (-4.791630) | 2.207992 / 2.142072 (0.065920) | 0.614354 / 4.805227 (-4.190873) | 0.128932 / 6.500664 (-6.371732) | 0.062342 / 0.075469 (-0.013127) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.325792 / 1.841788 (-0.515995) | 19.718995 / 8.074308 (11.644687) | 15.278535 / 10.191392 (5.087143) | 0.146719 / 0.680424 (-0.533705) | 0.017718 / 0.534201 (-0.516483) | 0.335709 / 0.579283 (-0.243574) | 0.378060 / 0.434364 (-0.056304) | 
0.391135 / 0.540337 (-0.149202) | 0.548045 / 1.386936 (-0.838891) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006504 / 0.011353 (-0.004849) | 0.003742 / 0.011008 (-0.007266) | 0.064405 / 0.038508 (0.025897) | 0.077618 / 0.023109 (0.054509) | 0.365325 / 0.275898 (0.089427) | 0.408109 / 0.323480 (0.084629) | 0.004909 / 0.007986 (-0.003076) | 0.002972 / 0.004328 (-0.001356) | 0.063933 / 0.004250 (0.059682) | 0.052916 / 0.037052 (0.015863) | 0.370891 / 0.258489 (0.112402) | 0.412134 / 0.293841 (0.118293) | 0.028171 / 0.128546 (-0.100375) | 0.008150 / 0.075646 (-0.067497) | 0.069248 / 0.419271 (-0.350024) | 0.042353 / 0.043533 (-0.001180) | 0.368117 / 0.255139 (0.112978) | 0.397548 / 0.283200 (0.114348) | 0.022967 / 0.141683 (-0.118716) | 1.472740 / 1.452155 (0.020586) | 1.524028 / 1.492716 (0.031311) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256854 / 0.018006 (0.238848) | 0.471499 / 0.000490 (0.471009) | 0.009609 / 0.000200 (0.009409) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027978 / 0.037411 (-0.009433) | 0.086741 / 0.014526 (0.072215) | 0.091189 / 0.176557 (-0.085368) | 0.146117 / 0.737135 (-0.591018) | 0.092358 / 0.296338 (-0.203980) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426356 / 0.215209 (0.211147) | 4.263782 / 2.077655 (2.186127) | 2.178198 / 1.504120 (0.674078) | 2.015405 / 1.541195 
(0.474211) | 2.055966 / 1.468490 (0.587476) | 0.507531 / 4.584777 (-4.077246) | 3.175967 / 3.745712 (-0.569745) | 3.055697 / 5.269862 (-2.214165) | 1.987663 / 4.565676 (-2.578014) | 0.058452 / 0.424275 (-0.365823) | 0.006944 / 0.007607 (-0.000663) | 0.502534 / 0.226044 (0.276489) | 5.024693 / 2.268929 (2.755765) | 2.754971 / 55.444624 (-52.689653) | 2.470845 / 6.876477 (-4.405632) | 2.698675 / 2.142072 (0.556602) | 0.602357 / 4.805227 (-4.202871) | 0.129490 / 6.500664 (-6.371174) | 0.065127 / 0.075469 (-0.010342) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.398487 / 1.841788 (-0.443301) | 19.692279 / 8.074308 (11.617971) | 15.124064 / 10.191392 (4.932672) | 0.148938 / 0.680424 (-0.531486) | 0.017418 / 0.534201 (-0.516783) | 0.340480 / 0.579283 (-0.238803) | 0.377223 / 0.434364 (-0.057141) | 0.405303 / 0.540337 (-0.135034) | 0.548923 / 1.386936 (-0.838013) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#58e62af004b6b8b84dcfd897a4bc71637cfa6c3f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006433 / 0.011353 (-0.004920) | 0.004002 / 0.011008 (-0.007006) | 0.084130 / 0.038508 (0.045622) | 0.070628 / 0.023109 (0.047519) | 0.312372 / 0.275898 (0.036474) | 0.343993 / 0.323480 (0.020513) | 0.003936 / 0.007986 (-0.004050) | 0.003336 / 0.004328 (-0.000993) | 0.064715 / 0.004250 (0.060465) | 0.052511 / 0.037052 (0.015458) | 0.314092 / 0.258489 (0.055603) | 0.363152 / 0.293841 (0.069311) | 0.030898 / 0.128546 (-0.097648) | 0.008396 / 0.075646 (-0.067250) | 0.288083 / 0.419271 (-0.131188) | 0.051654 / 0.043533 (0.008122) | 0.315252 / 0.255139 (0.060113) | 0.346756 / 0.283200 (0.063556) | 0.025167 / 0.141683 (-0.116515) | 1.487265 / 1.452155 (0.035110) | 1.557528 / 1.492716 (0.064812) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206517 / 0.018006 (0.188510) | 0.458359 / 0.000490 
(0.457869) | 0.003719 / 0.000200 (0.003519) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029631 / 0.037411 (-0.007780) | 0.083856 / 0.014526 (0.069330) | 0.340431 / 0.176557 (0.163875) | 0.153864 / 0.737135 (-0.583271) | 0.095951 / 0.296338 (-0.200388) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.379182 / 0.215209 (0.163973) | 3.783396 / 2.077655 (1.705741) | 1.835932 / 1.504120 (0.331813) | 1.667563 / 1.541195 (0.126369) | 1.739309 / 1.468490 (0.270818) | 0.478957 / 4.584777 (-4.105820) | 3.521974 / 3.745712 (-0.223738) | 3.237635 / 5.269862 (-2.032227) | 2.000300 / 4.565676 (-2.565377) | 0.056389 / 0.424275 (-0.367887) | 0.007242 / 0.007607 (-0.000365) | 0.452642 / 0.226044 (0.226598) | 4.524339 / 2.268929 (2.255411) | 2.346210 / 55.444624 (-53.098414) | 1.957196 / 6.876477 (-4.919281) | 2.180051 / 2.142072 (0.037979) | 0.570205 / 4.805227 (-4.235022) | 0.131346 / 6.500664 (-6.369318) | 0.059327 / 0.075469 (-0.016142) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244709 / 1.841788 (-0.597079) | 19.566277 / 8.074308 (11.491969) | 14.172598 / 10.191392 (3.981206) | 0.166493 / 0.680424 (-0.513931) | 0.018281 / 0.534201 (-0.515920) | 0.391608 / 0.579283 (-0.187675) | 0.402642 / 0.434364 (-0.031722) | 0.464974 / 0.540337 (-0.075364) | 0.637565 / 1.386936 (-0.749371) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006929 / 0.011353 (-0.004424) | 0.004114 / 0.011008 (-0.006894) | 0.064589 / 0.038508 (0.026081) | 0.083334 / 0.023109 (0.060225) | 0.391280 / 0.275898 (0.115382) | 0.426157 / 0.323480 (0.102678) | 0.005336 / 0.007986 (-0.002650) | 0.003395 / 0.004328 (-0.000934) | 0.064560 / 0.004250 (0.060310) | 0.057094 / 0.037052 (0.020042) | 0.398959 / 0.258489 (0.140470) | 0.432470 / 0.293841 (0.138629) | 0.031412 / 0.128546 (-0.097134) | 0.008670 / 0.075646 (-0.066976) | 0.071249 / 0.419271 (-0.348022) | 0.048934 / 0.043533 (0.005401) | 0.384207 / 0.255139 (0.129068) | 0.407992 / 0.283200 (0.124792) | 0.024492 / 0.141683 (-0.117191) | 1.467788 / 1.452155 (0.015634) | 1.541011 / 1.492716 (0.048295) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.279607 / 0.018006 (0.261600) | 0.448899 / 0.000490 (0.448410) | 0.020990 / 0.000200 (0.020790) | 0.000132 / 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030313 / 0.037411 (-0.007099) | 0.089209 / 0.014526 (0.074684) | 0.101024 / 0.176557 (-0.075532) | 0.153468 / 0.737135 (-0.583667) | 0.103219 / 0.296338 (-0.193120) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429176 / 0.215209 (0.213967) | 4.302234 / 2.077655 (2.224580) | 2.291103 / 1.504120 (0.786983) | 2.126257 / 1.541195 (0.585062) | 2.207090 / 1.468490 (0.738600) | 0.484643 / 4.584777 (-4.100134) | 3.557429 / 3.745712 (-0.188283) | 3.253804 / 5.269862 (-2.016058) | 2.026087 / 4.565676 (-2.539589) | 0.057793 / 0.424275 (-0.366482) | 0.007761 / 0.007607 (0.000154) | 0.504819 / 0.226044 (0.278775) | 5.046868 / 2.268929 (2.777940) | 2.773149 / 55.444624 (-52.671475) | 2.398036 / 6.876477 (-4.478440) | 2.608094 / 2.142072 (0.466021) | 0.630499 / 4.805227 (-4.174729) | 0.135496 / 6.500664 (-6.365168) | 0.061329 / 0.075469 (-0.014140) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.327124 / 1.841788 (-0.514664) | 19.889796 / 8.074308 (11.815488) | 14.196100 / 10.191392 (4.004708) | 0.161963 / 0.680424 (-0.518461) | 0.018529 / 0.534201 (-0.515672) | 0.392325 / 0.579283 (-0.186958) | 0.404836 / 0.434364 (-0.029528) | 0.475898 / 0.540337 (-0.064439) | 0.633563 / 1.386936 (-0.753373) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e4684fc1032321abf0d494b0c130ea7c82ebda80 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006390 / 0.011353 (-0.004963) | 0.003683 / 0.011008 (-0.007325) | 0.081274 / 0.038508 (0.042766) | 0.062193 / 0.023109 (0.039083) | 0.355360 / 0.275898 (0.079462) | 0.396471 / 0.323480 (0.072992) | 0.003569 / 0.007986 (-0.004416) | 0.003928 / 0.004328 (-0.000400) | 0.062292 / 0.004250 (0.058041) | 0.049700 / 0.037052 (0.012648) | 0.354604 / 0.258489 (0.096115) | 0.419436 / 0.293841 (0.125595) | 0.027151 / 0.128546 (-0.101395) | 0.007954 / 0.075646 (-0.067692) | 0.262231 / 0.419271 (-0.157041) | 0.045483 / 0.043533 (0.001950) | 0.354285 / 0.255139 (0.099146) | 0.385178 / 0.283200 (0.101978) | 0.021183 / 0.141683 (-0.120500) | 1.420785 / 1.452155 (-0.031370) | 1.531545 / 1.492716 (0.038829) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202298 / 0.018006 (0.184292) | 0.442172 / 0.000490 (0.441683) | 0.003565 / 0.000200 (0.003366) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024229 / 0.037411 (-0.013183) | 0.074352 / 0.014526 (0.059826) | 0.087530 / 0.176557 (-0.089026) | 0.146478 / 0.737135 (-0.590658) | 0.085145 / 0.296338 (-0.211194) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388395 / 0.215209 
(0.173186) | 3.877623 / 2.077655 (1.799968) | 1.882444 / 1.504120 (0.378324) | 1.707871 / 1.541195 (0.166676) | 1.772132 / 1.468490 (0.303642) | 0.491937 / 4.584777 (-4.092840) | 3.057947 / 3.745712 (-0.687765) | 2.822390 / 5.269862 (-2.447471) | 1.879719 / 4.565676 (-2.685957) | 0.056830 / 0.424275 (-0.367445) | 0.006415 / 0.007607 (-0.001192) | 0.458945 / 0.226044 (0.232900) | 4.594502 / 2.268929 (2.325574) | 2.339677 / 55.444624 (-53.104948) | 1.983750 / 6.876477 (-4.892727) | 2.173792 / 2.142072 (0.031719) | 0.580390 / 4.805227 (-4.224838) | 0.124568 / 6.500664 (-6.376096) | 0.061694 / 0.075469 (-0.013775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265108 / 1.841788 (-0.576680) | 18.415254 / 8.074308 (10.340946) | 13.963829 / 10.191392 (3.772437) | 0.148926 / 0.680424 (-0.531498) | 0.016919 / 0.534201 (-0.517282) | 0.331082 / 0.579283 (-0.248201) | 0.345777 / 0.434364 (-0.088587) | 0.381123 / 0.540337 (-0.159214) | 0.543297 / 1.386936 (-0.843639) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006121 / 0.011353 (-0.005232) | 0.003717 / 0.011008 (-0.007291) | 0.063653 / 0.038508 (0.025144) | 0.063723 / 0.023109 (0.040613) | 0.360233 / 0.275898 (0.084335) | 0.398353 / 0.323480 (0.074873) | 0.004696 / 0.007986 (-0.003290) | 0.002876 / 0.004328 (-0.001452) | 0.063057 / 0.004250 (0.058806) | 0.050258 / 0.037052 (0.013206) | 0.362946 / 0.258489 (0.104457) | 0.403260 / 0.293841 (0.109419) | 0.027738 / 0.128546 (-0.100809) | 0.008025 / 0.075646 (-0.067621) | 0.068781 / 0.419271 (-0.350491) | 0.042114 / 0.043533 (-0.001419) | 0.363546 / 0.255139 (0.108407) | 0.385640 / 0.283200 (0.102440) | 0.021757 / 0.141683 (-0.119926) | 1.482364 / 1.452155 (0.030209) | 1.571859 / 1.492716 (0.079143) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235628 / 0.018006 (0.217622) | 0.439909 / 0.000490 (0.439419) | 0.003070 / 0.000200 (0.002870) | 0.000075 / 0.000054 
(0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027045 / 0.037411 (-0.010366) | 0.080413 / 0.014526 (0.065887) | 0.088953 / 0.176557 (-0.087603) | 0.141907 / 0.737135 (-0.595228) | 0.090604 / 0.296338 (-0.205735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423250 / 0.215209 (0.208041) | 4.216510 / 2.077655 (2.138855) | 2.162946 / 1.504120 (0.658826) | 2.014561 / 1.541195 (0.473366) | 2.086347 / 1.468490 (0.617857) | 0.496591 / 4.584777 (-4.088186) | 3.089594 / 3.745712 (-0.656118) | 2.853640 / 5.269862 (-2.416221) | 1.878149 / 4.565676 (-2.687527) | 0.056914 / 0.424275 (-0.367361) | 0.006762 / 0.007607 (-0.000845) | 0.493470 / 0.226044 (0.267426) | 4.929966 / 2.268929 (2.661037) | 2.640885 / 55.444624 (-52.803739) | 2.335950 / 6.876477 (-4.540527) | 2.565866 / 2.142072 (0.423793) | 0.585433 / 4.805227 (-4.219794) | 0.124969 / 6.500664 (-6.375695) | 0.062361 / 0.075469 (-0.013108) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369144 / 1.841788 (-0.472644) | 19.037582 / 8.074308 (10.963274) | 14.069141 / 10.191392 (3.877749) | 0.146469 / 0.680424 (-0.533954) | 0.016911 / 0.534201 (-0.517290) | 0.336802 / 0.579283 (-0.242482) | 0.336411 / 0.434364 (-0.097953) | 0.392360 / 0.540337 (-0.147977) | 0.536078 / 1.386936 (-0.850858) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#12cfc1196e62847e2e8239fbd727a02cbc86ddec \"CML watermark\")\n" ]
2023-08-07T15:41:25
2023-08-08T15:24:59
2023-08-08T15:16:22
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6127", "html_url": "https://github.com/huggingface/datasets/pull/6127", "diff_url": "https://github.com/huggingface/datasets/pull/6127.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6127.patch", "merged_at": "2023-08-08T15:16:22" }
This PR fixes 3 authentication issues:
- Fix authentication when passing `token`.
- Fix authentication in `Audio.decode_example` and `Image.decode_example`.
- Fix authentication to resolve `data_files` in repositories without script.

This PR also fixes our CI so that we properly test passing `token` and do not rely on the token stored in `HfFolder`.

Fix #6126.

## Details

### Fix authentication when passing `token`

See c0a77dc943de68a17f23f141517028c734c78623.

The root issue arose when `token` was set on an already instantiated `DownloadConfig` and was therefore not propagated to `self._storage_options`:
```python
download_config.token = token
```
As this usage pattern is very common, the fix consists in overriding `DownloadConfig.__setattr__` (a minimal sketch follows below).

This fixes authentication issues in the following functions:
- `load_dataset` and `load_dataset_builder`
- `Dataset.push_to_hub` and `DatasetDict.push_to_hub`
- `inspect.get_dataset_config_info`, `inspect.get_dataset_infos` and `inspect.get_dataset_split_names`

### Fix authentication in `Audio.decode_example` and `Image.decode_example`

See 58e62af004b6b8b84dcfd897a4bc71637cfa6c3f.

The `token` was not set because the code wrongly tried to parse the `repo_id` from an HTTP URL (`"http://..."`) instead of an `HfFileSystem` URL (`"hf://..."`).

### Fix authentication to resolve `data_files` in repositories without script

See e4684fc1032321abf0d494b0c130ea7c82ebda80.

This is fixed by passing `download_config` to the function `create_builder_configs_from_metadata_configs`.
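A minimal sketch of the `__setattr__` approach described above, assuming a much-simplified `DownloadConfig` with only two fields and a guessed `storage_options` key layout (the real class defines many more fields and may nest the token differently):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class DownloadConfig:
    """Simplified stand-in for `datasets.DownloadConfig`; the real class
    has many more fields and its own storage_options layout."""

    token: Optional[str] = None
    storage_options: dict = field(default_factory=dict)

    def __post_init__(self):
        # Covers the case where the token is passed at construction time.
        if self.token is not None:
            self.storage_options.setdefault("hf", {})["token"] = self.token

    def __setattr__(self, name, value):
        super().__setattr__(name, value)
        # Propagate a token assigned *after* instantiation, which is the
        # pattern (`download_config.token = token`) that previously dropped
        # the credential. The hasattr guard skips the assignments made by
        # the dataclass __init__ before storage_options exists.
        if name == "token" and value is not None and hasattr(self, "storage_options"):
            self.storage_options.setdefault("hf", {})["token"] = value


config = DownloadConfig()
config.token = "<TOKEN>"  # now reaches storage_options as well
assert config.storage_options["hf"]["token"] == "<TOKEN>"
```

Routing every attribute assignment through `__setattr__` keeps `storage_options` in sync even under post-construction mutation, which is exactly the failure mode described above.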
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6127/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6127/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6126/comments
https://api.github.com/repos/huggingface/datasets/issues/6126/events
https://github.com/huggingface/datasets/issues/6126
1,839,675,320
I_kwDODunzps5tpze4
6,126
Private datasets do not load when passing token
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Our CI did not catch this issue because with current implementation, stored token in `HfFolder` (which always exists) is used by default.", "I can confirm this and have the same problem (and just went almost crazy because I couldn't figure out the source of this problem because on another computer everything worked well even with `DownloadMode.FORCE_REDOWNLOAD`).", "We are planning to do a patch release today, after the merge of the fix:\r\n- #6127\r\n\r\nIn the meantime, the problem can be circumvented by passing `download_config` instead:\r\n```python\r\nfrom datasets import DownloadConfig, load_dataset\r\n\r\nload_dataset(\"<DATASET-NAME>\", split=\"train\", download_config=DownloadConfig(token=\"<TOKEN>\"))\r\n``` ", "> We are planning to do a patch release today, after the merge of the fix:\r\n> \r\n> * [Fix authentication issues #6127](https://github.com/huggingface/datasets/pull/6127)\r\n> \r\n> \r\n> In the meantime, the problem can be circumvented by passing `download_config` instead:\r\n> \r\n> ```python\r\n> from datasets import DownloadConfig, load_dataset\r\n> \r\n> load_dataset(\"<DATASET-NAME>\", split=\"train\", download_config=DownloadConfig(token=\"<TOKEN>\"))\r\n> ```\r\n\r\nThis did not work for me (there was some other error with the split being an unexpected size 0). Downgrading to 2.13 fixed it...." ]
2023-08-07T15:06:47
2023-08-08T15:16:23
2023-08-08T15:16:23
MEMBER
null
null
null
### Describe the bug Since the release of `datasets` 2.14, private/gated datasets do not load when passing `token`: they raise `EmptyDatasetError`. This is a non-planned backward incompatible breaking change. Note that private datasets do load if instead `download_config` is passed: ```python from datasets import DownloadConfig, load_dataset ds = load_dataset("albertvillanova/tmp-private", split="train", download_config=DownloadConfig(token="<MY-TOKEN>")) ds ``` gives ``` Dataset({ features: ['text'], num_rows: 4 }) ``` ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>") ``` gives ``` --------------------------------------------------------------------------- EmptyDatasetError Traceback (most recent call last) [<ipython-input-2-25b48732107a>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>") 5 frames [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2107 2108 # Create a dataset builder -> 2109 builder_instance = load_dataset_builder( 2110 path=path, 2111 name=name, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs) 1793 download_config = download_config.copy() if download_config else DownloadConfig() 1794 download_config.storage_options.update(storage_options) -> 1795 dataset_module = dataset_module_factory( 1796 path, 1797 revision=revision, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1484 raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None 1485 if isinstance(e1, EmptyDatasetError): -> 1486 raise e1 from None 1487 if isinstance(e1, FileNotFoundError): 1488 raise FileNotFoundError( [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1474 download_config=download_config, 1475 download_mode=download_mode, -> 1476 ).get_module() 1477 except ( 1478 Exception [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in get_module(self) 1030 sanitize_patterns(self.data_files) 1031 if self.data_files is not None -> 1032 else get_data_patterns(base_path, download_config=self.download_config) 1033 ) 1034 data_files = DataFilesDict.from_patterns( [/usr/local/lib/python3.10/dist-packages/datasets/data_files.py](https://localhost:8080/#) in get_data_patterns(base_path, download_config) 457 return _get_data_files_patterns(resolver) 458 except FileNotFoundError: --> 459 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None 460 461 EmptyDatasetError: The directory at 
hf://datasets/albertvillanova/tmp-private@79b9e4fe79670a9a050d6ebc385464891915a71d doesn't contain any data files ``` ### Expected behavior The dataset should load. ### Environment info - `datasets` version: 2.14.3 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
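Until this regression is fixed, a second workaround that may sidestep the problem (a sketch, not an official fix: it assumes storing the token globally via `huggingface_hub` is acceptable, so that `load_dataset` picks it up implicitly):

```python
from huggingface_hub import login

from datasets import load_dataset

# "<MY-TOKEN>" is a placeholder, as in the snippets above; login() stores the
# token globally so subsequent Hub requests are authenticated without passing it.
login(token="<MY-TOKEN>")
ds = load_dataset("albertvillanova/tmp-private", split="train")
```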
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6126/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6125
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6125/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6125/comments
https://api.github.com/repos/huggingface/datasets/issues/6125/events
https://github.com/huggingface/datasets/issues/6125
1,837,980,986
I_kwDODunzps5tjV06
6,125
Reinforcement Learning and Robotics are not task categories in HF datasets metadata
{ "login": "StoneT2000", "id": 35373228, "node_id": "MDQ6VXNlcjM1MzczMjI4", "avatar_url": "https://avatars.githubusercontent.com/u/35373228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StoneT2000", "html_url": "https://github.com/StoneT2000", "followers_url": "https://api.github.com/users/StoneT2000/followers", "following_url": "https://api.github.com/users/StoneT2000/following{/other_user}", "gists_url": "https://api.github.com/users/StoneT2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/StoneT2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StoneT2000/subscriptions", "organizations_url": "https://api.github.com/users/StoneT2000/orgs", "repos_url": "https://api.github.com/users/StoneT2000/repos", "events_url": "https://api.github.com/users/StoneT2000/events{/privacy}", "received_events_url": "https://api.github.com/users/StoneT2000/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2023-08-05T23:59:42
2023-08-18T12:28:42
2023-08-18T12:28:42
NONE
null
null
null
### Describe the bug In https://huggingface.co/models there are task categories for RL and robotics, but there are none in https://huggingface.co/datasets. Our lab is currently moving our datasets over to Hugging Face and would like to be able to add those two tags. Moreover, we see some older datasets that do have these tags, but we can't seem to add them ourselves. ### Steps to reproduce the bug 1. Create a new dataset on Hugging Face 2. Try to type reinforcement-learning or robotics into the task categories; it does not allow you to commit ### Expected behavior Expected to be able to add RL and robotics as task categories, as some previous datasets have these tags. ### Environment info N/A
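For reference, these are the tags being requested; a sketch of how they could be declared programmatically once supported, using `huggingface_hub`'s card-data helpers (the tag names are taken from the models side and are exactly the ones the Hub currently rejects for datasets):

```python
from huggingface_hub import DatasetCardData

# Sketch of the intended dataset card metadata for an RL/robotics dataset.
card_data = DatasetCardData(task_categories=["reinforcement-learning", "robotics"])
print(card_data.to_yaml())  # YAML header that would go at the top of the README
```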
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6125/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6124/comments
https://api.github.com/repos/huggingface/datasets/issues/6124/events
https://github.com/huggingface/datasets/issues/6124
1,837,868,112
I_kwDODunzps5ti6RQ
6,124
Datasets crashing runs due to KeyError
{ "login": "conceptofmind", "id": 25208228, "node_id": "MDQ6VXNlcjI1MjA4MjI4", "avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/conceptofmind", "html_url": "https://github.com/conceptofmind", "followers_url": "https://api.github.com/users/conceptofmind/followers", "following_url": "https://api.github.com/users/conceptofmind/following{/other_user}", "gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}", "starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions", "organizations_url": "https://api.github.com/users/conceptofmind/orgs", "repos_url": "https://api.github.com/users/conceptofmind/repos", "events_url": "https://api.github.com/users/conceptofmind/events{/privacy}", "received_events_url": "https://api.github.com/users/conceptofmind/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "i once had the same error and I could fix that by pushing a fake or a dummy commit on my hugging face dataset repo", "Hi! We need a reproducer to fix this. Can you provide a link to the dataset (if it's public)?", "> Hi! We need a reproducer to fix this. Can you provide a link to the dataset (if it's public)?\r\n\r\nHi Mario,\r\n\r\nUnfortunately, the dataset in question is currently private until the model is trained and released.\r\n\r\nThis is not happening with one dataset but numerous hosted private datasets.\r\n\r\nI am only loading the dataset and doing nothing else currently. It seems to happen completely sporadically.\r\n\r\nThank you,\r\n\r\nEnrico" ]
2023-08-05T17:48:56
2023-08-20T17:33:15
null
NONE
null
null
null
### Describe the bug Hi all, I have been running into a pretty persistent issue recently when trying to load datasets. ```python train_dataset = load_dataset( 'llama-2-7b-tokenized', split='train' ) ``` I receive a KeyError which crashes the runs. ``` Traceback (most recent call last): main() train_dataset = load_dataset( ^^^^^^^^^^^^^ builder_instance = load_dataset_builder( ^^^^^^^^^^^^^^^^^^^^^ dataset_module = dataset_module_factory( ^^^^^^^^^^^^^^^^^^^^^^^ raise e1 from None ).get_module() ^^^^^^^^^^^^ else get_data_patterns(base_path, download_config=self.download_config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ return _get_data_files_patterns(resolver) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ data_files = pattern_resolver(pattern) ^^^^^^^^^^^^^^^^^^^^^^^^^ fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ paths = [f for f in sorted(fs.glob(paths)) if not fs.isdir(f)] ^^^^^^^^^^^^^^ allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ for _, dirs, files in self.walk(path, maxdepth, detail=True, **kwargs): listing = self.ls(path, detail=True, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ "last_modified": parse_datetime(tree_item["lastCommit"]["date"]), ~~~~~~~~~^^^^^^^^^^^^^^ KeyError: 'lastCommit' ``` Any help would be greatly appreciated. Thank you, Enrico ### Steps to reproduce the bug Load the dataset from the Hugging Face Hub. ```python train_dataset = load_dataset( 'llama-2-7b-tokenized', split='train' ) ``` ### Expected behavior Loads the dataset. ### Environment info datasets-2.14.3 CUDA 11.8 Python 3.11
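Since the crash is sporadic, a possible stopgap while this is investigated is to retry the load when the `KeyError` surfaces. This is only a sketch of a mitigation, not a fix for the underlying Hub response issue, and the retry count and delay are arbitrary assumptions:

```python
import time

from datasets import load_dataset

def load_dataset_with_retries(path, retries=5, delay=30, **kwargs):
    """Retry load_dataset when the sporadic KeyError('lastCommit') surfaces."""
    for attempt in range(retries):
        try:
            return load_dataset(path, **kwargs)
        except KeyError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)  # give the Hub a moment before retrying
```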
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6124/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6123/comments
https://api.github.com/repos/huggingface/datasets/issues/6123/events
https://github.com/huggingface/datasets/issues/6123
1,837,789,294
I_kwDODunzps5tinBu
6,123
Inaccurate Bounding Boxes in "wildreceipt" Dataset
{ "login": "HamzaGbada", "id": 50714796, "node_id": "MDQ6VXNlcjUwNzE0Nzk2", "avatar_url": "https://avatars.githubusercontent.com/u/50714796?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HamzaGbada", "html_url": "https://github.com/HamzaGbada", "followers_url": "https://api.github.com/users/HamzaGbada/followers", "following_url": "https://api.github.com/users/HamzaGbada/following{/other_user}", "gists_url": "https://api.github.com/users/HamzaGbada/gists{/gist_id}", "starred_url": "https://api.github.com/users/HamzaGbada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HamzaGbada/subscriptions", "organizations_url": "https://api.github.com/users/HamzaGbada/orgs", "repos_url": "https://api.github.com/users/HamzaGbada/repos", "events_url": "https://api.github.com/users/HamzaGbada/events{/privacy}", "received_events_url": "https://api.github.com/users/HamzaGbada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! Thanks for the investigation, but we are not the authors of these datasets, so please report this on the Hub instead so that the actual authors can fix it." ]
2023-08-05T14:34:13
2023-08-17T14:25:27
2023-08-17T14:25:26
NONE
null
null
null
### Describe the bug I would like to bring to your attention an issue related to the accuracy of bounding boxes within the "wildreceipt" dataset, which is made available through the Hugging Face API. Specifically, I have identified a discrepancy between the bounding boxes generated by the dataset loading commands, namely `load_dataset("Theivaprakasham/wildreceipt")` and `load_dataset("jinhybr/WildReceipt")`, and the actual labels and corresponding bounding boxes present in the dataset. To illustrate this divergence, I've provided two examples in the form of screenshots. These screenshots highlight the contrasting outcomes between my personal implementation of the dataloader and the implementation offered by Hugging Face: **Example 1:** ![image](https://github.com/huggingface/datasets/assets/50714796/7a6604d2-899d-4102-a008-1a28c90698f1) ![image](https://github.com/huggingface/datasets/assets/50714796/eba458c7-d3af-4868-a520-8b683aa96f66) ![image](https://github.com/huggingface/datasets/assets/50714796/9f394891-5f5b-46f7-8e52-071b724aedab) **Example 2:** ![image](https://github.com/huggingface/datasets/assets/50714796/a2b2a8d3-124e-4990-b64a-5133cf4be2fe) ![image](https://github.com/huggingface/datasets/assets/50714796/6ee25642-35aa-40ad-ac1e-899d33be90df) ![image](https://github.com/huggingface/datasets/assets/50714796/5e42ff91-9fc4-4520-8803-0e225656f96c) It's important to note that my dataloader implementation is based on the same dataset files as utilized in the Hugging Face implementation. For your reference, you can access the dataset files through this link: [wildreceipt dataset files](https://download.openmmlab.com/mmocr/data/wildreceipt.tar). This inconsistency in bounding box accuracy warrants investigation and rectification to maintain the integrity of the "wildreceipt" dataset. Your attention and assistance in addressing this matter would be greatly appreciated. ### Steps to reproduce the bug ```python import matplotlib.pyplot as plt from datasets import load_dataset # Define functions to convert bounding box formats def convert_format1(box): x, y, w, h = box x2, y2 = x + w, y + h return [x, y, x2, y2] def convert_format2(box): x1, y1, x2, y2 = box return [x1, y1, x2, y2] def plot_cropped_image(image, box, title): cropped_image = image.crop(box) plt.imshow(cropped_image) plt.title(title) plt.axis('off') plt.savefig(title+'.png') plt.show() doc_index = 1 word_index = 3 dataset = load_dataset("Theivaprakasham/wildreceipt")['train'] bbox_hugging_face = dataset[doc_index]['bboxes'][word_index] text_unit_face = dataset[doc_index]['words'][word_index] image_hugging = dataset[doc_index]['image'] # NOTE: image_hugging was undefined in the original snippet; this assumes the page image is exposed under an 'image' column common_box_hugface_1 = convert_format1(bbox_hugging_face) common_box_hugface_2 = convert_format2(bbox_hugging_face) plot_cropped_image(image_hugging, common_box_hugface_1, f'Hugging Face Bounding boxes (x,y,w,h format) \n its associated text unit: {text_unit_face}') plot_cropped_image(image_hugging, common_box_hugface_2, f'Hugging Face Bounding boxes (x1,y1,x2, y2 format) \n its associated text unit: {text_unit_face}') ``` ### Expected behavior The bounding boxes produced by the Hugging Face implementations of the "wildreceipt" dataset should accurately match the actual labels and bounding boxes of the dataset. ### Environment info - Python version: 3.8 - Hugging Face datasets version: 2.14.2 - Dataset file taken from this link: https://download.openmmlab.com/mmocr/data/wildreceipt.tar
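To narrow down which loader is at fault, a sample can be cross-checked against the raw MMOCR annotation files from the tarball linked above. A sketch only: it assumes the archive is extracted locally and that each line of `wildreceipt/train.txt` is a JSON record whose `annotations` entries carry an 8-value quadrilateral `box`, as in the MMOCR release:

```python
import json

# Read the first raw annotation record and inspect one word-level box.
with open("wildreceipt/train.txt") as f:
    record = json.loads(f.readline())

ann = record["annotations"][3]
print(record["file_name"])
print(ann["text"], ann["box"])  # 8 coordinates (a quadrilateral), not (x, y, w, h)
```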
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6123/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6122/comments
https://api.github.com/repos/huggingface/datasets/issues/6122/events
https://github.com/huggingface/datasets/issues/6122
1,837,335,721
I_kwDODunzps5tg4Sp
6,122
Upload README via `push_to_hub`
{ "login": "liyucheng09", "id": 27999909, "node_id": "MDQ6VXNlcjI3OTk5OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liyucheng09", "html_url": "https://github.com/liyucheng09", "followers_url": "https://api.github.com/users/liyucheng09/followers", "following_url": "https://api.github.com/users/liyucheng09/following{/other_user}", "gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}", "starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions", "organizations_url": "https://api.github.com/users/liyucheng09/orgs", "repos_url": "https://api.github.com/users/liyucheng09/repos", "events_url": "https://api.github.com/users/liyucheng09/events{/privacy}", "received_events_url": "https://api.github.com/users/liyucheng09/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "You can use `huggingface_hub`'s [Card API](https://huggingface.co/docs/huggingface_hub/package_reference/cards) to programmatically push a dataset card to the Hub." ]
2023-08-04T21:00:27
2023-08-21T18:18:54
2023-08-21T18:18:54
NONE
null
null
null
### Feature request `push_to_hub` now allows users to upload datasets programmatically. However, based on the latest docs, we still need to open the dataset page to add a README file manually. That said, I did discover the snippet that initializes a README for every `push_to_hub`: ``` dataset_card = ( DatasetCard( "---\n" + str(dataset_card_data) + "\n---\n" + f'# Dataset Card for "{repo_id.split("/")[-1]}"\n\n[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)' ) if dataset_card is None else dataset_card ) HfApi(endpoint=config.HF_ENDPOINT).upload_file( path_or_fileobj=str(dataset_card).encode(), path_in_repo="README.md", repo_id=repo_id, token=token, repo_type="dataset", revision=branch, ) ``` So, if we could enable `push_to_hub` to upload a README file of our own instead of the auto-generated one, it would save a ton of time and would definitely alleviate the current "lack-of-dataset-card" situation. ### Motivation As elaborated above. ### Your contribution I might be able to make a PR.
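As noted in the comments, `huggingface_hub`'s Card API already makes this scriptable today; a minimal sketch, where the repo id and card content are placeholders:

```python
from huggingface_hub import DatasetCard

# Author the card by hand instead of relying on the auto-generated stub.
content = """---
license: mit
---
# My dataset

A hand-written dataset card pushed alongside the data.
"""
DatasetCard(content).push_to_hub("username/my-dataset")  # hypothetical repo id
```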
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6122/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6121/comments
https://api.github.com/repos/huggingface/datasets/issues/6121/events
https://github.com/huggingface/datasets/pull/6121
1,836,761,712
PR_kwDODunzps5XMsWd
6,121
Small typo in the code example of creating an imagefolder dataset
{ "login": "WangXin93", "id": 19688994, "node_id": "MDQ6VXNlcjE5Njg4OTk0", "avatar_url": "https://avatars.githubusercontent.com/u/19688994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WangXin93", "html_url": "https://github.com/WangXin93", "followers_url": "https://api.github.com/users/WangXin93/followers", "following_url": "https://api.github.com/users/WangXin93/following{/other_user}", "gists_url": "https://api.github.com/users/WangXin93/gists{/gist_id}", "starred_url": "https://api.github.com/users/WangXin93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WangXin93/subscriptions", "organizations_url": "https://api.github.com/users/WangXin93/orgs", "repos_url": "https://api.github.com/users/WangXin93/repos", "events_url": "https://api.github.com/users/WangXin93/events{/privacy}", "received_events_url": "https://api.github.com/users/WangXin93/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nI found a small typo in the code example of create imagefolder dataset. It confused me a little when I first saw it.\r\n\r\nBest Regards.\r\n\r\nXin" ]
2023-08-04T13:36:59
2023-08-04T13:45:32
2023-08-04T13:41:43
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6121", "html_url": "https://github.com/huggingface/datasets/pull/6121", "diff_url": "https://github.com/huggingface/datasets/pull/6121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6121.patch", "merged_at": null }
Fix typo in the code example of loading an imagefolder dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6121/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6120/comments
https://api.github.com/repos/huggingface/datasets/issues/6120/events
https://github.com/huggingface/datasets/issues/6120
1,836,026,938
I_kwDODunzps5tb4w6
6,120
Lookahead streaming support?
{ "login": "PicoCreator", "id": 17175484, "node_id": "MDQ6VXNlcjE3MTc1NDg0", "avatar_url": "https://avatars.githubusercontent.com/u/17175484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PicoCreator", "html_url": "https://github.com/PicoCreator", "followers_url": "https://api.github.com/users/PicoCreator/followers", "following_url": "https://api.github.com/users/PicoCreator/following{/other_user}", "gists_url": "https://api.github.com/users/PicoCreator/gists{/gist_id}", "starred_url": "https://api.github.com/users/PicoCreator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PicoCreator/subscriptions", "organizations_url": "https://api.github.com/users/PicoCreator/orgs", "repos_url": "https://api.github.com/users/PicoCreator/repos", "events_url": "https://api.github.com/users/PicoCreator/events{/privacy}", "received_events_url": "https://api.github.com/users/PicoCreator/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "In which format is your dataset? We could expose the `pre_buffer` flag for Parquet to use PyArrow's background thread pool to speed up loading. " ]
2023-08-04T04:01:52
2023-08-17T17:48:42
null
NONE
null
null
null
### Feature request From what I understand, a streaming dataset currently pulls and processes the data as it is requested. This can introduce significant latency when data is loaded into the training process, since each segment has to be waited for. While the delays might be dataset specific (or even mapping-instruction/tokenizer specific), is it possible to introduce a `streaming_lookahead` parameter to be used for predictable workloads (even a shuffled dataset with a fixed seed)? Since we can predict in advance what the next few data samples will be, we could fetch them while the current set is being trained on. With enough CPU and bandwidth to keep up with the training process, and a sufficiently large lookahead, this would reduce the various latencies involved in waiting for the dataset to be ready between batches. ### Motivation Faster streaming performance while training over extra-large, TB-sized datasets. ### Your contribution I currently use HF datasets with the PyTorch Lightning trainer for the RWKV project, and would be able to help test this feature if supported.
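For illustration, a user-land approximation of the idea (a sketch only; `streaming_lookahead` itself does not exist, and this simply prefetches any iterable on a background thread into a bounded queue):

```python
import threading
from queue import Queue

def prefetch(iterable, lookahead=64):
    """Pull up to `lookahead` samples ahead of the consumer on a background thread."""
    queue = Queue(maxsize=lookahead)
    sentinel = object()

    def producer():
        for item in iterable:
            queue.put(item)
        queue.put(sentinel)  # signal exhaustion to the consumer

    threading.Thread(target=producer, daemon=True).start()
    while (item := queue.get()) is not sentinel:
        yield item

# Usage sketch: for batch in prefetch(iter(streaming_dataset), lookahead=256): ...
```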
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6120/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6119
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6119/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6119/comments
https://api.github.com/repos/huggingface/datasets/issues/6119/events
https://github.com/huggingface/datasets/pull/6119
1,835,996,350
PR_kwDODunzps5XKI19
6,119
[Docs] Add description of `select_columns` to guide
{ "login": "unifyh", "id": 18213435, "node_id": "MDQ6VXNlcjE4MjEzNDM1", "avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unifyh", "html_url": "https://github.com/unifyh", "followers_url": "https://api.github.com/users/unifyh/followers", "following_url": "https://api.github.com/users/unifyh/following{/other_user}", "gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}", "starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unifyh/subscriptions", "organizations_url": "https://api.github.com/users/unifyh/orgs", "repos_url": "https://api.github.com/users/unifyh/repos", "events_url": "https://api.github.com/users/unifyh/events{/privacy}", "received_events_url": "https://api.github.com/users/unifyh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007755 / 0.011353 (-0.003598) | 0.004618 / 0.011008 (-0.006391) | 0.098132 / 0.038508 (0.059624) | 0.086759 / 0.023109 (0.063650) | 0.374668 / 0.275898 (0.098770) | 0.417131 / 0.323480 (0.093651) | 0.004604 / 0.007986 (-0.003382) | 0.005461 / 0.004328 (0.001132) | 0.077249 / 0.004250 (0.072999) | 0.063247 / 0.037052 (0.026195) | 0.391801 / 0.258489 (0.133312) | 0.432139 / 0.293841 (0.138298) | 0.036755 / 0.128546 (-0.091791) | 0.010011 / 0.075646 (-0.065636) | 0.346175 / 0.419271 (-0.073097) | 0.061503 / 0.043533 (0.017971) | 0.374063 / 0.255139 (0.118924) | 0.435873 / 0.283200 (0.152673) | 0.029476 / 0.141683 (-0.112207) | 1.786945 / 1.452155 (0.334790) | 1.857190 / 1.492716 (0.364474) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253939 / 0.018006 (0.235933) | 0.506847 / 0.000490 (0.506358) | 0.007278 / 0.000200 (0.007079) | 0.000451 / 0.000054 (0.000397) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032938 / 0.037411 (-0.004474) | 0.097493 / 0.014526 (0.082967) | 0.112090 / 0.176557 (-0.064467) | 0.177986 / 0.737135 (-0.559149) | 0.112060 / 0.296338 (-0.184278) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.481858 / 0.215209 (0.266649) | 4.814894 / 2.077655 (2.737239) | 2.496428 
/ 1.504120 (0.992308) | 2.309965 / 1.541195 (0.768770) | 2.393819 / 1.468490 (0.925329) | 0.564670 / 4.584777 (-4.020107) | 4.151222 / 3.745712 (0.405510) | 3.676115 / 5.269862 (-1.593747) | 2.346165 / 4.565676 (-2.219512) | 0.066344 / 0.424275 (-0.357931) | 0.009006 / 0.007607 (0.001399) | 0.567699 / 0.226044 (0.341654) | 5.686799 / 2.268929 (3.417871) | 3.031044 / 55.444624 (-52.413580) | 2.606259 / 6.876477 (-4.270217) | 2.864876 / 2.142072 (0.722804) | 0.681730 / 4.805227 (-4.123498) | 0.155405 / 6.500664 (-6.345259) | 0.071492 / 0.075469 (-0.003977) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.514446 / 1.841788 (-0.327341) | 22.624912 / 8.074308 (14.550604) | 16.754145 / 10.191392 (6.562753) | 0.193113 / 0.680424 (-0.487311) | 0.021808 / 0.534201 (-0.512393) | 0.468241 / 0.579283 (-0.111042) | 0.499647 / 0.434364 (0.065283) | 0.539571 / 0.540337 (-0.000766) | 0.771268 / 1.386936 (-0.615668) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007562 / 0.011353 (-0.003791) | 0.004548 / 0.011008 (-0.006460) | 0.075998 / 0.038508 (0.037490) | 0.081648 / 0.023109 (0.058539) | 0.462876 / 0.275898 (0.186978) | 0.499366 / 0.323480 (0.175886) | 0.005839 / 0.007986 (-0.002147) | 0.003753 / 0.004328 (-0.000576) | 0.075918 / 0.004250 (0.071668) | 0.063233 / 0.037052 (0.026181) | 0.459024 / 0.258489 (0.200535) | 0.506388 / 0.293841 (0.212547) | 0.036179 / 0.128546 (-0.092367) | 0.009961 / 0.075646 (-0.065685) | 0.082061 / 0.419271 (-0.337211) | 0.056469 / 0.043533 (0.012936) | 0.459567 / 0.255139 (0.204428) | 0.482578 / 0.283200 (0.199378) | 0.026363 / 0.141683 (-0.115320) | 1.742247 / 1.452155 (0.290092) | 1.807166 / 1.492716 (0.314450) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.330526 / 0.018006 (0.312520) | 0.511674 / 0.000490 (0.511184) | 0.040969 / 0.000200 (0.040769) | 0.000176 / 0.000054 (0.000121) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035492 / 0.037411 (-0.001920) | 0.104338 / 0.014526 (0.089813) | 0.116973 / 0.176557 (-0.059583) | 0.180218 / 0.737135 (-0.556917) | 0.118801 / 0.296338 (-0.177538) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492196 / 0.215209 (0.276987) | 4.910271 / 2.077655 (2.832616) | 2.542562 / 1.504120 (1.038442) | 2.333516 / 1.541195 (0.792321) | 2.439682 / 1.468490 (0.971192) | 0.571966 / 4.584777 (-4.012811) | 4.089801 / 3.745712 (0.344089) | 3.732129 / 5.269862 (-1.537733) | 2.375887 / 4.565676 (-2.189789) | 0.067376 / 0.424275 (-0.356900) | 0.008350 / 0.007607 (0.000743) | 0.583942 / 0.226044 (0.357897) | 5.840002 / 2.268929 (3.571074) | 3.062520 / 55.444624 (-52.382104) | 2.722512 / 6.876477 (-4.153965) | 2.938307 / 2.142072 (0.796234) | 0.689459 / 4.805227 (-4.115769) | 0.155632 / 6.500664 (-6.345032) | 0.072387 / 0.075469 (-0.003082) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.595587 / 1.841788 (-0.246201) | 23.035478 / 8.074308 (14.961170) | 16.457675 / 10.191392 (6.266283) | 0.170819 / 0.680424 (-0.509605) | 0.022042 / 0.534201 (-0.512159) | 0.466824 / 0.579283 (-0.112459) | 0.486350 / 0.434364 (0.051986) | 0.574330 / 0.540337 (0.033993) | 0.764913 / 1.386936 (-0.622023) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#664a1cb72ea1e6ef7c47e671e2686ca4a35e8d63 \"CML watermark\")\n" ]
2023-08-04T03:13:30
2023-08-16T10:13:02
2023-08-16T10:02:52
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6119", "html_url": "https://github.com/huggingface/datasets/pull/6119", "diff_url": "https://github.com/huggingface/datasets/pull/6119.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6119.patch", "merged_at": "2023-08-16T10:02:52" }
Closes #6116
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6119/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6118/comments
https://api.github.com/repos/huggingface/datasets/issues/6118/events
https://github.com/huggingface/datasets/issues/6118
1,835,940,417
I_kwDODunzps5tbjpB
6,118
IterableDataset.from_generator() fails with pickle error when provided a generator or iterator
{ "login": "finkga", "id": 1281051, "node_id": "MDQ6VXNlcjEyODEwNTE=", "avatar_url": "https://avatars.githubusercontent.com/u/1281051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/finkga", "html_url": "https://github.com/finkga", "followers_url": "https://api.github.com/users/finkga/followers", "following_url": "https://api.github.com/users/finkga/following{/other_user}", "gists_url": "https://api.github.com/users/finkga/gists{/gist_id}", "starred_url": "https://api.github.com/users/finkga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/finkga/subscriptions", "organizations_url": "https://api.github.com/users/finkga/orgs", "repos_url": "https://api.github.com/users/finkga/repos", "events_url": "https://api.github.com/users/finkga/events{/privacy}", "received_events_url": "https://api.github.com/users/finkga/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! `IterableDataset.from_generator` expects a generator function, not the object (to be consistent with `Dataset.from_generator`).\r\n\r\nYou can fix the above snippet as follows:\r\n```python\r\ntrain_dataset = IterableDataset.from_generator(line_generator, fn_kwargs={\"files\": model_training_files})\r\n```" ]
2023-08-04T01:45:04
2023-08-17T17:58:27
null
NONE
null
null
null
### Describe the bug **Description** Providing a generator in an instantiation of IterableDataset.from_generator() fails with `TypeError: cannot pickle 'generator' object` when the `generator` argument is supplied with a generator object rather than a generator function. **Code example** ``` from pathlib import Path from typing import List from datasets import IterableDataset def line_generator(files: List[Path]): if isinstance(files, str): files = [Path(files)] for file in files: if isinstance(file, str): file = Path(file) yield from open(file, 'r').readlines() ... model_training_files = ['file1.txt', 'file2.txt', 'file3.txt'] train_dataset = IterableDataset.from_generator(generator=line_generator(model_training_files)) ``` **Traceback** Traceback (most recent call last): File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/contextlib.py", line 135, in __exit__ self.gen.throw(type, value, traceback) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 691, in _no_cache_fields yield File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 701, in dumps dump(obj, file) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 676, in dump Pickler(file, recurse=True).dump(obj) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 394, in dump StockPickler.dump(self, obj) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 487, in dump self.save(obj) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 1186, in save_module_dict StockPickler.save_dict(pickler, obj) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 971, in save_dict self._batch_setitems(obj.items()) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 997, in _batch_setitems save(v) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 578, in save rv = reduce(self.proto) TypeError: cannot pickle 'generator' object ### Steps to reproduce the bug 1. Create a set of text files to iterate over. 2. Create a generator that returns the lines in each file until all files are exhausted. 3. Instantiate the dataset over the generator via IterableDataset.from_generator(). 4. Wait for the explosion. ### Expected behavior I would expect that, since the function claims to accept a generator, there would be no crash. 
Instead, I would expect the dataset to return all the lines in the files as queued up in the `line_generator()` function. ### Environment info datasets.__version__ == '2.13.1' Python 3.9.6 Platform: Darwin WE35261 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:22 PDT 2023; root:xnu-8796.121.3~7/RELEASE_X86_64 x86_64
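Building on the fix given in the comments, a complete corrected sketch would pass the generator function itself together with its arguments (note that the keyword in the `from_generator` signature is `gen_kwargs`), and would yield dict examples, which `from_generator` expects; the file names are the placeholders from the report:

```python
from pathlib import Path
from typing import List, Union

from datasets import IterableDataset

def line_generator(files: List[Union[str, Path]]):
    for file in files:
        with open(file, "r") as f:
            for line in f:
                yield {"text": line}  # yield dict examples, not raw strings

model_training_files = ["file1.txt", "file2.txt", "file3.txt"]
train_dataset = IterableDataset.from_generator(
    line_generator, gen_kwargs={"files": model_training_files}
)
```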
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6118/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6117/comments
https://api.github.com/repos/huggingface/datasets/issues/6117/events
https://github.com/huggingface/datasets/pull/6117
1,835,213,848
PR_kwDODunzps5XHktw
6,117
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6117). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012516 / 0.011353 (0.001163) | 0.004725 / 0.011008 (-0.006283) | 0.112245 / 0.038508 (0.073736) | 0.079146 / 0.023109 (0.056037) | 0.386415 / 0.275898 (0.110517) | 0.420441 / 0.323480 (0.096961) | 0.005682 / 0.007986 (-0.002304) | 0.004169 / 0.004328 (-0.000160) | 0.077847 / 0.004250 (0.073597) | 0.055763 / 0.037052 (0.018711) | 0.385529 / 0.258489 (0.127040) | 0.422711 / 0.293841 (0.128870) | 0.047212 / 0.128546 (-0.081334) | 0.013711 / 0.075646 (-0.061935) | 0.342856 / 0.419271 (-0.076416) | 0.066788 / 0.043533 (0.023255) | 0.380728 / 0.255139 (0.125589) | 0.416241 / 0.283200 (0.133041) | 0.034676 / 0.141683 (-0.107007) | 1.679661 / 1.452155 (0.227506) | 1.838014 / 1.492716 (0.345297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219556 / 0.018006 (0.201550) | 0.524728 / 0.000490 (0.524238) | 0.005045 / 0.000200 (0.004845) | 0.000124 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025475 / 0.037411 (-0.011936) | 0.085937 / 0.014526 (0.071412) | 0.099245 / 0.176557 (-0.077311) | 0.158995 / 0.737135 (-0.578141) | 0.101504 / 0.296338 (-0.194835) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.582200 / 0.215209 (0.366991) | 5.794340 / 2.077655 (3.716685) | 2.473635 / 1.504120 (0.969515) | 2.168135 / 1.541195 (0.626941) | 2.215886 / 1.468490 (0.747396) | 0.855599 / 4.584777 (-3.729178) | 5.003067 / 3.745712 (1.257354) | 4.503566 / 5.269862 (-0.766295) | 2.912248 / 4.565676 (-1.653428) | 0.103267 / 0.424275 (-0.321008) | 0.012114 / 0.007607 (0.004507) | 0.712240 / 0.226044 (0.486196) | 7.131946 / 2.268929 (4.863017) | 3.280052 / 55.444624 (-52.164573) | 2.583472 / 6.876477 (-4.293004) | 2.820758 / 2.142072 (0.678686) | 1.132097 / 4.805227 (-3.673131) | 0.232191 / 6.500664 (-6.268473) | 0.082966 / 0.075469 (0.007497) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.581125 / 1.841788 (-0.260662) | 22.723878 / 8.074308 (14.649570) | 19.969347 / 10.191392 (9.777955) | 0.234365 / 0.680424 (-0.446059) | 0.030245 / 0.534201 (-0.503956) | 0.470843 / 0.579283 (-0.108440) | 0.558069 / 0.434364 (0.123705) | 0.534878 / 0.540337 (-0.005460) | 0.801025 / 1.386936 (-0.585911) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008524 / 0.011353 (-0.002829) | 0.005083 / 0.011008 (-0.005925) | 0.078054 / 0.038508 (0.039546) | 0.082025 / 0.023109 (0.058915) | 0.458027 / 0.275898 (0.182129) | 0.498232 / 0.323480 (0.174752) | 0.005938 / 0.007986 (-0.002048) | 0.003776 / 0.004328 (-0.000553) | 0.080413 / 0.004250 (0.076163) | 0.060485 / 0.037052 (0.023433) | 0.462816 / 0.258489 (0.204327) | 0.513970 / 0.293841 (0.220129) | 0.047574 / 0.128546 (-0.080973) | 0.013424 / 0.075646 (-0.062222) | 0.087707 / 0.419271 (-0.331565) | 0.065007 / 0.043533 (0.021474) | 0.465844 / 0.255139 (0.210705) | 0.498474 / 0.283200 (0.215274) | 0.033518 / 0.141683 (-0.108164) | 1.737507 / 1.452155 (0.285352) | 1.848291 / 1.492716 (0.355574) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.316710 / 0.018006 (0.298703) | 0.504415 / 0.000490 (0.503925) | 0.042128 / 0.000200 
(0.041928) | 0.000171 / 0.000054 (0.000117) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032097 / 0.037411 (-0.005314) | 0.099371 / 0.014526 (0.084845) | 0.109311 / 0.176557 (-0.067246) | 0.177373 / 0.737135 (-0.559762) | 0.110753 / 0.296338 (-0.185585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.688060 / 0.215209 (0.472851) | 6.255219 / 2.077655 (4.177564) | 2.696845 / 1.504120 (1.192725) | 2.395424 / 1.541195 (0.854230) | 2.414870 / 1.468490 (0.946380) | 0.865704 / 4.584777 (-3.719073) | 5.086828 / 3.745712 (1.341116) | 4.648107 / 5.269862 (-0.621754) | 3.091119 / 4.565676 (-1.474558) | 0.101787 / 0.424275 (-0.322489) | 0.008829 / 0.007607 (0.001222) | 0.772398 / 0.226044 (0.546354) | 7.700366 / 2.268929 (5.431438) | 3.608632 / 55.444624 (-51.835992) | 2.923309 / 6.876477 (-3.953168) | 2.952141 / 2.142072 (0.810069) | 1.093006 / 4.805227 (-3.712221) | 0.224363 / 6.500664 (-6.276301) | 0.074927 / 0.075469 (-0.000542) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.638414 / 1.841788 (-0.203374) | 23.486781 / 8.074308 (15.412473) | 21.129104 / 10.191392 (10.937712) | 0.259955 / 0.680424 (-0.420469) | 0.027305 / 0.534201 (-0.506895) | 0.464448 / 0.579283 (-0.114835) | 0.553737 / 0.434364 (0.119373) | 0.571318 / 0.540337 (0.030981) | 0.772917 / 1.386936 (-0.614019) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3ec5ee9e78b464364796651d995823c7ecb0f951 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009093 / 0.011353 (-0.002260) | 0.005283 / 0.011008 (-0.005725) | 0.112299 / 0.038508 (0.073791) | 0.081341 / 0.023109 (0.058232) | 0.363799 / 0.275898 (0.087901) | 0.409261 / 0.323480 (0.085781) | 0.006400 / 0.007986 (-0.001586) | 0.003965 / 0.004328 (-0.000363) | 0.074389 / 0.004250 (0.070139) | 0.060654 / 0.037052 (0.023602) | 0.391046 / 0.258489 (0.132557) | 0.430514 / 0.293841 (0.136673) | 0.054900 / 0.128546 (-0.073646) | 0.017972 / 0.075646 (-0.057675) | 0.410875 / 0.419271 (-0.008396) | 0.067405 / 0.043533 (0.023873) | 0.371468 / 0.255139 (0.116329) | 0.435061 / 0.283200 (0.151861) | 0.038063 / 0.141683 (-0.103620) | 1.733509 / 1.452155 (0.281354) | 1.833899 / 1.492716 (0.341182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243230 / 0.018006 (0.225224) | 0.605636 / 0.000490 (0.605146) | 0.004890 / 0.000200 (0.004690) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027624 / 0.037411 (-0.009787) | 0.084799 / 0.014526 (0.070273) | 0.104405 / 0.176557 (-0.072152) | 0.165383 / 0.737135 (-0.571752) | 0.102083 / 0.296338 (-0.194255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.578334 / 0.215209 (0.363125) | 5.369520 / 2.077655 (3.291866) | 2.294174 / 1.504120 (0.790055) | 2.054195 / 1.541195 (0.513000) | 2.007304 / 1.468490 (0.538814) | 0.839283 / 4.584777 (-3.745494) | 5.262288 / 3.745712 (1.516576) | 4.363346 / 5.269862 (-0.906516) | 2.854903 / 4.565676 (-1.710773) | 0.096975 / 0.424275 (-0.327300) | 0.008237 / 0.007607 (0.000630) | 0.646746 / 0.226044 (0.420702) | 6.250621 / 2.268929 (3.981693) | 2.900377 / 55.444624 (-52.544247) | 2.283238 / 6.876477 (-4.593239) | 2.443785 / 2.142072 (0.301713) | 0.991719 / 4.805227 (-3.813508) | 0.189755 / 6.500664 (-6.310909) | 0.067906 / 0.075469 (-0.007563) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.515563 / 1.841788 (-0.326225) | 21.956499 / 8.074308 (13.882191) | 19.161750 / 10.191392 (8.970358) | 0.238199 / 0.680424 (-0.442225) | 0.026771 / 0.534201 (-0.507430) | 0.450195 / 0.579283 (-0.129088) | 0.585168 / 
0.434364 (0.150804) | 0.522945 / 0.540337 (-0.017393) | 0.776244 / 1.386936 (-0.610693) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007997 / 0.011353 (-0.003356) | 0.005021 / 0.011008 (-0.005988) | 0.087308 / 0.038508 (0.048800) | 0.077760 / 0.023109 (0.054650) | 0.425313 / 0.275898 (0.149415) | 0.451470 / 0.323480 (0.127990) | 0.006848 / 0.007986 (-0.001137) | 0.004812 / 0.004328 (0.000484) | 0.071198 / 0.004250 (0.066947) | 0.058325 / 0.037052 (0.021273) | 0.427411 / 0.258489 (0.168922) | 0.466069 / 0.293841 (0.172228) | 0.048686 / 0.128546 (-0.079861) | 0.011841 / 0.075646 (-0.063806) | 0.086225 / 0.419271 (-0.333047) | 0.060500 / 0.043533 (0.016967) | 0.435580 / 0.255139 (0.180441) | 0.456919 / 0.283200 (0.173719) | 0.035094 / 0.141683 (-0.106588) | 1.582805 / 1.452155 (0.130650) | 1.717838 / 1.492716 (0.225122) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283967 / 0.018006 (0.265960) | 0.517496 / 0.000490 (0.517006) | 0.014747 / 0.000200 (0.014547) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027870 / 0.037411 (-0.009541) | 0.083835 / 0.014526 (0.069309) | 0.099157 / 0.176557 (-0.077400) | 0.173210 / 0.737135 (-0.563925) | 0.094212 / 0.296338 (-0.202127) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.535720 / 0.215209 (0.320511) | 5.273730 / 2.077655 (3.196075) | 2.422560 / 1.504120 (0.918440) | 
2.131416 / 1.541195 (0.590222) | 2.192000 / 1.468490 (0.723510) | 0.708469 / 4.584777 (-3.876308) | 4.758092 / 3.745712 (1.012380) | 3.940729 / 5.269862 (-1.329133) | 2.553093 / 4.565676 (-2.012583) | 0.084895 / 0.424275 (-0.339380) | 0.008730 / 0.007607 (0.001123) | 0.646975 / 0.226044 (0.420930) | 6.294811 / 2.268929 (4.025883) | 3.293964 / 55.444624 (-52.150660) | 2.568985 / 6.876477 (-4.307492) | 2.743786 / 2.142072 (0.601713) | 0.899733 / 4.805227 (-3.905494) | 0.193484 / 6.500664 (-6.307181) | 0.070012 / 0.075469 (-0.005457) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.502255 / 1.841788 (-0.339532) | 20.690234 / 8.074308 (12.615926) | 18.375791 / 10.191392 (8.184399) | 0.200135 / 0.680424 (-0.480289) | 0.029434 / 0.534201 (-0.504767) | 0.477267 / 0.579283 (-0.102016) | 0.566869 / 0.434364 (0.132505) | 0.543756 / 0.540337 (0.003418) | 0.700476 / 1.386936 (-0.686460) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef17d9fd6c648bb41d43ba301c3de4d7b6f833d8 \"CML watermark\")\n" ]
2023-08-03T14:46:04
2023-08-03T14:56:59
2023-08-03T14:46:18
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6117", "html_url": "https://github.com/huggingface/datasets/pull/6117", "diff_url": "https://github.com/huggingface/datasets/pull/6117.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6117.patch", "merged_at": "2023-08-03T14:46:18" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6117/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6116/comments
https://api.github.com/repos/huggingface/datasets/issues/6116/events
https://github.com/huggingface/datasets/issues/6116
1,835,098,484
I_kwDODunzps5tYWF0
6,116
[Docs] The "Process" how-to guide lacks description of `select_columns` function
{ "login": "unifyh", "id": 18213435, "node_id": "MDQ6VXNlcjE4MjEzNDM1", "avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unifyh", "html_url": "https://github.com/unifyh", "followers_url": "https://api.github.com/users/unifyh/followers", "following_url": "https://api.github.com/users/unifyh/following{/other_user}", "gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}", "starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unifyh/subscriptions", "organizations_url": "https://api.github.com/users/unifyh/orgs", "repos_url": "https://api.github.com/users/unifyh/repos", "events_url": "https://api.github.com/users/unifyh/events{/privacy}", "received_events_url": "https://api.github.com/users/unifyh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Great idea, feel free to open a PR! :)" ]
2023-08-03T13:45:10
2023-08-16T10:02:53
2023-08-16T10:02:53
CONTRIBUTOR
null
null
null
### Feature request The ["Process" how-to guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the guide. ### Motivation This function is a commonly requested feature (see this [forum thread](https://discuss.huggingface.co/t/how-to-create-a-new-dataset-from-another-dataset-and-select-specific-columns-and-the-data-along-with-the-column/15120), #5468, and #5474). However, it has not been mentioned in the guide since its implementation in PR #5480. Mentioning it in the guide would help future users discover this added feature. ### Your contribution I could submit a PR to add a brief description of the function to the guide.
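For context, a minimal sketch of the kind of usage the guide could document, using only the public `Dataset.select_columns` API; the dataset and column names below are illustrative choices, not ones taken from the issue:

```python
# Minimal sketch of `select_columns`: keep only the named columns.
# The dataset here ("rotten_tomatoes", columns "text" and "label") is
# an illustrative choice, not one mentioned in the issue.
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
print(ds.column_names)  # ['text', 'label']

# Returns a new Dataset containing only the listed columns;
# the original dataset is left untouched.
ds_text_only = ds.select_columns(["text"])
print(ds_text_only.column_names)  # ['text']
```

Compared with chaining `remove_columns`, this expresses the intent directly: you name the columns to keep rather than the ones to drop.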
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6116/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6115/comments
https://api.github.com/repos/huggingface/datasets/issues/6115/events
https://github.com/huggingface/datasets/pull/6115
1,834,765,485
PR_kwDODunzps5XGChP
6,115
Release: 2.14.3
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007578 / 0.011353 (-0.003775) | 0.004271 / 0.011008 (-0.006738) | 0.086607 / 0.038508 (0.048098) | 0.063209 / 0.023109 (0.040099) | 0.351724 / 0.275898 (0.075826) | 0.399261 / 0.323480 (0.075781) | 0.004767 / 0.007986 (-0.003219) | 0.003487 / 0.004328 (-0.000842) | 0.071483 / 0.004250 (0.067233) | 0.051281 / 0.037052 (0.014229) | 0.387726 / 0.258489 (0.129237) | 0.408446 / 0.293841 (0.114605) | 0.041189 / 0.128546 (-0.087357) | 0.012446 / 0.075646 (-0.063200) | 0.331147 / 0.419271 (-0.088124) | 0.056721 / 0.043533 (0.013188) | 0.361306 / 0.255139 (0.106167) | 0.409651 / 0.283200 (0.126451) | 0.035485 / 0.141683 (-0.106198) | 1.461391 / 1.452155 (0.009236) | 1.554820 / 1.492716 (0.062104) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237119 / 0.018006 (0.219113) | 0.518731 / 0.000490 (0.518241) | 0.004192 / 0.000200 (0.003992) | 0.000114 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024912 / 0.037411 (-0.012499) | 0.089420 / 0.014526 (0.074894) | 0.091209 / 0.176557 (-0.085347) | 0.152580 / 0.737135 (-0.584555) | 0.089660 / 0.296338 (-0.206678) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.515223 / 0.215209 (0.300014) | 5.328359 / 2.077655 (3.250705) | 1.974326 / 1.504120 (0.470206) | 1.665216 / 1.541195 (0.124021) | 1.736040 / 1.468490 
(0.267550) | 0.734746 / 4.584777 (-3.850031) | 4.186613 / 3.745712 (0.440901) | 3.535760 / 5.269862 (-1.734102) | 2.333247 / 4.565676 (-2.232429) | 0.071845 / 0.424275 (-0.352430) | 0.006147 / 0.007607 (-0.001460) | 0.546649 / 0.226044 (0.320605) | 5.452281 / 2.268929 (3.183353) | 2.512984 / 55.444624 (-52.931640) | 2.104210 / 6.876477 (-4.772267) | 2.409251 / 2.142072 (0.267178) | 0.822797 / 4.805227 (-3.982430) | 0.166648 / 6.500664 (-6.334016) | 0.056350 / 0.075469 (-0.019119) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.397798 / 1.841788 (-0.443989) | 20.549399 / 8.074308 (12.475091) | 19.118168 / 10.191392 (8.926776) | 0.216361 / 0.680424 (-0.464063) | 0.027064 / 0.534201 (-0.507136) | 0.410762 / 0.579283 (-0.168521) | 0.559225 / 0.434364 (0.124861) | 0.468028 / 0.540337 (-0.072309) | 0.691520 / 1.386936 (-0.695416) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006463 / 0.011353 (-0.004890) | 0.003879 / 0.011008 (-0.007130) | 0.058723 / 0.038508 (0.020215) | 0.057202 / 0.023109 (0.034092) | 0.344397 / 0.275898 (0.068499) | 0.360388 / 0.323480 (0.036908) | 0.005502 / 0.007986 (-0.002483) | 0.004101 / 0.004328 (-0.000227) | 0.058168 / 0.004250 (0.053917) | 0.059112 / 0.037052 (0.022060) | 0.362206 / 0.258489 (0.103717) | 0.386444 / 0.293841 (0.092603) | 0.036613 / 0.128546 (-0.091934) | 0.010482 / 0.075646 (-0.065165) | 0.065850 / 0.419271 (-0.353421) | 0.046528 / 0.043533 (0.002995) | 0.349568 / 0.255139 (0.094429) | 0.360181 / 0.283200 (0.076981) | 0.029030 / 0.141683 (-0.112653) | 1.314569 / 1.452155 (-0.137586) | 1.422393 / 1.492716 (-0.070324) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.281554 / 0.018006 (0.263548) | 0.608018 / 0.000490 (0.607528) | 0.004568 / 0.000200 (0.004368) | 0.000182 / 0.000054 (0.000127) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023515 / 0.037411 (-0.013896) | 0.072994 / 0.014526 (0.058468) | 0.080688 / 0.176557 (-0.095868) | 0.125904 / 0.737135 (-0.611232) | 0.085457 / 0.296338 (-0.210882) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471530 / 0.215209 (0.256321) | 4.796197 / 2.077655 (2.718542) | 2.189181 / 1.504120 (0.685061) | 1.886649 / 1.541195 (0.345454) | 1.871067 / 1.468490 (0.402577) | 0.661043 / 4.584777 (-3.923734) | 4.344027 / 3.745712 (0.598315) | 3.656967 / 5.269862 (-1.612895) | 2.286033 / 4.565676 (-2.279644) | 0.079146 / 0.424275 (-0.345129) | 0.006840 / 0.007607 (-0.000767) | 0.588750 / 0.226044 (0.362706) | 6.301286 / 2.268929 (4.032357) | 3.074702 / 55.444624 (-52.369923) | 2.398739 / 6.876477 (-4.477738) | 2.555057 / 2.142072 (0.412985) | 0.874189 / 4.805227 (-3.931038) | 0.191423 / 6.500664 (-6.309241) | 0.061227 / 0.075469 (-0.014242) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.472763 / 1.841788 (-0.369024) | 19.441304 / 8.074308 (11.366996) | 15.974276 / 10.191392 (5.782884) | 0.172503 / 0.680424 (-0.507921) | 0.027016 / 0.534201 (-0.507185) | 0.356085 / 0.579283 (-0.223198) | 0.473251 / 0.434364 (0.038887) | 0.427949 / 0.540337 (-0.112388) | 0.588924 / 1.386936 (-0.798013) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0973da6e60ac7c1d24229ba6aa6881747b21858a \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006166 / 0.011353 (-0.005187) | 0.003558 / 0.011008 (-0.007450) | 0.080576 / 0.038508 (0.042068) | 0.066542 / 0.023109 (0.043432) | 0.323997 / 0.275898 (0.048099) | 0.369828 / 0.323480 (0.046348) | 0.004896 / 0.007986 (-0.003090) | 0.002909 / 0.004328 (-0.001419) | 0.062553 / 0.004250 (0.058302) | 0.049795 / 0.037052 (0.012742) | 0.321369 / 0.258489 (0.062880) | 0.422860 / 0.293841 (0.129019) | 0.027394 / 0.128546 (-0.101152) | 0.007954 / 0.075646 (-0.067693) | 0.264122 / 0.419271 (-0.155149) | 0.044881 / 0.043533 (0.001349) | 0.316702 / 0.255139 (0.061563) | 0.374718 / 0.283200 (0.091518) | 0.021728 / 0.141683 (-0.119955) | 1.394456 / 1.452155 (-0.057699) | 1.474936 / 1.492716 (-0.017780) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191902 / 0.018006 (0.173896) | 0.430468 / 0.000490 (0.429979) | 0.003790 / 0.000200 (0.003590) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024974 / 0.037411 (-0.012438) | 0.073053 / 0.014526 (0.058527) | 0.083801 / 0.176557 (-0.092756) | 0.143457 / 0.737135 (-0.593678) | 0.085099 / 0.296338 (-0.211240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428411 / 0.215209 (0.213202) | 4.278077 / 2.077655 (2.200422) | 2.230039 / 1.504120 (0.725919) | 2.057191 / 1.541195 (0.515996) | 2.120109 / 1.468490 (0.651619) | 0.495242 / 4.584777 (-4.089535) | 3.031299 / 3.745712 (-0.714413) | 2.802685 / 5.269862 (-2.467176) | 1.839828 / 4.565676 (-2.725849) | 0.056875 / 0.424275 (-0.367401) | 0.006446 / 0.007607 (-0.001161) | 0.498958 / 0.226044 (0.272913) | 4.980440 / 2.268929 (2.711511) | 2.659659 / 55.444624 (-52.784965) | 2.315174 / 6.876477 (-4.561303) | 2.475920 / 2.142072 (0.333848) | 0.586946 / 4.805227 (-4.218282) | 0.124291 / 6.500664 (-6.376373) | 0.060701 / 0.075469 (-0.014768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245062 / 1.841788 (-0.596725) | 18.201444 / 8.074308 (10.127136) | 13.723271 / 10.191392 (3.531879) | 0.130203 / 0.680424 (-0.550221) | 0.016773 / 0.534201 (-0.517428) | 0.332909 / 0.579283 (-0.246374) | 0.347469 / 0.434364 (-0.086895) | 0.381364 / 0.540337 (-0.158973) | 0.541723 / 
1.386936 (-0.845213) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005934 / 0.011353 (-0.005419) | 0.003573 / 0.011008 (-0.007435) | 0.062195 / 0.038508 (0.023687) | 0.059026 / 0.023109 (0.035917) | 0.413993 / 0.275898 (0.138095) | 0.459552 / 0.323480 (0.136072) | 0.004610 / 0.007986 (-0.003376) | 0.002907 / 0.004328 (-0.001421) | 0.062983 / 0.004250 (0.058733) | 0.047797 / 0.037052 (0.010745) | 0.415461 / 0.258489 (0.156972) | 0.417424 / 0.293841 (0.123583) | 0.027098 / 0.128546 (-0.101449) | 0.008106 / 0.075646 (-0.067540) | 0.067600 / 0.419271 (-0.351672) | 0.041432 / 0.043533 (-0.002101) | 0.407861 / 0.255139 (0.152722) | 0.430774 / 0.283200 (0.147575) | 0.020738 / 0.141683 (-0.120945) | 1.435127 / 1.452155 (-0.017028) | 1.486961 / 1.492716 (-0.005755) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231174 / 0.018006 (0.213168) | 0.421208 / 0.000490 (0.420718) | 0.005411 / 0.000200 (0.005211) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025362 / 0.037411 (-0.012049) | 0.078534 / 0.014526 (0.064008) | 0.085304 / 0.176557 (-0.091252) | 0.139048 / 0.737135 (-0.598087) | 0.087015 / 0.296338 (-0.209323) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448506 / 0.215209 (0.233297) | 4.486694 / 2.077655 (2.409039) | 2.488022 / 1.504120 (0.983902) | 2.325321 / 1.541195 (0.784126) | 2.381311 / 1.468490 (0.912821) 
| 0.502102 / 4.584777 (-4.082675) | 3.018326 / 3.745712 (-0.727386) | 2.824922 / 5.269862 (-2.444940) | 1.857414 / 4.565676 (-2.708263) | 0.057514 / 0.424275 (-0.366761) | 0.006829 / 0.007607 (-0.000779) | 0.521939 / 0.226044 (0.295895) | 5.224393 / 2.268929 (2.955465) | 2.933132 / 55.444624 (-52.511492) | 2.661187 / 6.876477 (-4.215290) | 2.781950 / 2.142072 (0.639878) | 0.592927 / 4.805227 (-4.212300) | 0.126685 / 6.500664 (-6.373979) | 0.064188 / 0.075469 (-0.011281) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.351107 / 1.841788 (-0.490681) | 18.344453 / 8.074308 (10.270145) | 13.838788 / 10.191392 (3.647396) | 0.157881 / 0.680424 (-0.522543) | 0.016636 / 0.534201 (-0.517565) | 0.331597 / 0.579283 (-0.247686) | 0.345573 / 0.434364 (-0.088791) | 0.397361 / 0.540337 (-0.142976) | 0.534289 / 1.386936 (-0.852647) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#582e722a76534904c0f3038d32ebb8db88ce9128 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006399 / 0.011353 (-0.004954) | 0.003872 / 0.011008 (-0.007136) | 0.083722 / 0.038508 (0.045214) | 0.068845 / 0.023109 (0.045736) | 0.329112 / 0.275898 (0.053214) | 0.343295 / 0.323480 (0.019815) | 0.005137 / 0.007986 (-0.002849) | 0.003303 / 0.004328 (-0.001026) | 0.064495 / 0.004250 (0.060245) | 0.051448 / 0.037052 (0.014395) | 0.322554 / 0.258489 (0.064065) | 0.361934 / 0.293841 (0.068093) | 0.030821 / 0.128546 (-0.097726) | 0.008482 / 0.075646 (-0.067164) | 0.288136 / 0.419271 (-0.131135) | 0.051935 / 0.043533 (0.008402) | 0.308283 / 0.255139 (0.053144) | 0.343421 / 0.283200 (0.060221) | 0.023639 / 0.141683 (-0.118044) | 1.485442 / 1.452155 (0.033288) | 1.533282 / 1.492716 (0.040565) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218163 / 0.018006 (0.200157) | 0.464473 / 0.000490 (0.463983) | 0.003097 / 0.000200 (0.002897) | 
0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028650 / 0.037411 (-0.008761) | 0.083295 / 0.014526 (0.068769) | 0.096468 / 0.176557 (-0.080088) | 0.152086 / 0.737135 (-0.585050) | 0.102586 / 0.296338 (-0.193752) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.393038 / 0.215209 (0.177829) | 3.925514 / 2.077655 (1.847859) | 1.938419 / 1.504120 (0.434300) | 1.760265 / 1.541195 (0.219071) | 1.810024 / 1.468490 (0.341534) | 0.486232 / 4.584777 (-4.098545) | 3.618747 / 3.745712 (-0.126965) | 3.206950 / 5.269862 (-2.062912) | 1.999240 / 4.565676 (-2.566436) | 0.056986 / 0.424275 (-0.367289) | 0.007193 / 0.007607 (-0.000415) | 0.469313 / 0.226044 (0.243269) | 4.688670 / 2.268929 (2.419741) | 2.400332 / 55.444624 (-53.044292) | 2.074197 / 6.876477 (-4.802279) | 2.290823 / 2.142072 (0.148751) | 0.582339 / 4.805227 (-4.222888) | 0.134127 / 6.500664 (-6.366537) | 0.061061 / 0.075469 (-0.014408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272782 / 1.841788 (-0.569006) | 19.463375 / 8.074308 (11.389067) | 14.306819 / 10.191392 (4.115427) | 0.164608 / 0.680424 (-0.515816) | 0.018626 / 0.534201 (-0.515575) | 0.395225 / 0.579283 (-0.184058) | 0.408984 / 0.434364 (-0.025380) | 0.463364 / 0.540337 (-0.076974) | 0.630425 / 1.386936 (-0.756511) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006465 / 0.011353 (-0.004888) | 0.003975 / 0.011008 (-0.007033) | 0.063643 / 0.038508 (0.025134) | 0.075214 / 0.023109 (0.052105) | 0.361734 / 0.275898 (0.085836) | 0.396664 / 0.323480 (0.073184) | 0.005251 / 0.007986 (-0.002735) | 0.003249 / 0.004328 (-0.001080) | 0.063841 / 0.004250 (0.059591) | 0.054504 / 0.037052 (0.017451) | 0.374791 / 0.258489 (0.116302) | 0.399205 / 0.293841 (0.105364) | 0.031355 / 0.128546 (-0.097192) | 0.008483 / 0.075646 (-0.067163) | 0.070234 / 0.419271 (-0.349037) | 0.048336 / 0.043533 (0.004803) | 0.373484 / 0.255139 (0.118345) | 0.382174 / 0.283200 (0.098974) | 0.022560 / 0.141683 (-0.119123) | 1.449799 / 1.452155 (-0.002355) | 1.525255 / 1.492716 (0.032539) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228350 / 0.018006 (0.210343) | 0.444344 / 0.000490 (0.443855) | 0.003699 / 0.000200 (0.003499) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030681 / 0.037411 (-0.006731) | 0.087340 / 0.014526 (0.072814) | 0.098636 / 0.176557 (-0.077920) | 0.151665 / 0.737135 (-0.585471) | 0.100840 / 0.296338 (-0.195498) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417857 / 0.215209 (0.202648) | 4.168407 / 2.077655 (2.090752) | 2.201758 / 1.504120 (0.697638) | 1.997834 / 1.541195 (0.456639) | 2.127693 / 1.468490 (0.659202) | 0.486429 / 4.584777 (-4.098348) | 3.676335 / 3.745712 (-0.069378) | 3.226268 / 5.269862 (-2.043594) | 2.027255 / 4.565676 (-2.538422) | 0.056759 / 0.424275 (-0.367516) | 0.007628 / 0.007607 (0.000021) | 0.500482 / 0.226044 (0.274438) | 4.996236 / 2.268929 (2.727307) | 2.628884 / 55.444624 (-52.815740) | 2.347611 / 6.876477 (-4.528866) | 2.551328 / 2.142072 (0.409255) | 0.582449 / 4.805227 (-4.222778) | 0.132844 / 6.500664 (-6.367821) | 0.061791 / 0.075469 (-0.013678) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.373718 / 1.841788 (-0.468070) | 19.921217 / 8.074308 (11.846909) | 14.209642 / 10.191392 (4.018250) | 0.185334 / 0.680424 (-0.495090) | 0.018228 / 0.534201 (-0.515973) | 0.395549 / 0.579283 (-0.183734) | 0.404446 / 0.434364 (-0.029918) | 0.472456 / 0.540337 (-0.067882) | 0.622739 / 1.386936 (-0.764197) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33f736eafa0f77de03aa6894ea4a6c923702e5d1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006007 / 0.011353 (-0.005346) | 0.003588 / 0.011008 (-0.007420) | 0.080334 / 0.038508 (0.041826) | 0.058932 / 0.023109 (0.035823) | 0.404613 / 0.275898 (0.128715) | 0.438377 / 0.323480 (0.114897) | 0.003468 / 0.007986 (-0.004518) | 0.003702 / 0.004328 (-0.000627) | 0.062936 / 0.004250 (0.058686) | 0.047987 / 0.037052 (0.010934) | 0.411409 / 0.258489 (0.152920) | 0.450244 / 0.293841 (0.156403) | 0.027007 / 0.128546 (-0.101539) | 0.007932 / 0.075646 (-0.067714) | 0.261390 / 0.419271 (-0.157882) | 0.044992 / 0.043533 (0.001459) | 0.409730 / 0.255139 (0.154591) | 0.433331 / 0.283200 (0.150131) | 0.020446 / 0.141683 (-0.121237) | 1.425418 / 1.452155 (-0.026736) | 1.479242 / 1.492716 (-0.013475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187375 / 0.018006 (0.169368) | 0.428532 / 0.000490 (0.428043) | 0.003406 / 0.000200 (0.003206) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024390 / 0.037411 (-0.013022) | 0.072571 / 0.014526 (0.058045) | 0.083513 / 0.176557 (-0.093044) | 0.144395 / 0.737135 (-0.592741) | 0.084813 / 0.296338 (-0.211526) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409176 / 0.215209 
(0.193967) | 4.078082 / 2.077655 (2.000428) | 1.913596 / 1.504120 (0.409476) | 1.718470 / 1.541195 (0.177275) | 1.753106 / 1.468490 (0.284616) | 0.494167 / 4.584777 (-4.090610) | 3.029531 / 3.745712 (-0.716181) | 2.807331 / 5.269862 (-2.462531) | 1.839471 / 4.565676 (-2.726206) | 0.057169 / 0.424275 (-0.367106) | 0.006433 / 0.007607 (-0.001175) | 0.482666 / 0.226044 (0.256621) | 4.817601 / 2.268929 (2.548673) | 2.449967 / 55.444624 (-52.994658) | 2.113891 / 6.876477 (-4.762586) | 2.399293 / 2.142072 (0.257221) | 0.578903 / 4.805227 (-4.226324) | 0.124306 / 6.500664 (-6.376358) | 0.061572 / 0.075469 (-0.013897) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254692 / 1.841788 (-0.587096) | 18.414049 / 8.074308 (10.339741) | 13.992059 / 10.191392 (3.800667) | 0.146671 / 0.680424 (-0.533753) | 0.016925 / 0.534201 (-0.517275) | 0.333124 / 0.579283 (-0.246159) | 0.348007 / 0.434364 (-0.086357) | 0.378519 / 0.540337 (-0.161819) | 0.532540 / 1.386936 (-0.854396) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006050 / 0.011353 (-0.005303) | 0.003614 / 0.011008 (-0.007394) | 0.061707 / 0.038508 (0.023199) | 0.062874 / 0.023109 (0.039765) | 0.364760 / 0.275898 (0.088862) | 0.398136 / 0.323480 (0.074656) | 0.005598 / 0.007986 (-0.002388) | 0.002836 / 0.004328 (-0.001493) | 0.061880 / 0.004250 (0.057630) | 0.048165 / 0.037052 (0.011113) | 0.372656 / 0.258489 (0.114167) | 0.403967 / 0.293841 (0.110126) | 0.027046 / 0.128546 (-0.101501) | 0.008091 / 0.075646 (-0.067555) | 0.066783 / 0.419271 (-0.352489) | 0.041186 / 0.043533 (-0.002347) | 0.376009 / 0.255139 (0.120870) | 0.391769 / 0.283200 (0.108569) | 0.021020 / 0.141683 (-0.120663) | 1.514593 / 1.452155 (0.062438) | 1.548506 / 1.492716 (0.055790) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237610 / 0.018006 (0.219604) | 0.434274 / 0.000490 (0.433784) | 0.009720 / 0.000200 (0.009520) | 0.000098 / 0.000054 
(0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025605 / 0.037411 (-0.011807) | 0.078971 / 0.014526 (0.064445) | 0.088154 / 0.176557 (-0.088403) | 0.139112 / 0.737135 (-0.598023) | 0.088890 / 0.296338 (-0.207449) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420027 / 0.215209 (0.204818) | 4.189493 / 2.077655 (2.111838) | 2.143907 / 1.504120 (0.639787) | 1.967032 / 1.541195 (0.425837) | 2.011845 / 1.468490 (0.543355) | 0.496692 / 4.584777 (-4.088085) | 3.025456 / 3.745712 (-0.720256) | 2.828436 / 5.269862 (-2.441426) | 1.860673 / 4.565676 (-2.705003) | 0.057199 / 0.424275 (-0.367076) | 0.006770 / 0.007607 (-0.000838) | 0.491281 / 0.226044 (0.265236) | 4.918065 / 2.268929 (2.649136) | 2.593172 / 55.444624 (-52.851452) | 2.250750 / 6.876477 (-4.625727) | 2.406235 / 2.142072 (0.264162) | 0.588648 / 4.805227 (-4.216579) | 0.125635 / 6.500664 (-6.375029) | 0.061697 / 0.075469 (-0.013773) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.374065 / 1.841788 (-0.467722) | 18.439315 / 8.074308 (10.365007) | 14.031660 / 10.191392 (3.840268) | 0.153665 / 0.680424 (-0.526759) | 0.016980 / 0.534201 (-0.517221) | 0.331799 / 0.579283 (-0.247484) | 0.343201 / 0.434364 (-0.091163) | 0.392445 / 0.540337 (-0.147892) | 0.530387 / 1.386936 (-0.856549) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33f736eafa0f77de03aa6894ea4a6c923702e5d1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008189 / 0.011353 (-0.003164) | 0.004598 / 0.011008 (-0.006410) | 0.102199 / 0.038508 (0.063691) | 0.077961 / 0.023109 (0.054852) | 0.364936 / 0.275898 (0.089038) | 0.402606 / 0.323480 (0.079126) | 0.005522 / 0.007986 (-0.002464) | 0.004007 / 0.004328 (-0.000322) | 0.071560 / 0.004250 (0.067310) | 0.055818 / 0.037052 (0.018765) | 0.378394 / 0.258489 (0.119905) | 0.428990 / 0.293841 (0.135149) | 0.043142 / 0.128546 (-0.085404) | 0.013254 / 0.075646 (-0.062392) | 0.331102 / 0.419271 (-0.088170) | 0.061407 / 0.043533 (0.017875) | 0.387397 / 0.255139 (0.132258) | 0.416062 / 0.283200 (0.132862) | 0.036330 / 0.141683 (-0.105353) | 1.735352 / 1.452155 (0.283198) | 1.773329 / 1.492716 (0.280613) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.188587 / 0.018006 (0.170581) | 0.519506 / 0.000490 (0.519016) | 0.004702 / 0.000200 (0.004502) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027152 / 0.037411 (-0.010260) | 0.094296 / 0.014526 (0.079770) | 0.098155 / 0.176557 (-0.078402) | 0.162541 / 0.737135 (-0.574595) | 0.112092 / 0.296338 (-0.184246) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.537555 / 0.215209 (0.322346) | 5.486821 / 2.077655 (3.409166) | 2.377127 / 1.504120 (0.873008) | 2.073205 / 1.541195 (0.532011) | 2.075130 / 1.468490 (0.606640) | 0.783779 / 4.584777 (-3.800998) | 5.029524 / 3.745712 (1.283812) | 4.382724 / 5.269862 (-0.887138) | 2.836180 / 4.565676 (-1.729496) | 0.108840 / 0.424275 (-0.315435) | 0.008123 / 0.007607 (0.000516) | 0.673460 / 0.226044 (0.447416) | 6.674030 / 2.268929 (4.405102) | 3.208922 / 55.444624 (-52.235702) | 2.464908 / 6.876477 (-4.411568) | 2.661929 / 2.142072 (0.519856) | 0.962529 / 4.805227 (-3.842698) | 0.197974 / 6.500664 (-6.302690) | 0.066656 / 0.075469 (-0.008813) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.430373 / 1.841788 (-0.411415) | 21.180540 / 8.074308 (13.106232) | 19.027491 / 10.191392 (8.836099) | 0.217520 / 0.680424 (-0.462904) | 0.028038 / 0.534201 (-0.506163) | 0.435266 / 0.579283 (-0.144017) | 0.529510 / 0.434364 (0.095147) | 
0.511011 / 0.540337 (-0.029327) | 0.728940 / 1.386936 (-0.657996) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007883 / 0.011353 (-0.003470) | 0.004448 / 0.011008 (-0.006560) | 0.071350 / 0.038508 (0.032842) | 0.075269 / 0.023109 (0.052160) | 0.396705 / 0.275898 (0.120807) | 0.457809 / 0.323480 (0.134329) | 0.005193 / 0.007986 (-0.002792) | 0.003695 / 0.004328 (-0.000633) | 0.078087 / 0.004250 (0.073836) | 0.054276 / 0.037052 (0.017224) | 0.412184 / 0.258489 (0.153695) | 0.452400 / 0.293841 (0.158559) | 0.049762 / 0.128546 (-0.078784) | 0.013206 / 0.075646 (-0.062440) | 0.085985 / 0.419271 (-0.333287) | 0.058837 / 0.043533 (0.015304) | 0.432481 / 0.255139 (0.177342) | 0.433260 / 0.283200 (0.150060) | 0.031190 / 0.141683 (-0.110493) | 1.582707 / 1.452155 (0.130552) | 1.664457 / 1.492716 (0.171741) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223639 / 0.018006 (0.205633) | 0.524388 / 0.000490 (0.523899) | 0.005489 / 0.000200 (0.005289) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030182 / 0.037411 (-0.007230) | 0.089309 / 0.014526 (0.074783) | 0.103306 / 0.176557 (-0.073250) | 0.162624 / 0.737135 (-0.574511) | 0.108957 / 0.296338 (-0.187381) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.577423 / 0.215209 (0.362214) | 5.900154 / 2.077655 (3.822500) | 2.687369 / 1.504120 (1.183249) | 2.513061 / 1.541195 
(0.971866) | 2.506453 / 1.468490 (1.037963) | 0.830838 / 4.584777 (-3.753939) | 5.032195 / 3.745712 (1.286483) | 4.396827 / 5.269862 (-0.873035) | 2.884230 / 4.565676 (-1.681447) | 0.102239 / 0.424275 (-0.322036) | 0.008178 / 0.007607 (0.000571) | 0.710027 / 0.226044 (0.483983) | 7.149626 / 2.268929 (4.880698) | 3.403605 / 55.444624 (-52.041019) | 2.661970 / 6.876477 (-4.214506) | 2.760227 / 2.142072 (0.618154) | 1.043981 / 4.805227 (-3.761246) | 0.195028 / 6.500664 (-6.305636) | 0.065211 / 0.075469 (-0.010258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.581265 / 1.841788 (-0.260522) | 21.640230 / 8.074308 (13.565922) | 19.031860 / 10.191392 (8.840468) | 0.196903 / 0.680424 (-0.483520) | 0.027061 / 0.534201 (-0.507140) | 0.444995 / 0.579283 (-0.134288) | 0.528195 / 0.434364 (0.093831) | 0.521540 / 0.540337 (-0.018797) | 0.730204 / 1.386936 (-0.656732) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33f736eafa0f77de03aa6894ea4a6c923702e5d1 \"CML watermark\")\n" ]
2023-08-03T10:18:32
2023-08-03T15:08:02
2023-08-03T10:24:57
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6115", "html_url": "https://github.com/huggingface/datasets/pull/6115", "diff_url": "https://github.com/huggingface/datasets/pull/6115.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6115.patch", "merged_at": "2023-08-03T10:24:57" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6115/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6114/comments
https://api.github.com/repos/huggingface/datasets/issues/6114/events
https://github.com/huggingface/datasets/issues/6114
1,834,015,584
I_kwDODunzps5tUNtg
6,114
Cache not being used when loading Common Voice 8.0.0
{ "login": "clabornd", "id": 31082141, "node_id": "MDQ6VXNlcjMxMDgyMTQx", "avatar_url": "https://avatars.githubusercontent.com/u/31082141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clabornd", "html_url": "https://github.com/clabornd", "followers_url": "https://api.github.com/users/clabornd/followers", "following_url": "https://api.github.com/users/clabornd/following{/other_user}", "gists_url": "https://api.github.com/users/clabornd/gists{/gist_id}", "starred_url": "https://api.github.com/users/clabornd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clabornd/subscriptions", "organizations_url": "https://api.github.com/users/clabornd/orgs", "repos_url": "https://api.github.com/users/clabornd/repos", "events_url": "https://api.github.com/users/clabornd/events{/privacy}", "received_events_url": "https://api.github.com/users/clabornd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "You can avoid this by using the `revision` parameter in `load_dataset` to always force downloading a specific commit (if not specified it defaults to HEAD, hence the redownload).", "Thanks @mariosasko this works well, looks like I should have read the documentation a bit more carefully. \r\n\r\nIt is still a bit confusing which hash I should provide: passing `revision = c8fd66e85f086e3abb11eeee55b1737a3d1e8487` from https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/commits/main caused the cached version at `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a` to be loaded, so I had to know that it was the previous commit unless I've missed something else." ]
2023-08-02T23:18:11
2023-08-18T23:59:00
2023-08-18T23:59:00
NONE
null
null
null
### Describe the bug I have Common Voice 8.0.0 downloaded in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. The folder contains all the Arrow files, etc., and was used as the cached version the last time I touched the EC2 instance I'm working on. Now, with the same command that downloaded it initially: ``` dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>") ``` it tries to redownload the dataset to `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/05bdc7940b0a336ceeaeef13470c89522c29a8e4494cbeece64fb472a87acb32` ### Steps to reproduce the bug Steps to reproduce the behavior: 1. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")``` 2. the dataset is updated by the maintainers 3. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")``` ### Expected behavior I expect it to use the already downloaded data in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. I'm not sure what's happening in step 2, but if, say, it's an issue with the dataset referenced by "mozilla-foundation/common_voice_8_0" being modified by the maintainers, how would I force `datasets` to point to the original version I downloaded? EDIT: The maintainers had indeed updated the dataset (v 8.0.0). However, I still can't load the dataset from disk instead of redownloading, with, for example: ``` load_dataset(".cache/huggingface/datasets/downloads/extracted/<hash>/cv-corpus-8.0-2022-01-19/en/", "en") > ... > File [~/miniconda3/envs/aa_torch2/lib/python3.10/site-packages/datasets/table.py:1938](.../ python3.10/site-packages/datasets/table.py:1938), in cast_array_to_feature(array, feature, allow_number_to_str) 1937 elif not isinstance(feature, (Sequence, dict, list, tuple)): -> 1938 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) ... 1794 e = e.__context__ -> 1795 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1797 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Environment info datasets==2.7.0 python==3.10.8 OS: AWS Linux
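A minimal sketch of the workaround from the comments above: pin `revision` in `load_dataset` so it resolves the previously downloaded commit instead of HEAD. The commit hash is the one quoted in the discussion; you would substitute the commit that produced your own cached folder.

```python
# Sketch of pinning a dataset revision so `load_dataset` reuses the cached
# commit instead of re-resolving HEAD. The hash below is the one quoted in
# the thread; replace it (and the token placeholder) with your own values.
from datasets import load_dataset

dataset = load_dataset(
    "mozilla-foundation/common_voice_8_0",
    "en",
    revision="c8fd66e85f086e3abb11eeee55b1737a3d1e8487",
    use_auth_token="<mytoken>",
)
```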
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6114/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6113/comments
https://api.github.com/repos/huggingface/datasets/issues/6113/events
https://github.com/huggingface/datasets/issues/6113
1,833,854,030
I_kwDODunzps5tTmRO
6,113
load_dataset() fails with streamlit caching inside docker
{ "login": "fierval", "id": 987574, "node_id": "MDQ6VXNlcjk4NzU3NA==", "avatar_url": "https://avatars.githubusercontent.com/u/987574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fierval", "html_url": "https://github.com/fierval", "followers_url": "https://api.github.com/users/fierval/followers", "following_url": "https://api.github.com/users/fierval/following{/other_user}", "gists_url": "https://api.github.com/users/fierval/gists{/gist_id}", "starred_url": "https://api.github.com/users/fierval/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fierval/subscriptions", "organizations_url": "https://api.github.com/users/fierval/orgs", "repos_url": "https://api.github.com/users/fierval/repos", "events_url": "https://api.github.com/users/fierval/events{/privacy}", "received_events_url": "https://api.github.com/users/fierval/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! This should be fixed in the latest (patch) release (run `pip install -U datasets` to install it). This behavior was due to a bug in our authentication logic." ]
2023-08-02T20:20:26
2023-08-21T18:18:27
2023-08-21T18:18:27
NONE
null
null
null
### Describe the bug When calling `load_dataset` in a Streamlit application running within a Docker container, I get a failure with the error message: EmptyDatasetError: The directory at hf://datasets/fetch-rewards/inc-rings-2000@bea27cf60842b3641eae418f38864a2ec4cde684 doesn't contain any data files Traceback: File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script exec(code, module.__dict__) File "/home/user/app/app.py", line 62, in <module> dashboard() File "/home/user/app/app.py", line 47, in dashboard feat_dict, path_gml = load_data(hf_repo, model_gml_dict[selected_model], hf_token) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 211, in wrapper return cached_func(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 240, in __call__ return self._get_or_create_cached_value(args, kwargs) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 266, in _get_or_create_cached_value return self._handle_cache_miss(cache, value_key, func_args, func_kwargs) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 320, in _handle_cache_miss computed_value = self._info.func(*func_args, **func_kwargs) File "/home/user/app/hf_interface.py", line 16, in load_data hf_dataset = load_dataset(repo_id, use_auth_token=hf_token) File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2109, in load_dataset builder_instance = load_dataset_builder( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1795, in load_dataset_builder dataset_module = dataset_module_factory( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1486, in dataset_module_factory raise e1 from None File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1476, in dataset_module_factory ).get_module() File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1032, in get_module else get_data_patterns(base_path, download_config=self.download_config) File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 458, in get_data_patterns raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None ### Steps to reproduce the bug ```python @st.cache_resource def load_data(repo_id: str, hf_token=None): """Load data from HuggingFace Hub """ hf_dataset = load_dataset(repo_id, use_auth_token=hf_token) hf_dataset = hf_dataset.map(lambda x: json.loads(x["ground_truth"]), remove_columns=["ground_truth"]) return hf_dataset ``` ### Expected behavior I expect the dataset to load. Note: works fine with datasets==2.13.1 ### Environment info datasets==2.14.2, Ubuntu bionic-based Docker container.
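As the comment notes, the failure was an authentication regression fixed in a later patch release. A minimal sketch of the updated loader, assuming a `datasets` release that includes the fix; in 2.14+ the `token` parameter supersedes the deprecated `use_auth_token`:

```python
# pip install -U datasets   # pick up the patch release with the auth fix
from datasets import load_dataset

def load_data(repo_id: str, hf_token=None):
    # In datasets >= 2.14 the keyword is `token`; `use_auth_token` still
    # works but emits a deprecation warning.
    return load_dataset(repo_id, token=hf_token)
```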
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6113/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6112/comments
https://api.github.com/repos/huggingface/datasets/issues/6112/events
https://github.com/huggingface/datasets/issues/6112
1,833,693,299
I_kwDODunzps5tS_Bz
6,112
yaml error using push_to_hub with generated README.md
{ "login": "kevintee", "id": 1643887, "node_id": "MDQ6VXNlcjE2NDM4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/1643887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kevintee", "html_url": "https://github.com/kevintee", "followers_url": "https://api.github.com/users/kevintee/followers", "following_url": "https://api.github.com/users/kevintee/following{/other_user}", "gists_url": "https://api.github.com/users/kevintee/gists{/gist_id}", "starred_url": "https://api.github.com/users/kevintee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevintee/subscriptions", "organizations_url": "https://api.github.com/users/kevintee/orgs", "repos_url": "https://api.github.com/users/kevintee/repos", "events_url": "https://api.github.com/users/kevintee/events{/privacy}", "received_events_url": "https://api.github.com/users/kevintee/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting! This is a bug in converting the `ArrayXD` types to YAML. It will be fixed soon." ]
2023-08-02T18:21:21
2023-08-17T16:53:24
null
NONE
null
null
null
### Describe the bug When I construct a dataset with the following features: ``` features = Features( { "pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)), "input_ids": Sequence(feature=Value(dtype="int64")), "attention_mask": Sequence(Value(dtype="int64")), "tokens": Sequence(Value(dtype="string")), "bbox": Array2D(dtype="int64", shape=(512, 4)), } ) ``` and run `push_to_hub`, the individual `*.parquet` files are pushed, but when trying to upload the auto-generated README, I run into the following error: ``` Traceback (most recent call last): File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status response.raise_for_status() File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/looppayments/multitask_document_classification_dataset/commit/main The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 297, in <module> build_dataset() File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 290, in build_dataset push_to_hub(dataset, "multitask_document_classification_dataset") File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 135, in push_to_hub dataset.push_to_hub(f"looppayments/{dataset_name}", private=True) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5577, in push_to_hub HfApi(endpoint=config.HF_ENDPOINT).upload_file( File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner return fn(self, *args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3221, in upload_file commit_info = self.create_commit( File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner return fn(self, *args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2728, in create_commit hf_raise_for_status(commit_resp, endpoint_name="commit") File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 299, in hf_raise_for_status raise BadRequestError(message, response=response) from e huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-64ca9c3d-2d2bbef354e102482a9a168e;bc00371c-8549-4859-9f41-43ff140ad36e) Bad request for commit endpoint: Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (10:9) 7 | - 3 8 | - 224 9 | - 224 10 | dtype: float64 --------------^ 11 | - name: input_ids 12 | sequence: int64 ``` My guess is that the auto-generated YAML cannot be parsed for some reason.
### Steps to reproduce the bug The description contains most of what's needed to reproduce the issue, but I've added a shortened code snippet: ``` from datasets import Array2D, Array3D, ClassLabel, Dataset, Features, Sequence, Value from PIL import Image from transformers import AutoProcessor features = Features( { "pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)), "input_ids": Sequence(feature=Value(dtype="int64")), "attention_mask": Sequence(Value(dtype="int64")), "tokens": Sequence(Value(dtype="string")), "bbox": Array2D(dtype="int64", shape=(512, 4)), } ) processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False) def preprocess_dataset(rows): # Get images images = [ Image.open(png_filename).convert("RGB") for png_filename in rows["png_filename"] ] encoding = processor( images, rows["tokens"], boxes=rows["bbox"], truncation=True, padding="max_length", ) encoding["tokens"] = rows["tokens"] return encoding dataset = dataset.map( preprocess_dataset, batched=True, batch_size=5, features=features, ) ``` ### Expected behavior Using datasets==2.11.0, I'm able to successfully push_to_hub, no issues, but with datasets==2.14.2, I run into the above error. ### Environment info - `datasets` version: 2.14.2 - Platform: macOS-12.5-arm64-arm-64bit - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
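The comment attributes the failure to how `ArrayXD` shapes (Python tuples) are serialized to YAML. A plausible minimal reproduction of that root cause, assuming PyYAML produces the README metadata: the default dumper tags tuples with a Python-specific YAML tag that a strict parser (like the Hub's) rejects, while plain lists serialize cleanly.

```python
import yaml

shape = (3, 224, 224)  # ArrayXD shapes are stored as tuples

# The default dumper emits a Python-specific tag for tuples:
print(yaml.dump({"shape": shape}))
# shape: !!python/tuple
# - 3
# - 224
# - 224

# Casting the tuple to a list first yields plain, portable YAML:
print(yaml.dump({"shape": list(shape)}))
# shape:
# - 3
# - 224
# - 224
```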
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6112/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6112/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6111/comments
https://api.github.com/repos/huggingface/datasets/issues/6111/events
https://github.com/huggingface/datasets/issues/6111
1,832,781,654
I_kwDODunzps5tPgdW
6,111
raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." )
{ "login": "2catycm", "id": 41530341, "node_id": "MDQ6VXNlcjQxNTMwMzQx", "avatar_url": "https://avatars.githubusercontent.com/u/41530341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/2catycm", "html_url": "https://github.com/2catycm", "followers_url": "https://api.github.com/users/2catycm/followers", "following_url": "https://api.github.com/users/2catycm/following{/other_user}", "gists_url": "https://api.github.com/users/2catycm/gists{/gist_id}", "starred_url": "https://api.github.com/users/2catycm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/2catycm/subscriptions", "organizations_url": "https://api.github.com/users/2catycm/orgs", "repos_url": "https://api.github.com/users/2catycm/repos", "events_url": "https://api.github.com/users/2catycm/events{/privacy}", "received_events_url": "https://api.github.com/users/2catycm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "any idea?", "This should work: `load_dataset(\"path/to/downloaded_repo\")`\r\n\r\n`load_from_disk` is intended to be used on directories created with `Dataset.save_to_disk` or `DatasetDict.save_to_disk`", "> This should work: `load_dataset(\"path/to/downloaded_repo\")`\r\n> \r\n> `load_from_disk` is intended to be used on directories created with `Dataset.save_to_disk` or `DatasetDict.save_to_disk`\r\n\r\nThanks for your help. This works." ]
2023-08-02T09:17:29
2023-08-29T02:00:28
2023-08-29T02:00:28
NONE
null
null
null
### Describe the bug For researchers in some countries or regions, it is usually the case that the download ability of `load_dataset` is disabled due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to the disk (for example, [How to elegantly download hf models, zhihu zhuanlan](https://zhuanlan.zhihu.com/p/475260268) proposed a crawler-based solution, and [Is there any mirror for hf_hub, zhihu answer](https://www.zhihu.com/question/371644077) provided some cloud-based solutions, and [How to avoid pitfalls on Hugging face downloading, zhihu zhuanlan] gave some useful suggestions), and then use `load_from_disk` to get the dataset object. However, even once the local files are on disk, loading them into dataset objects is still buggy. ### Steps to reproduce the bug Steps to reproduce the bug: 1. Found CIFAR dataset in Hugging Face: https://huggingface.co/datasets/cifar100/tree/main 2. Click the ":" button to show the "Clone repository" option, and then follow the prompts on the box: ```bash cd my_directory_absolute git lfs install git clone https://huggingface.co/datasets/cifar100 ls my_directory_absolute/cifar100 # confirm that the directory exists and it is OK. ``` 3. Write a Python file that tries to load the dataset ```python from datasets import load_dataset, load_from_disk dataset = load_from_disk("my_directory_absolute/cifar100") ``` Notice that according to issue #3700, it is wrong to use load_dataset("my_directory_absolute/cifar100"), so we must use load_from_disk instead. 4. Then you will see the error reported: ```log --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Cell In[5], line 9 1 from datasets import load_dataset, load_from_disk ----> 9 dataset = load_from_disk("my_directory_absolute/cifar100") File [~/miniconda3/envs/ai/lib/python3.10/site-packages/datasets/load.py:2232), in load_from_disk(dataset_path, fs, keep_in_memory, storage_options) 2230 return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options) 2231 else: -> 2232 raise FileNotFoundError( 2233 f"Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." 2234 ) FileNotFoundError: Directory my_directory_absolute/cifar100 is neither a `Dataset` directory nor a `DatasetDict` directory. ``` ### Expected behavior The dataset should load successfully. ### Environment info ```bash datasets-cli env ``` -> results: ```txt Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.14.2 - Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3 ```
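As the maintainer's comment resolves it, a repository cloned from the Hub is opened with `load_dataset` pointed at the local directory; `load_from_disk` only understands directories produced by `save_to_disk`. A minimal sketch:

```python
from datasets import load_dataset

# Works for a `git clone`d Hub dataset repository. `load_from_disk` is only
# for directories created with Dataset.save_to_disk / DatasetDict.save_to_disk.
dataset = load_dataset("my_directory_absolute/cifar100")
```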
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6111/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6110/comments
https://api.github.com/repos/huggingface/datasets/issues/6110/events
https://github.com/huggingface/datasets/issues/6110
1,831,110,633
I_kwDODunzps5tJIfp
6,110
[BUG] Dataset initialized from in-memory data does not create cache.
{ "login": "MattYoon", "id": 57797966, "node_id": "MDQ6VXNlcjU3Nzk3OTY2", "avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MattYoon", "html_url": "https://github.com/MattYoon", "followers_url": "https://api.github.com/users/MattYoon/followers", "following_url": "https://api.github.com/users/MattYoon/following{/other_user}", "gists_url": "https://api.github.com/users/MattYoon/gists{/gist_id}", "starred_url": "https://api.github.com/users/MattYoon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MattYoon/subscriptions", "organizations_url": "https://api.github.com/users/MattYoon/orgs", "repos_url": "https://api.github.com/users/MattYoon/repos", "events_url": "https://api.github.com/users/MattYoon/events{/privacy}", "received_events_url": "https://api.github.com/users/MattYoon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This is expected behavior. You must provide `cache_file_name` when performing `.map` on an in-memory dataset for the result to be cached." ]
2023-08-01T11:58:58
2023-08-17T14:03:01
2023-08-17T14:03:00
NONE
null
null
null
### Describe the bug `Dataset` initialized from in-memory data (a dictionary in my case, haven't tested with other types) does not create a cache when processed with the `map` method, unlike a `Dataset` initialized by other methods such as `load_dataset`. ### Steps to reproduce the bug ```python # the code below was run a second time so the map result can be loaded from the cache if it exists from datasets import load_dataset, Dataset dataset = load_dataset("tatsu-lab/alpaca")['train'] dataset = dataset.map(lambda x: {'input': x['input'] + 'hi'}) # some random map print(len(dataset.cache_files)) # 1 # copy the exact same data but initialize from a dictionary memory_dataset = Dataset.from_dict({ 'instruction': dataset['instruction'], 'input': dataset['input'], 'output': dataset['output'], 'text': dataset['text']}) memory_dataset = memory_dataset.map(lambda x: {'input': x['input'] + 'hi'}) # exact same map print(len(memory_dataset.cache_files)) # Map: 100%|██████████| 52002[/52002] # 0 ``` ### Expected behavior The `map` function should create a cache regardless of how the `Dataset` was created. ### Environment info - `datasets` version: 2.14.2 - Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
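Per the comment, this is expected: an in-memory dataset has no cache directory of its own, so `map` only caches when given an explicit cache file. A minimal sketch; the file path is hypothetical:

```python
from datasets import Dataset

memory_dataset = Dataset.from_dict({"input": ["a", "b", "c"]})

# Passing an explicit cache file makes `map` on an in-memory dataset write
# (and later reuse) its result:
memory_dataset = memory_dataset.map(
    lambda x: {"input": x["input"] + "hi"},
    cache_file_name="/tmp/map_cache.arrow",  # hypothetical path
)
print(len(memory_dataset.cache_files))  # now non-zero
```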
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6110/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6109/comments
https://api.github.com/repos/huggingface/datasets/issues/6109/events
https://github.com/huggingface/datasets/issues/6109
1,830,753,793
I_kwDODunzps5tHxYB
6,109
Problems in downloading Amazon reviews from HF
{ "login": "610v4nn1", "id": 52964960, "node_id": "MDQ6VXNlcjUyOTY0OTYw", "avatar_url": "https://avatars.githubusercontent.com/u/52964960?v=4", "gravatar_id": "", "url": "https://api.github.com/users/610v4nn1", "html_url": "https://github.com/610v4nn1", "followers_url": "https://api.github.com/users/610v4nn1/followers", "following_url": "https://api.github.com/users/610v4nn1/following{/other_user}", "gists_url": "https://api.github.com/users/610v4nn1/gists{/gist_id}", "starred_url": "https://api.github.com/users/610v4nn1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/610v4nn1/subscriptions", "organizations_url": "https://api.github.com/users/610v4nn1/orgs", "repos_url": "https://api.github.com/users/610v4nn1/repos", "events_url": "https://api.github.com/users/610v4nn1/events{/privacy}", "received_events_url": "https://api.github.com/users/610v4nn1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @610v4nn1.\r\n\r\nIndeed, the source data files are no longer available. We have contacted the authors of the dataset and they report that Amazon has decided to stop distributing the multilingual reviews dataset.\r\n\r\nWe are adding a notification about this issue to the dataset card.\r\n\r\nSee: https://huggingface.co/datasets/amazon_reviews_multi/discussions/4#64c3898db63057f1fd3ce1a0 " ]
2023-08-01T08:38:29
2023-08-02T07:12:07
2023-08-02T07:12:07
NONE
null
null
null
### Describe the bug I have a script downloading `amazon_reviews_multi`. When the download starts, I get ``` Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 243B [00:00, 1.43MB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.54s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 842.40it/s] Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 243B [00:00, 928kB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.42s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 832.70it/s] Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 243B [00:00, 1.81MB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.40s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 1294.14it/s] Generating train split: 0%| | 0/200000 [00:00<?, ? examples/s] ``` The file is clearly too small to contain the requested dataset; in fact, it contains an error message: ``` <?xml version="1.0" encoding="UTF-8"?> <Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>AGJWSY3ZADT2QVWE</RequestId><HostId>Gx1O2KXnxtQFqvzDLxyVSTq3+TTJuTnuVFnJL3SP89Yp8UzvYLPTVwd1PpniE4EvQzT3tCaqEJw=</HostId></Error> ``` Obviously, the script fails: ``` > raise DatasetGenerationError("An error occurred while generating the dataset") from e E datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug 1. load_dataset("amazon_reviews_multi", name="en", split="train", cache_dir="ADDYOURPATHHERE") ### Expected behavior I would expect the dataset to be downloaded and processed ### Environment info * The problem is present with both `datasets` 2.12.0 and 2.14.2 * Python version 3.10.12
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6109/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/6108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6108/comments
https://api.github.com/repos/huggingface/datasets/issues/6108/events
https://github.com/huggingface/datasets/issues/6108
1,830,347,187
I_kwDODunzps5tGOGz
6,108
Loading local datasets got strangely stuck
{ "login": "LoveCatc", "id": 48412571, "node_id": "MDQ6VXNlcjQ4NDEyNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/48412571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LoveCatc", "html_url": "https://github.com/LoveCatc", "followers_url": "https://api.github.com/users/LoveCatc/followers", "following_url": "https://api.github.com/users/LoveCatc/following{/other_user}", "gists_url": "https://api.github.com/users/LoveCatc/gists{/gist_id}", "starred_url": "https://api.github.com/users/LoveCatc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LoveCatc/subscriptions", "organizations_url": "https://api.github.com/users/LoveCatc/orgs", "repos_url": "https://api.github.com/users/LoveCatc/repos", "events_url": "https://api.github.com/users/LoveCatc/events{/privacy}", "received_events_url": "https://api.github.com/users/LoveCatc/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Yesterday I waited for more than 12 hours to make sure it was really **stuck** instead of proceeding too slow.", "I've had similar weird issues with `load_dataset` as well. Not multiple files, but dataset is quite big, about 50G.", "We use a generic multiprocessing code, so there is little we can do about this - unfortunately, turning off multiprocessing seems to be the only solution. Multithreading would make our code easier to maintain and (most likely) avoid issues such as this one, but we cannot use it until the GIL is dropped (no-GIL Python should be released in 2024, so we can start exploring this then)" ]
2023-08-01T02:28:06
2023-08-17T17:36:45
null
NONE
null
null
null
### Describe the bug I try to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a JSON structure containing only one key, `text` (it is a dataset for an NLP model). The code snippet is as: ```python ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=16)['train'] ``` However, I found that the loading process can get stuck -- the progress bar `Generating train split` no longer proceeds. When I was trying to find the cause and solution, I found a really strange behavior. If I load the dataset in this way: ```python dlist = list() for _ in LIST_OF_FILE_PATHS: dlist.append(load_dataset("json", data_files=_)['train']) ds = concatenate_datasets(dlist) ``` I can actually successfully load all the files despite the slow speed. But if I load them in a batch as above, things go wrong. I did try to use Control-C to trace the stuck point, but the program cannot be terminated in this way when `num_proc` is set to `None`. The only thing I can do is use Control-Z to suspend it and then kill it. If I use more than 2 CPUs, a Control-C would simply cause the following error: ```bash ^C Process ForkPoolWorker-1: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 314, in _bootstrap self.run() File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 114, in worker task = get() File "/usr/local/lib/python3.10/dist-packages/multiprocess/queues.py", line 368, in get res = self._reader.recv_bytes() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 224, in recv_bytes buf = self._recv_bytes(maxlength) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes buf = self._recv(4) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv chunk = read(handle, remaining) KeyboardInterrupt Generating train split: 92431 examples [01:23, 1104.25 examples/s] Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1373, in iflatmap_unordered yield queue.get(timeout=0.05) File "<string>", line 2, in get File "/usr/local/lib/python3.10/dist-packages/multiprocess/managers.py", line 818, in _callmethod kind, result = conn.recv() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 258, in recv buf = self._recv_bytes() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes buf = self._recv(4) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv chunk = read(handle, remaining) KeyboardInterrupt During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/data/liyongyuan/source/batch_load.py", line 11, in <module> a = load_dataset( File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2133, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1049, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1842, in _prepare_split for job_id, done, content in iflatmap_unordered( File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered [async_result.get(timeout=0.05) for async_result in async_results] File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in <listcomp> [async_result.get(timeout=0.05) for async_result in async_results] File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 770, in get raise TimeoutError multiprocess.context.TimeoutError ``` I have validated the basic correctness of these `.jsonl` files. They are correctly formatted (otherwise they could not be loaded individually by `load_dataset`), though some of the JSON entries contain very long text (more than 1e7 characters). I do not know if this could be the problem. And there should not be any bottleneck in system resources. The whole dataset is ~300GB, and I am using a cloud server with plenty of storage and 1 TB of RAM. Thanks for your efforts and patience! Any suggestion or help would be appreciated. ### Steps to reproduce the bug 1. use load_dataset() with `data_files = LIST_OF_FILES` ### Expected behavior All the files should be smoothly loaded. ### Environment info - Datasets: A private dataset. ~2500 `.jsonl` files. ~300GB in total. Each JSON structure only contains one key: `text`. Format checked. - `datasets` version: 2.14.2 - Platform: Linux-4.19.91-014.kangaroo.alios7.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - PyArrow version: 10.0.1.dev0+ga6eabc2b.d20230609 - Pandas version: 1.5.2
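Per the maintainer's comment, the only reliable workaround is to disable multiprocessing. A minimal sketch of both workarounds mentioned above; `LIST_OF_FILE_PATHS` is the reporter's placeholder for the actual file list:

```python
from datasets import load_dataset, concatenate_datasets

# Workaround 1: a single call without num_proc (multiprocessing disabled).
ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS)["train"]

# Workaround 2 (from the report): load each file separately, then concatenate.
parts = [load_dataset("json", data_files=p)["train"] for p in LIST_OF_FILE_PATHS]
ds = concatenate_datasets(parts)
```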
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6108/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6108/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6107/comments
https://api.github.com/repos/huggingface/datasets/issues/6107/events
https://github.com/huggingface/datasets/pull/6107
1,829,625,320
PR_kwDODunzps5W0rLR
6,107
Fix deprecation of use_auth_token in file_utils
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007678 / 0.011353 (-0.003675) | 0.004233 / 0.011008 (-0.006776) | 0.095934 / 0.038508 (0.057426) | 0.064201 / 0.023109 (0.041092) | 0.345765 / 0.275898 (0.069867) | 0.383089 / 0.323480 (0.059609) | 0.004084 / 0.007986 (-0.003902) | 0.003311 / 0.004328 (-0.001017) | 0.072367 / 0.004250 (0.068117) | 0.048252 / 0.037052 (0.011200) | 0.338340 / 0.258489 (0.079851) | 0.391627 / 0.293841 (0.097786) | 0.045203 / 0.128546 (-0.083343) | 0.013494 / 0.075646 (-0.062153) | 0.314097 / 0.419271 (-0.105174) | 0.058183 / 0.043533 (0.014650) | 0.353946 / 0.255139 (0.098807) | 0.385181 / 0.283200 (0.101981) | 0.033111 / 0.141683 (-0.108572) | 1.578489 / 1.452155 (0.126335) | 1.631660 / 1.492716 (0.138944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202592 / 0.018006 (0.184586) | 0.506450 / 0.000490 (0.505961) | 0.004630 / 0.000200 (0.004430) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024761 / 0.037411 (-0.012651) | 0.086295 / 0.014526 (0.071769) | 0.094063 / 0.176557 (-0.082494) | 0.154189 / 0.737135 (-0.582947) | 0.096273 / 0.296338 (-0.200065) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.581731 / 0.215209 (0.366522) | 5.552020 / 2.077655 (3.474365) | 2.430800 / 1.504120 (0.926680) | 2.130864 / 1.541195 (0.589669) | 2.092802 / 1.468490 
(0.624312) | 0.833956 / 4.584777 (-3.750821) | 4.840859 / 3.745712 (1.095147) | 4.267812 / 5.269862 (-1.002050) | 2.663245 / 4.565676 (-1.902432) | 0.093195 / 0.424275 (-0.331080) | 0.007942 / 0.007607 (0.000335) | 0.651457 / 0.226044 (0.425413) | 6.782986 / 2.268929 (4.514058) | 3.103307 / 55.444624 (-52.341318) | 2.373933 / 6.876477 (-4.502544) | 2.571613 / 2.142072 (0.429540) | 0.981389 / 4.805227 (-3.823839) | 0.199019 / 6.500664 (-6.301645) | 0.065828 / 0.075469 (-0.009641) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.429778 / 1.841788 (-0.412009) | 20.967563 / 8.074308 (12.893255) | 19.329723 / 10.191392 (9.138331) | 0.222048 / 0.680424 (-0.458376) | 0.033507 / 0.534201 (-0.500694) | 0.436801 / 0.579283 (-0.142482) | 0.530197 / 0.434364 (0.095833) | 0.491532 / 0.540337 (-0.048805) | 0.718216 / 1.386936 (-0.668720) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007798 / 0.011353 (-0.003555) | 0.004748 / 0.011008 (-0.006260) | 0.070847 / 0.038508 (0.032339) | 0.069338 / 0.023109 (0.046229) | 0.400890 / 0.275898 (0.124992) | 0.429482 / 0.323480 (0.106002) | 0.006469 / 0.007986 (-0.001517) | 0.003514 / 0.004328 (-0.000814) | 0.069049 / 0.004250 (0.064798) | 0.059800 / 0.037052 (0.022748) | 0.415644 / 0.258489 (0.157155) | 0.432562 / 0.293841 (0.138721) | 0.043778 / 0.128546 (-0.084768) | 0.015141 / 0.075646 (-0.060506) | 0.081521 / 0.419271 (-0.337750) | 0.054692 / 0.043533 (0.011160) | 0.404497 / 0.255139 (0.149358) | 0.419783 / 0.283200 (0.136583) | 0.029588 / 0.141683 (-0.112094) | 1.593506 / 1.452155 (0.141351) | 1.615977 / 1.492716 (0.123261) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.270981 / 0.018006 (0.252975) | 0.522074 / 0.000490 (0.521584) | 0.026568 / 0.000200 (0.026368) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031551 / 0.037411 (-0.005861) | 0.086723 / 0.014526 (0.072197) | 0.103315 / 0.176557 (-0.073242) | 0.154692 / 0.737135 (-0.582443) | 0.099472 / 0.296338 (-0.196866) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.570238 / 0.215209 (0.355029) | 5.655963 / 2.077655 (3.578308) | 2.662670 / 1.504120 (1.158550) | 2.380903 / 1.541195 (0.839709) | 2.409467 / 1.468490 (0.940977) | 0.828055 / 4.584777 (-3.756722) | 4.964698 / 3.745712 (1.218986) | 4.299995 / 5.269862 (-0.969867) | 2.824162 / 4.565676 (-1.741514) | 0.095872 / 0.424275 (-0.328403) | 0.007907 / 0.007607 (0.000300) | 0.701595 / 0.226044 (0.475551) | 7.131965 / 2.268929 (4.863036) | 3.250554 / 55.444624 (-52.194070) | 2.531916 / 6.876477 (-4.344561) | 2.717908 / 2.142072 (0.575835) | 1.014479 / 4.805227 (-3.790748) | 0.223804 / 6.500664 (-6.276861) | 0.071893 / 0.075469 (-0.003576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541702 / 1.841788 (-0.300086) | 21.668219 / 8.074308 (13.593911) | 18.916032 / 10.191392 (8.724640) | 0.205915 / 0.680424 (-0.474508) | 0.026356 / 0.534201 (-0.507845) | 0.429122 / 0.579283 (-0.150161) | 0.506110 / 0.434364 (0.071746) | 0.510148 / 0.540337 (-0.030190) | 0.724699 / 1.386936 (-0.662237) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c4ca93ff86551b398c979862e7be7305725a240b \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006884 / 0.011353 (-0.004469) | 0.004492 / 0.011008 (-0.006516) | 0.085439 / 0.038508 (0.046931) | 0.083905 / 0.023109 (0.060796) | 0.313604 / 0.275898 (0.037706) | 0.354683 / 0.323480 (0.031203) | 0.006535 / 0.007986 (-0.001451) | 0.004318 / 0.004328 (-0.000011) | 0.066129 / 0.004250 (0.061879) | 0.057568 / 0.037052 (0.020516) | 0.317162 / 0.258489 (0.058672) | 0.372501 / 0.293841 (0.078660) | 0.031059 / 0.128546 (-0.097488) | 0.009013 / 0.075646 (-0.066634) | 0.288794 / 0.419271 (-0.130478) | 0.053326 / 0.043533 (0.009793) | 0.314318 / 0.255139 (0.059179) | 0.357505 / 0.283200 (0.074305) | 0.027020 / 0.141683 (-0.114663) | 1.530653 / 1.452155 (0.078498) | 1.599782 / 1.492716 (0.107066) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278788 / 0.018006 (0.260782) | 0.626822 / 0.000490 (0.626333) | 0.003780 / 0.000200 (0.003580) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031703 / 0.037411 (-0.005708) | 0.085654 / 0.014526 (0.071128) | 0.754858 / 0.176557 (0.578301) | 0.212251 / 0.737135 (-0.524885) | 0.171344 / 0.296338 (-0.124994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382291 / 0.215209 (0.167082) | 3.825612 / 2.077655 (1.747958) | 1.874553 / 1.504120 (0.370433) | 1.712574 / 1.541195 (0.171379) | 1.791479 / 1.468490 (0.322989) | 0.481005 / 4.584777 (-4.103772) | 3.530559 / 3.745712 (-0.215153) | 3.395305 / 5.269862 (-1.874557) | 2.133747 / 4.565676 (-2.431930) | 0.056139 / 0.424275 (-0.368136) | 0.007424 / 0.007607 (-0.000183) | 0.458321 / 0.226044 (0.232277) | 4.577665 / 2.268929 (2.308736) | 2.380233 / 55.444624 (-53.064392) | 2.004060 / 6.876477 (-4.872417) | 2.290712 / 2.142072 (0.148639) | 0.570157 / 4.805227 (-4.235070) | 0.131670 / 6.500664 (-6.368994) | 0.060684 / 0.075469 (-0.014785) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294929 / 1.841788 (-0.546858) | 21.386663 / 8.074308 (13.312355) | 14.389440 / 10.191392 (4.198048) | 0.171177 / 0.680424 (-0.509247) | 0.018660 / 0.534201 (-0.515541) | 0.394385 / 0.579283 (-0.184898) | 0.424942 / 0.434364 (-0.009422) | 0.463618 / 0.540337 (-0.076719) | 0.651499 / 
1.386936 (-0.735437) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007079 / 0.011353 (-0.004274) | 0.004615 / 0.011008 (-0.006393) | 0.066300 / 0.038508 (0.027792) | 0.092636 / 0.023109 (0.069527) | 0.399080 / 0.275898 (0.123182) | 0.429873 / 0.323480 (0.106393) | 0.006689 / 0.007986 (-0.001297) | 0.004358 / 0.004328 (0.000029) | 0.067155 / 0.004250 (0.062905) | 0.064040 / 0.037052 (0.026988) | 0.399905 / 0.258489 (0.141416) | 0.448237 / 0.293841 (0.154397) | 0.031985 / 0.128546 (-0.096561) | 0.009053 / 0.075646 (-0.066593) | 0.071904 / 0.419271 (-0.347368) | 0.048759 / 0.043533 (0.005227) | 0.386797 / 0.255139 (0.131658) | 0.411240 / 0.283200 (0.128040) | 0.028568 / 0.141683 (-0.113115) | 1.501037 / 1.452155 (0.048882) | 1.594560 / 1.492716 (0.101844) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300756 / 0.018006 (0.282750) | 0.631220 / 0.000490 (0.630730) | 0.010163 / 0.000200 (0.009963) | 0.000144 / 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033716 / 0.037411 (-0.003695) | 0.093562 / 0.014526 (0.079037) | 0.106975 / 0.176557 (-0.069582) | 0.161919 / 0.737135 (-0.575216) | 0.113397 / 0.296338 (-0.182942) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.410392 / 0.215209 (0.195183) | 4.094411 / 2.077655 (2.016756) | 2.085868 / 1.504120 (0.581748) | 1.959589 / 1.541195 (0.418394) | 2.096683 / 1.468490 (0.628193) | 
0.494593 / 4.584777 (-4.090184) | 3.854302 / 3.745712 (0.108590) | 3.742303 / 5.269862 (-1.527558) | 2.379983 / 4.565676 (-2.185693) | 0.058640 / 0.424275 (-0.365635) | 0.008092 / 0.007607 (0.000484) | 0.486957 / 0.226044 (0.260912) | 4.855784 / 2.268929 (2.586855) | 2.654029 / 55.444624 (-52.790595) | 2.237627 / 6.876477 (-4.638850) | 2.536955 / 2.142072 (0.394882) | 0.622398 / 4.805227 (-4.182829) | 0.139212 / 6.500664 (-6.361452) | 0.062805 / 0.075469 (-0.012664) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.374862 / 1.841788 (-0.466926) | 22.797015 / 8.074308 (14.722707) | 14.393995 / 10.191392 (4.202603) | 0.196603 / 0.680424 (-0.483821) | 0.018602 / 0.534201 (-0.515599) | 0.394568 / 0.579283 (-0.184715) | 0.408792 / 0.434364 (-0.025572) | 0.486706 / 0.540337 (-0.053631) | 0.652365 / 1.386936 (-0.734571) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5713299a88f527ea162a099c2bf2cbceada8fb86 \"CML watermark\")\n" ]
2023-07-31T16:32:01
2023-08-03T10:13:32
2023-08-03T10:04:18
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6107", "html_url": "https://github.com/huggingface/datasets/pull/6107", "diff_url": "https://github.com/huggingface/datasets/pull/6107.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6107.patch", "merged_at": "2023-08-03T10:04:18" }
Fix issues with the deprecation of `use_auth_token` introduced by: - #5996 in functions: - `get_authentication_headers_for_url` - `request_etag` - `get_from_cache` Currently, `TypeError` is raised: https://github.com/huggingface/datasets-server/actions/runs/5711650666/job/15484685570?pr=1588 ``` FAILED tests/job_runners/config/test_parquet_and_info.py::test__is_too_big_external_files[None-None-False] - TypeError: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token' FAILED tests/job_runners/config/test_parquet_and_info.py::test_fill_builder_info[None-False] - libcommon.exceptions.FileSystemError: Could not read the parquet files: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token' ``` Related to: - #6094
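A hypothetical sketch of the caller-side change this PR unblocks: after the rename in #5996, the `file_utils` helpers named above are expected to take `token` rather than `use_auth_token`. The exact keyword and the URL below are assumptions based on the deprecation described here, not a confirmed signature:

```python
from datasets.utils.file_utils import get_authentication_headers_for_url

# Assumed post-deprecation keyword: `token` replaces `use_auth_token`.
headers = get_authentication_headers_for_url(
    "https://huggingface.co/datasets/some/repo/resolve/main/data.parquet",  # hypothetical URL
    token="hf_...",  # hypothetical token
)
```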
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6107/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6107/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6106/comments
https://api.github.com/repos/huggingface/datasets/issues/6106/events
https://github.com/huggingface/datasets/issues/6106
1,829,131,223
I_kwDODunzps5tBlPX
6,106
load local json_file as dataset
{ "login": "CiaoHe", "id": 39040787, "node_id": "MDQ6VXNlcjM5MDQwNzg3", "avatar_url": "https://avatars.githubusercontent.com/u/39040787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CiaoHe", "html_url": "https://github.com/CiaoHe", "followers_url": "https://api.github.com/users/CiaoHe/followers", "following_url": "https://api.github.com/users/CiaoHe/following{/other_user}", "gists_url": "https://api.github.com/users/CiaoHe/gists{/gist_id}", "starred_url": "https://api.github.com/users/CiaoHe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CiaoHe/subscriptions", "organizations_url": "https://api.github.com/users/CiaoHe/orgs", "repos_url": "https://api.github.com/users/CiaoHe/repos", "events_url": "https://api.github.com/users/CiaoHe/events{/privacy}", "received_events_url": "https://api.github.com/users/CiaoHe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! We use PyArrow to read JSON files, and PyArrow doesn't allow different value types in the same column. #5776 should address this.\r\n\r\nIn the meantime, you can combine `Dataset.from_generator` with the above code to cast the values to the same type. ", "Thanks for your help!" ]
2023-07-31T12:53:49
2023-08-18T01:46:35
2023-08-18T01:46:35
NONE
null
null
null
### Describe the bug I tried to load a local JSON file as a dataset but failed to parse it because some columns contain 'float' values. ### Steps to reproduce the bug 1. Load a JSON file in which certain columns are of 'float' type, for example `data = load_dataset("json", data_files=JSON_PATH)` 2. The error is then triggered: `ArrowInvalid: Could not convert '-0.2253' with type str: tried to convert to double` ### Expected behavior Columns of 'float' type should be allowed; at the very least, those columns should be converted to str. I tried to avoid the error by naively converting the float items to str: ```python # if a column's type is not str, convert its values to str mapping = {} for col in keys: if isinstance(dataset[0][col], str): mapping[col] = [row.get(col) for row in dataset] else: mapping[col] = [str(row.get(col)) for row in dataset] ``` ### Environment info - `datasets` version: 2.14.2 - Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.0 - Pandas version: 2.0.1
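As a follow-up to the column-wise conversion in the body above, the resulting `mapping` dict can be turned into a dataset directly. A hedged sketch, assuming `dataset` is a list of dicts parsed from the JSON file and `keys` its column names (the sample rows are invented for illustration):

```python
from datasets import Dataset

# Hypothetical input: rows where one column mixes float and str values.
dataset = [{"score": -0.2253, "text": "a"}, {"score": "n/a", "text": "b"}]
keys = dataset[0].keys()

# Column-wise conversion, mirroring the snippet in the issue body.
mapping = {
    col: [row.get(col) if isinstance(row.get(col), str) else str(row.get(col))
          for row in dataset]
    for col in keys
}

ds = Dataset.from_dict(mapping)  # every column is now uniformly str
```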
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6106/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6105/comments
https://api.github.com/repos/huggingface/datasets/issues/6105/events
https://github.com/huggingface/datasets/pull/6105
1,829,008,430
PR_kwDODunzps5WyiJD
6,105
Fix error when loading from GCP bucket
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006706 / 0.011353 (-0.004647) | 0.004016 / 0.011008 (-0.006992) | 0.083696 / 0.038508 (0.045188) | 0.074340 / 0.023109 (0.051230) | 0.327338 / 0.275898 (0.051440) | 0.366663 / 0.323480 (0.043183) | 0.004052 / 0.007986 (-0.003934) | 0.003423 / 0.004328 (-0.000906) | 0.064576 / 0.004250 (0.060326) | 0.055037 / 0.037052 (0.017985) | 0.325089 / 0.258489 (0.066600) | 0.379986 / 0.293841 (0.086145) | 0.031614 / 0.128546 (-0.096932) | 0.008553 / 0.075646 (-0.067094) | 0.287430 / 0.419271 (-0.131841) | 0.053032 / 0.043533 (0.009499) | 0.318990 / 0.255139 (0.063851) | 0.364426 / 0.283200 (0.081226) | 0.024926 / 0.141683 (-0.116757) | 1.461835 / 1.452155 (0.009680) | 1.557172 / 1.492716 (0.064456) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212430 / 0.018006 (0.194424) | 0.512891 / 0.000490 (0.512402) | 0.004772 / 0.000200 (0.004572) | 0.000132 / 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027873 / 0.037411 (-0.009538) | 0.085598 / 0.014526 (0.071072) | 0.097330 / 0.176557 (-0.079226) | 0.152235 / 0.737135 (-0.584900) | 0.097787 / 0.296338 (-0.198552) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384645 / 0.215209 (0.169436) | 3.841161 / 2.077655 (1.763506) | 
1.863696 / 1.504120 (0.359577) | 1.685082 / 1.541195 (0.143887) | 1.772904 / 1.468490 (0.304414) | 0.480177 / 4.584777 (-4.104599) | 3.601537 / 3.745712 (-0.144175) | 3.273647 / 5.269862 (-1.996214) | 2.014415 / 4.565676 (-2.551261) | 0.056668 / 0.424275 (-0.367607) | 0.007257 / 0.007607 (-0.000350) | 0.458194 / 0.226044 (0.232150) | 4.577311 / 2.268929 (2.308382) | 2.333983 / 55.444624 (-53.110641) | 1.964508 / 6.876477 (-4.911969) | 2.193379 / 2.142072 (0.051307) | 0.577557 / 4.805227 (-4.227670) | 0.133899 / 6.500664 (-6.366765) | 0.060804 / 0.075469 (-0.014665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249490 / 1.841788 (-0.592298) | 19.791875 / 8.074308 (11.717567) | 14.418728 / 10.191392 (4.227336) | 0.167788 / 0.680424 (-0.512636) | 0.018993 / 0.534201 (-0.515208) | 0.396141 / 0.579283 (-0.183142) | 0.412427 / 0.434364 (-0.021937) | 0.456718 / 0.540337 (-0.083619) | 0.641383 / 1.386936 (-0.745553) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006546 / 0.011353 (-0.004807) | 0.004059 / 0.011008 (-0.006949) | 0.064523 / 0.038508 (0.026015) | 0.074988 / 0.023109 (0.051878) | 0.388932 / 0.275898 (0.113034) | 0.424496 / 0.323480 (0.101016) | 0.005226 / 0.007986 (-0.002760) | 0.003409 / 0.004328 (-0.000920) | 0.064284 / 0.004250 (0.060034) | 0.056829 / 0.037052 (0.019777) | 0.386457 / 0.258489 (0.127968) | 0.428063 / 0.293841 (0.134222) | 0.031411 / 0.128546 (-0.097136) | 0.008577 / 0.075646 (-0.067070) | 0.070357 / 0.419271 (-0.348915) | 0.048920 / 0.043533 (0.005388) | 0.385197 / 0.255139 (0.130058) | 0.407167 / 0.283200 (0.123967) | 0.024469 / 0.141683 (-0.117214) | 1.482733 / 1.452155 (0.030578) | 1.539027 / 1.492716 (0.046311) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227532 / 0.018006 (0.209526) | 0.448792 / 0.000490 (0.448302) | 0.004139 / 0.000200 (0.003939) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031004 / 0.037411 (-0.006408) | 0.088163 / 0.014526 (0.073637) | 0.101452 / 0.176557 (-0.075105) | 0.152907 / 0.737135 (-0.584229) | 0.102325 / 0.296338 (-0.194014) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418092 / 0.215209 (0.202883) | 4.162277 / 2.077655 (2.084623) | 2.232987 / 1.504120 (0.728867) | 2.143583 / 1.541195 (0.602388) | 2.246142 / 1.468490 (0.777652) | 0.490181 / 4.584777 (-4.094596) | 3.631514 / 3.745712 (-0.114198) | 3.315025 / 5.269862 (-1.954837) | 2.101853 / 4.565676 (-2.463823) | 0.057905 / 0.424275 (-0.366370) | 0.007686 / 0.007607 (0.000079) | 0.489965 / 0.226044 (0.263921) | 4.894375 / 2.268929 (2.625447) | 2.655459 / 55.444624 (-52.789165) | 2.262211 / 6.876477 (-4.614266) | 2.505335 / 2.142072 (0.363263) | 0.591329 / 4.805227 (-4.213898) | 0.133554 / 6.500664 (-6.367110) | 0.061922 / 0.075469 (-0.013547) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.347483 / 1.841788 (-0.494304) | 20.027011 / 8.074308 (11.952703) | 14.430737 / 10.191392 (4.239345) | 0.165767 / 0.680424 (-0.514657) | 0.018460 / 0.534201 (-0.515741) | 0.393790 / 0.579283 (-0.185494) | 0.407213 / 0.434364 (-0.027151) | 0.474459 / 0.540337 (-0.065879) | 0.635054 / 1.386936 (-0.751882) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7f575111481e2e2f4d4fc9180771797f69ebcc44 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007652 / 0.011353 (-0.003701) | 0.004581 / 0.011008 (-0.006427) | 0.101629 / 0.038508 (0.063121) | 0.090233 / 0.023109 (0.067124) | 0.392789 / 0.275898 (0.116891) | 0.432163 / 0.323480 (0.108683) | 0.004694 / 0.007986 (-0.003292) | 0.003927 / 0.004328 (-0.000401) | 0.076533 / 0.004250 (0.072282) | 0.064442 / 0.037052 (0.027390) | 0.397539 / 0.258489 (0.139050) | 0.441323 / 0.293841 (0.147482) | 0.036278 / 0.128546 (-0.092268) | 0.009810 / 0.075646 (-0.065836) | 0.343537 / 0.419271 (-0.075734) | 0.060273 / 0.043533 (0.016740) | 0.395023 / 0.255139 (0.139884) | 0.427210 / 0.283200 (0.144011) | 0.031717 / 0.141683 (-0.109966) | 1.771221 / 1.452155 (0.319066) | 1.896336 / 1.492716 (0.403620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235081 / 0.018006 (0.217075) | 0.512781 / 0.000490 (0.512292) | 0.004920 / 0.000200 (0.004721) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033525 / 0.037411 (-0.003887) | 0.104416 / 0.014526 (0.089890) | 0.115695 / 0.176557 (-0.060861) | 0.182216 / 0.737135 (-0.554919) | 0.116259 / 0.296338 (-0.180079) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454817 / 0.215209 (0.239608) | 4.527753 / 2.077655 (2.450098) | 2.222273 / 1.504120 (0.718153) | 2.038448 / 1.541195 (0.497253) | 2.179444 / 1.468490 (0.710953) | 0.573665 / 4.584777 (-4.011112) | 4.504943 / 3.745712 (0.759231) | 3.848435 / 5.269862 (-1.421427) | 2.455185 / 4.565676 (-2.110491) | 0.067985 / 0.424275 (-0.356290) | 0.008719 / 0.007607 (0.001112) | 0.552405 / 0.226044 (0.326360) | 5.515251 / 2.268929 (3.246322) | 2.851557 / 55.444624 (-52.593067) | 2.463070 / 6.876477 (-4.413407) | 2.761596 / 2.142072 (0.619524) | 0.688561 / 4.805227 (-4.116667) | 0.159946 / 6.500664 (-6.340718) | 0.075435 / 0.075469 (-0.000034) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505178 / 1.841788 (-0.336610) | 23.555236 / 8.074308 (15.480928) | 17.272759 / 10.191392 (7.081367) | 0.206495 / 0.680424 (-0.473928) | 0.021869 / 0.534201 (-0.512332) | 0.469271 / 0.579283 (-0.110012) | 0.469200 / 0.434364 (0.034837) | 0.542437 / 0.540337 
(0.002100) | 0.792864 / 1.386936 (-0.594072) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008151 / 0.011353 (-0.003202) | 0.004992 / 0.011008 (-0.006016) | 0.079545 / 0.038508 (0.041037) | 0.100234 / 0.023109 (0.077125) | 0.492791 / 0.275898 (0.216893) | 0.511315 / 0.323480 (0.187835) | 0.006878 / 0.007986 (-0.001108) | 0.003807 / 0.004328 (-0.000522) | 0.080876 / 0.004250 (0.076625) | 0.076734 / 0.037052 (0.039681) | 0.518247 / 0.258489 (0.259758) | 0.524202 / 0.293841 (0.230361) | 0.039896 / 0.128546 (-0.088650) | 0.016581 / 0.075646 (-0.059065) | 0.101228 / 0.419271 (-0.318043) | 0.061990 / 0.043533 (0.018457) | 0.490611 / 0.255139 (0.235472) | 0.514930 / 0.283200 (0.231730) | 0.028680 / 0.141683 (-0.113002) | 1.966215 / 1.452155 (0.514061) | 2.047757 / 1.492716 (0.555040) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286807 / 0.018006 (0.268801) | 0.506448 / 0.000490 (0.505959) | 0.005867 / 0.000200 (0.005667) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037141 / 0.037411 (-0.000270) | 0.113232 / 0.014526 (0.098706) | 0.121201 / 0.176557 (-0.055356) | 0.185472 / 0.737135 (-0.551663) | 0.122896 / 0.296338 (-0.173442) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.514491 / 0.215209 (0.299282) | 4.942457 / 2.077655 (2.864802) | 2.533519 / 1.504120 (1.029399) | 2.371011 / 1.541195 (0.829817) | 2.495604 / 
1.468490 (1.027114) | 0.576224 / 4.584777 (-4.008553) | 4.368584 / 3.745712 (0.622872) | 3.885598 / 5.269862 (-1.384263) | 2.443596 / 4.565676 (-2.122080) | 0.068905 / 0.424275 (-0.355371) | 0.009171 / 0.007607 (0.001564) | 0.584977 / 0.226044 (0.358932) | 5.835220 / 2.268929 (3.566291) | 3.189037 / 55.444624 (-52.255588) | 2.753228 / 6.876477 (-4.123249) | 3.009062 / 2.142072 (0.866990) | 0.690179 / 4.805227 (-4.115048) | 0.157981 / 6.500664 (-6.342683) | 0.074518 / 0.075469 (-0.000951) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.599907 / 1.841788 (-0.241880) | 23.853903 / 8.074308 (15.779595) | 17.419796 / 10.191392 (7.228404) | 0.204974 / 0.680424 (-0.475450) | 0.022014 / 0.534201 (-0.512187) | 0.473379 / 0.579283 (-0.105905) | 0.461346 / 0.434364 (0.026982) | 0.564881 / 0.540337 (0.024543) | 0.752933 / 1.386936 (-0.634003) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f49c9ca993fa600fae0e327636d52657328e7ffb \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006547 / 0.011353 (-0.004805) | 0.004020 / 0.011008 (-0.006988) | 0.086828 / 0.038508 (0.048320) | 0.072924 / 0.023109 (0.049815) | 0.312847 / 0.275898 (0.036949) | 0.344605 / 0.323480 (0.021125) | 0.004117 / 0.007986 (-0.003868) | 0.004365 / 0.004328 (0.000037) | 0.066755 / 0.004250 (0.062505) | 0.053248 / 0.037052 (0.016195) | 0.315744 / 0.258489 (0.057255) | 0.362426 / 0.293841 (0.068585) | 0.030732 / 0.128546 (-0.097814) | 0.008516 / 0.075646 (-0.067130) | 0.289927 / 0.419271 (-0.129345) | 0.052115 / 0.043533 (0.008582) | 0.308026 / 0.255139 (0.052887) | 0.343115 / 0.283200 (0.059915) | 0.024131 / 0.141683 (-0.117551) | 1.464290 / 1.452155 (0.012135) | 1.559359 / 1.492716 (0.066642) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216744 / 0.018006 (0.198738) | 0.473156 / 0.000490 (0.472666) | 0.004176 / 0.000200 
(0.003977) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028500 / 0.037411 (-0.008911) | 0.083892 / 0.014526 (0.069366) | 0.131851 / 0.176557 (-0.044705) | 0.162202 / 0.737135 (-0.574933) | 0.127989 / 0.296338 (-0.168349) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404555 / 0.215209 (0.189346) | 4.035989 / 2.077655 (1.958334) | 2.025174 / 1.504120 (0.521054) | 1.835785 / 1.541195 (0.294590) | 1.909819 / 1.468490 (0.441329) | 0.475352 / 4.584777 (-4.109425) | 3.548055 / 3.745712 (-0.197657) | 3.234782 / 5.269862 (-2.035080) | 2.010305 / 4.565676 (-2.555371) | 0.056507 / 0.424275 (-0.367768) | 0.007259 / 0.007607 (-0.000348) | 0.482021 / 0.226044 (0.255977) | 4.818559 / 2.268929 (2.549631) | 2.528765 / 55.444624 (-52.915860) | 2.159804 / 6.876477 (-4.716673) | 2.380640 / 2.142072 (0.238567) | 0.585005 / 4.805227 (-4.220222) | 0.133811 / 6.500664 (-6.366853) | 0.060686 / 0.075469 (-0.014783) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260902 / 1.841788 (-0.580886) | 19.500215 / 8.074308 (11.425907) | 14.164698 / 10.191392 (3.973306) | 0.172492 / 0.680424 (-0.507932) | 0.018221 / 0.534201 (-0.515980) | 0.392609 / 0.579283 (-0.186674) | 0.423265 / 0.434364 (-0.011099) | 0.454705 / 0.540337 (-0.085633) | 0.639856 / 1.386936 (-0.747080) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006656 / 0.011353 (-0.004697) | 0.003903 / 0.011008 (-0.007106) | 0.063780 / 0.038508 (0.025272) | 0.076848 / 0.023109 (0.053739) | 0.379429 / 0.275898 (0.103531) | 0.442554 / 0.323480 (0.119074) | 0.005327 / 0.007986 (-0.002658) | 0.003318 / 0.004328 (-0.001010) | 0.064307 / 0.004250 (0.060056) | 0.057183 / 0.037052 (0.020131) | 0.398163 / 0.258489 (0.139674) | 0.448532 / 0.293841 (0.154691) | 0.031322 / 0.128546 (-0.097224) | 0.008462 / 0.075646 (-0.067184) | 0.070354 / 0.419271 (-0.348917) | 0.048420 / 0.043533 (0.004887) | 0.368304 / 0.255139 (0.113165) | 0.428786 / 0.283200 (0.145587) | 0.023921 / 0.141683 (-0.117762) | 1.499281 / 1.452155 (0.047126) | 1.554448 / 1.492716 (0.061731) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238830 / 0.018006 (0.220824) | 0.464196 / 0.000490 (0.463706) | 0.004812 / 0.000200 (0.004613) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031642 / 0.037411 (-0.005770) | 0.089205 / 0.014526 (0.074679) | 0.101577 / 0.176557 (-0.074980) | 0.154993 / 0.737135 (-0.582142) | 0.102935 / 0.296338 (-0.193403) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415218 / 0.215209 (0.200009) | 4.137711 / 2.077655 (2.060056) | 2.128757 / 1.504120 (0.624637) | 1.961086 / 1.541195 (0.419891) | 2.047552 / 1.468490 (0.579061) | 0.486953 / 4.584777 (-4.097824) | 3.587851 / 3.745712 (-0.157861) | 3.280771 / 5.269862 (-1.989090) | 2.016980 / 4.565676 (-2.548697) | 0.057284 / 0.424275 (-0.366991) | 0.007705 / 0.007607 (0.000097) | 0.492242 / 0.226044 (0.266197) | 4.923213 / 2.268929 (2.654285) | 2.672528 / 55.444624 (-52.772097) | 2.292862 / 6.876477 (-4.583614) | 2.517410 / 2.142072 (0.375337) | 0.614798 / 4.805227 (-4.190429) | 0.149642 / 6.500664 (-6.351023) | 0.062898 / 0.075469 (-0.012571) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.323266 / 1.841788 (-0.518522) | 19.891504 / 8.074308 (11.817196) | 14.115069 / 10.191392 (3.923677) | 0.169859 / 0.680424 (-0.510564) | 0.018538 / 0.534201 (-0.515663) | 0.398456 / 0.579283 (-0.180827) | 0.410111 / 0.434364 (-0.024253) | 0.483198 / 0.540337 (-0.057139) | 0.639283 / 1.386936 (-0.747653) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#01e2194f2aab6aa98686a2069ee5201b69a53c14 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007731 / 0.011353 (-0.003622) | 0.004064 / 0.011008 (-0.006944) | 0.095261 / 0.038508 (0.056753) | 0.081594 / 0.023109 (0.058485) | 0.390413 / 0.275898 (0.114515) | 0.415542 / 0.323480 (0.092063) | 0.006031 / 0.007986 (-0.001954) | 0.003817 / 0.004328 (-0.000512) | 0.066381 / 0.004250 (0.062131) | 0.058262 / 0.037052 (0.021210) | 0.383626 / 0.258489 (0.125137) | 0.443237 / 0.293841 (0.149396) | 0.034358 / 0.128546 (-0.094188) | 0.010002 / 0.075646 (-0.065644) | 0.317472 / 0.419271 (-0.101800) | 0.057428 / 0.043533 (0.013895) | 0.393929 / 0.255139 (0.138790) | 0.444572 / 0.283200 (0.161373) | 0.026295 / 0.141683 (-0.115388) | 1.603639 / 1.452155 (0.151484) | 1.707750 / 1.492716 (0.215034) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222171 / 0.018006 (0.204165) | 0.491762 / 0.000490 (0.491272) | 0.003389 / 0.000200 (0.003189) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029420 / 0.037411 (-0.007991) | 0.086201 / 0.014526 (0.071676) | 0.100150 / 0.176557 (-0.076406) | 0.162338 / 0.737135 (-0.574797) | 0.099349 / 0.296338 (-0.196989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445976 / 0.215209 
(0.230767) | 4.460197 / 2.077655 (2.382542) | 2.211767 / 1.504120 (0.707647) | 1.988740 / 1.541195 (0.447545) | 2.052289 / 1.468490 (0.583799) | 0.570321 / 4.584777 (-4.014456) | 4.148777 / 3.745712 (0.403065) | 3.750977 / 5.269862 (-1.518885) | 2.309443 / 4.565676 (-2.256234) | 0.064552 / 0.424275 (-0.359724) | 0.008167 / 0.007607 (0.000560) | 0.523283 / 0.226044 (0.297238) | 5.349347 / 2.268929 (3.080419) | 2.710292 / 55.444624 (-52.734332) | 2.344252 / 6.876477 (-4.532225) | 2.549903 / 2.142072 (0.407831) | 0.665942 / 4.805227 (-4.139285) | 0.154108 / 6.500664 (-6.346556) | 0.070181 / 0.075469 (-0.005289) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.455733 / 1.841788 (-0.386054) | 21.846958 / 8.074308 (13.772650) | 15.133865 / 10.191392 (4.942473) | 0.199009 / 0.680424 (-0.481415) | 0.021299 / 0.534201 (-0.512902) | 0.421555 / 0.579283 (-0.157729) | 0.437639 / 0.434364 (0.003275) | 0.498568 / 0.540337 (-0.041769) | 0.719649 / 1.386936 (-0.667287) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007858 / 0.011353 (-0.003495) | 0.004629 / 0.011008 (-0.006380) | 0.075701 / 0.038508 (0.037193) | 0.084425 / 0.023109 (0.061316) | 0.436650 / 0.275898 (0.160752) | 0.466046 / 0.323480 (0.142566) | 0.006042 / 0.007986 (-0.001944) | 0.003834 / 0.004328 (-0.000495) | 0.074729 / 0.004250 (0.070478) | 0.065983 / 0.037052 (0.028931) | 0.447239 / 0.258489 (0.188750) | 0.466728 / 0.293841 (0.172887) | 0.035814 / 0.128546 (-0.092733) | 0.009919 / 0.075646 (-0.065727) | 0.081151 / 0.419271 (-0.338120) | 0.057256 / 0.043533 (0.013723) | 0.435609 / 0.255139 (0.180470) | 0.448901 / 0.283200 (0.165701) | 0.026325 / 0.141683 (-0.115357) | 1.745658 / 1.452155 (0.293503) | 1.804137 / 1.492716 (0.311421) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.302551 / 0.018006 (0.284544) | 0.498438 / 0.000490 (0.497948) | 0.038562 / 0.000200 (0.038362) | 0.000411 / 0.000054 
(0.000356) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035573 / 0.037411 (-0.001839) | 0.104957 / 0.014526 (0.090431) | 0.117208 / 0.176557 (-0.059349) | 0.178935 / 0.737135 (-0.558200) | 0.124577 / 0.296338 (-0.171761) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.467076 / 0.215209 (0.251867) | 4.698852 / 2.077655 (2.621197) | 2.453389 / 1.504120 (0.949269) | 2.257378 / 1.541195 (0.716183) | 2.338615 / 1.468490 (0.870125) | 0.542379 / 4.584777 (-4.042398) | 4.066895 / 3.745712 (0.321183) | 3.689540 / 5.269862 (-1.580321) | 2.268997 / 4.565676 (-2.296679) | 0.064754 / 0.424275 (-0.359521) | 0.008866 / 0.007607 (0.001259) | 0.546732 / 0.226044 (0.320687) | 5.487765 / 2.268929 (3.218836) | 2.974126 / 55.444624 (-52.470498) | 2.585492 / 6.876477 (-4.290985) | 2.754417 / 2.142072 (0.612345) | 0.652045 / 4.805227 (-4.153183) | 0.145597 / 6.500664 (-6.355067) | 0.065415 / 0.075469 (-0.010054) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.553970 / 1.841788 (-0.287818) | 22.300954 / 8.074308 (14.226646) | 15.640990 / 10.191392 (5.449598) | 0.170903 / 0.680424 (-0.509521) | 0.021750 / 0.534201 (-0.512451) | 0.455316 / 0.579283 (-0.123967) | 0.455051 / 0.434364 (0.020687) | 0.536174 / 0.540337 (-0.004164) | 0.735930 / 1.386936 (-0.651006) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f68139846c26b43631bd235114854f4bf6cb9954 \"CML watermark\")\n" ]
2023-07-31T11:44:46
2023-08-01T10:48:52
2023-08-01T10:38:54
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6105", "html_url": "https://github.com/huggingface/datasets/pull/6105", "diff_url": "https://github.com/huggingface/datasets/pull/6105.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6105.patch", "merged_at": "2023-08-01T10:38:54" }
Fix `resolve_pattern` for filesystems whose protocol is a tuple. Fix #6100. The buggy lines were introduced by: - #6028
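A minimal sketch, not the actual patch, of normalizing an fsspec protocol that may be a string or a tuple of aliases (GCS filesystems advertise `("gs", "gcs")`); the helper name is an assumption:

```python
import fsspec

def first_protocol(fs: fsspec.AbstractFileSystem) -> str:
    # fsspec allows `protocol` to be a single string or a tuple of
    # aliases; normalize to one string before building URIs/patterns.
    protocol = fs.protocol
    return protocol[0] if isinstance(protocol, (list, tuple)) else protocol
```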
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6105/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6104/comments
https://api.github.com/repos/huggingface/datasets/issues/6104/events
https://github.com/huggingface/datasets/issues/6104
1,828,959,107
I_kwDODunzps5tA7OD
6,104
HF Datasets data access is extremely slow even when in memory
{ "login": "NightMachinery", "id": 36224762, "node_id": "MDQ6VXNlcjM2MjI0NzYy", "avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NightMachinery", "html_url": "https://github.com/NightMachinery", "followers_url": "https://api.github.com/users/NightMachinery/followers", "following_url": "https://api.github.com/users/NightMachinery/following{/other_user}", "gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}", "starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions", "organizations_url": "https://api.github.com/users/NightMachinery/orgs", "repos_url": "https://api.github.com/users/NightMachinery/repos", "events_url": "https://api.github.com/users/NightMachinery/events{/privacy}", "received_events_url": "https://api.github.com/users/NightMachinery/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Possibly related:\r\n- https://github.com/pytorch/pytorch/issues/22462" ]
2023-07-31T11:12:19
2023-08-01T11:22:43
null
CONTRIBUTOR
null
null
null
### Describe the bug Doing a simple `some_dataset[:10]` can take more than a minute. Profiling it: <img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab"> `some_dataset` is completely in memory with no disk cache. This is proving fatal to my usage of HF Datasets. Is there a way I can forgo the arrow format and store the dataset as PyTorch tensors so that `_tensorize` is not needed? And is `_consolidate` supposed to take this long? It's faster to produce the dataset from scratch than to access it from HF Datasets! ### Steps to reproduce the bug I have uploaded the dataset that causes this problem [here](https://huggingface.co/datasets/NightMachinery/hf_datasets_bug1). ```python #!/usr/bin/env python3 import sys import time import torch from datasets import load_dataset def main(dataset_name): # Start the timer start_time = time.time() # Load the dataset from Hugging Face Hub dataset = load_dataset(dataset_name) # Set the dataset format as torch dataset.set_format(type="torch") # Perform an identity map dataset = dataset.map(lambda example: example, batched=True, batch_size=20) # End the timer end_time = time.time() # Print the time taken print(f"Time taken: {end_time - start_time:.2f} seconds") if __name__ == "__main__": dataset_name = "NightMachinery/hf_datasets_bug1" print(f"dataset_name: {dataset_name}") main(dataset_name) ``` ### Expected behavior _ ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
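A hedged micro-benchmark sketch to separate formatting cost from plain Arrow access, assuming the repro dataset above has a `train` split:

```python
import time
from datasets import load_dataset

ds = load_dataset("NightMachinery/hf_datasets_bug1", split="train")

t0 = time.time()
_ = ds[:10]                        # plain python-object access
t1 = time.time()
_ = ds.with_format("torch")[:10]   # access through the tensorizing formatter
t2 = time.time()
print(f"plain: {t1 - t0:.2f}s  torch-formatted: {t2 - t1:.2f}s")
```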
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6104/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6103/comments
https://api.github.com/repos/huggingface/datasets/issues/6103/events
https://github.com/huggingface/datasets/pull/6103
1,828,515,165
PR_kwDODunzps5Ww2gV
6,103
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6103). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006528 / 0.011353 (-0.004825) | 0.003909 / 0.011008 (-0.007099) | 0.083954 / 0.038508 (0.045446) | 0.070513 / 0.023109 (0.047404) | 0.344362 / 0.275898 (0.068464) | 0.370278 / 0.323480 (0.046798) | 0.005395 / 0.007986 (-0.002591) | 0.003323 / 0.004328 (-0.001005) | 0.064538 / 0.004250 (0.060288) | 0.055616 / 0.037052 (0.018564) | 0.353590 / 0.258489 (0.095101) | 0.382159 / 0.293841 (0.088318) | 0.031133 / 0.128546 (-0.097414) | 0.008429 / 0.075646 (-0.067217) | 0.288665 / 0.419271 (-0.130606) | 0.052626 / 0.043533 (0.009093) | 0.347676 / 0.255139 (0.092537) | 0.363726 / 0.283200 (0.080526) | 0.021956 / 0.141683 (-0.119727) | 1.506091 / 1.452155 (0.053936) | 1.563940 / 1.492716 (0.071223) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207658 / 0.018006 (0.189652) | 0.473411 / 0.000490 (0.472922) | 0.005437 / 0.000200 (0.005237) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027769 / 0.037411 (-0.009643) | 0.082566 / 0.014526 (0.068040) | 0.092700 / 0.176557 (-0.083857) | 0.152589 / 0.737135 (-0.584546) | 0.093772 / 0.296338 (-0.202566) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.401072 / 0.215209 (0.185863) | 3.997922 / 2.077655 (1.920267) | 2.028223 / 1.504120 (0.524103) | 1.845229 / 1.541195 (0.304035) | 1.883980 / 1.468490 (0.415489) | 0.485112 / 4.584777 (-4.099665) | 3.657048 / 3.745712 (-0.088664) | 4.998475 / 5.269862 (-0.271386) | 3.007417 / 4.565676 (-1.558259) | 0.057003 / 0.424275 (-0.367272) | 0.007270 / 0.007607 (-0.000338) | 0.482220 / 0.226044 (0.256176) | 4.817560 / 2.268929 (2.548631) | 2.484285 / 55.444624 (-52.960340) | 2.163327 / 6.876477 (-4.713149) | 2.326412 / 2.142072 (0.184339) | 0.600349 / 4.805227 (-4.204878) | 0.134245 / 6.500664 (-6.366419) | 0.060705 / 0.075469 (-0.014764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281440 / 1.841788 (-0.560347) | 19.165591 / 8.074308 (11.091283) | 14.007728 / 10.191392 (3.816336) | 0.168367 / 0.680424 (-0.512057) | 0.018149 / 0.534201 (-0.516052) | 0.391688 / 0.579283 (-0.187595) | 0.414528 / 0.434364 (-0.019836) | 0.456964 / 0.540337 (-0.083373) | 0.613807 / 1.386936 (-0.773129) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006502 / 0.011353 (-0.004851) | 0.003956 / 0.011008 (-0.007052) | 0.064297 / 0.038508 (0.025789) | 0.073430 / 0.023109 (0.050321) | 0.364113 / 0.275898 (0.088215) | 0.389021 / 0.323480 (0.065541) | 0.005375 / 0.007986 (-0.002611) | 0.003363 / 0.004328 (-0.000966) | 0.064404 / 0.004250 (0.060153) | 0.056664 / 0.037052 (0.019612) | 0.365504 / 0.258489 (0.107015) | 0.398477 / 0.293841 (0.104636) | 0.031739 / 0.128546 (-0.096807) | 0.008663 / 0.075646 (-0.066984) | 0.070757 / 0.419271 (-0.348515) | 0.051014 / 0.043533 (0.007481) | 0.368287 / 0.255139 (0.113148) | 0.382941 / 0.283200 (0.099742) | 0.024642 / 0.141683 (-0.117041) | 1.516721 / 1.452155 (0.064567) | 1.557625 / 1.492716 (0.064908) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208248 / 0.018006 (0.190242) | 0.443560 / 0.000490 (0.443070) | 0.004004 / 0.000200 
(0.003805) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031116 / 0.037411 (-0.006295) | 0.086814 / 0.014526 (0.072288) | 0.099111 / 0.176557 (-0.077445) | 0.155032 / 0.737135 (-0.582104) | 0.098938 / 0.296338 (-0.197401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413080 / 0.215209 (0.197871) | 4.115546 / 2.077655 (2.037891) | 2.162073 / 1.504120 (0.657953) | 2.008107 / 1.541195 (0.466912) | 2.052317 / 1.468490 (0.583827) | 0.485158 / 4.584777 (-4.099619) | 3.617478 / 3.745712 (-0.128234) | 5.030564 / 5.269862 (-0.239298) | 2.787812 / 4.565676 (-1.777865) | 0.057466 / 0.424275 (-0.366809) | 0.007656 / 0.007607 (0.000049) | 0.490037 / 0.226044 (0.263993) | 4.887896 / 2.268929 (2.618968) | 2.639644 / 55.444624 (-52.804981) | 2.258051 / 6.876477 (-4.618426) | 2.417573 / 2.142072 (0.275500) | 0.604473 / 4.805227 (-4.200754) | 0.134770 / 6.500664 (-6.365894) | 0.061709 / 0.075469 (-0.013760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.342500 / 1.841788 (-0.499288) | 19.354990 / 8.074308 (11.280682) | 14.161975 / 10.191392 (3.970583) | 0.157084 / 0.680424 (-0.523339) | 0.018227 / 0.534201 (-0.515974) | 0.391819 / 0.579283 (-0.187464) | 0.399157 / 0.434364 (-0.035207) | 0.460582 / 0.540337 (-0.079756) | 0.612183 / 1.386936 (-0.774753) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b20f6a82410dd47e89585bb932616a22e0eaf2e6 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated 
after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009318 / 0.011353 (-0.002035) | 0.005515 / 0.011008 (-0.005493) | 0.108532 / 0.038508 (0.070024) | 0.103583 / 0.023109 (0.080473) | 0.419249 / 0.275898 (0.143351) | 0.453573 / 0.323480 (0.130093) | 0.006601 / 0.007986 (-0.001384) | 0.005297 / 0.004328 (0.000968) | 0.082737 / 0.004250 (0.078487) | 0.064708 / 0.037052 (0.027656) | 0.425679 / 0.258489 (0.167190) | 0.462028 / 0.293841 (0.168187) | 0.048104 / 0.128546 (-0.080442) | 0.014069 / 0.075646 (-0.061577) | 0.377780 / 0.419271 (-0.041491) | 0.067510 / 0.043533 (0.023977) | 0.422421 / 0.255139 (0.167282) | 0.447127 / 0.283200 (0.163927) | 0.037745 / 0.141683 (-0.103938) | 1.855306 / 1.452155 (0.403152) | 1.943876 / 1.492716 (0.451160) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280161 / 0.018006 (0.262155) | 0.598001 / 0.000490 (0.597512) | 0.001130 / 0.000200 (0.000930) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036064 / 0.037411 (-0.001347) | 0.113256 / 0.014526 (0.098730) | 0.120598 / 0.176557 (-0.055959) | 0.191386 / 0.737135 (-0.545750) | 0.118125 / 0.296338 (-0.178214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616887 / 0.215209 (0.401678) | 6.085498 / 2.077655 (4.007844) | 2.639428 / 1.504120 (1.135308) | 2.215444 / 1.541195 (0.674249) | 2.311990 / 1.468490 (0.843500) | 0.820539 / 4.584777 (-3.764238) | 5.306010 / 3.745712 (1.560298) | 4.731726 / 5.269862 (-0.538136) | 3.053933 / 4.565676 (-1.511744) | 0.098862 / 0.424275 (-0.325413) | 0.009456 / 0.007607 (0.001849) | 0.725455 / 0.226044 (0.499411) | 7.367385 / 2.268929 (5.098457) | 3.464921 / 55.444624 (-51.979703) | 2.833868 / 6.876477 (-4.042608) | 3.033008 / 2.142072 (0.890935) | 1.036751 / 4.805227 (-3.768476) | 0.243646 / 6.500664 (-6.257018) | 0.081079 / 0.075469 (0.005610) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584695 / 1.841788 (-0.257093) | 25.150355 / 8.074308 (17.076047) | 21.826622 / 10.191392 (11.635230) | 0.212502 / 0.680424 (-0.467921) | 0.029865 / 0.534201 (-0.504335) | 0.496814 / 0.579283 (-0.082470) | 
0.611959 / 0.434364 (0.177595) | 0.550434 / 0.540337 (0.010097) | 0.800897 / 1.386936 (-0.586039) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009117 / 0.011353 (-0.002236) | 0.005236 / 0.011008 (-0.005772) | 0.082402 / 0.038508 (0.043894) | 0.090578 / 0.023109 (0.067468) | 0.487302 / 0.275898 (0.211404) | 0.523639 / 0.323480 (0.200159) | 0.006684 / 0.007986 (-0.001302) | 0.004306 / 0.004328 (-0.000023) | 0.083273 / 0.004250 (0.079023) | 0.068585 / 0.037052 (0.031532) | 0.487751 / 0.258489 (0.229262) | 0.538972 / 0.293841 (0.245131) | 0.048915 / 0.128546 (-0.079632) | 0.014312 / 0.075646 (-0.061335) | 0.091863 / 0.419271 (-0.327409) | 0.066114 / 0.043533 (0.022581) | 0.483552 / 0.255139 (0.228413) | 0.522250 / 0.283200 (0.239050) | 0.038533 / 0.141683 (-0.103150) | 1.803834 / 1.452155 (0.351680) | 1.891927 / 1.492716 (0.399211) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.336662 / 0.018006 (0.318656) | 0.611408 / 0.000490 (0.610918) | 0.014310 / 0.000200 (0.014110) | 0.000152 / 0.000054 (0.000097) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034755 / 0.037411 (-0.002656) | 0.101008 / 0.014526 (0.086483) | 0.124530 / 0.176557 (-0.052026) | 0.179844 / 0.737135 (-0.557292) | 0.125027 / 0.296338 (-0.171312) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.618341 / 0.215209 (0.403132) | 6.146848 / 2.077655 (4.069193) | 2.893305 / 1.504120 
(1.389185) | 2.608722 / 1.541195 (1.067528) | 2.671276 / 1.468490 (1.202786) | 0.860096 / 4.584777 (-3.724681) | 5.440671 / 3.745712 (1.694959) | 4.776958 / 5.269862 (-0.492903) | 3.098300 / 4.565676 (-1.467376) | 0.098664 / 0.424275 (-0.325611) | 0.009270 / 0.007607 (0.001663) | 0.712780 / 0.226044 (0.486735) | 7.199721 / 2.268929 (4.930793) | 3.620723 / 55.444624 (-51.823902) | 3.052218 / 6.876477 (-3.824259) | 3.321093 / 2.142072 (1.179021) | 1.070992 / 4.805227 (-3.734235) | 0.224091 / 6.500664 (-6.276573) | 0.083395 / 0.075469 (0.007926) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.716867 / 1.841788 (-0.124921) | 25.534617 / 8.074308 (17.460309) | 25.221014 / 10.191392 (15.029621) | 0.248098 / 0.680424 (-0.432326) | 0.029659 / 0.534201 (-0.504542) | 0.492929 / 0.579283 (-0.086355) | 0.618253 / 0.434364 (0.183889) | 0.577108 / 0.540337 (0.036771) | 0.803188 / 1.386936 (-0.583748) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#584db360eed9155e173b199ba5fc037562b7b862 \"CML watermark\")\n" ]
2023-07-31T06:44:05
2023-07-31T06:55:58
2023-07-31T06:45:41
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6103", "html_url": "https://github.com/huggingface/datasets/pull/6103", "diff_url": "https://github.com/huggingface/datasets/pull/6103.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6103.patch", "merged_at": "2023-07-31T06:45:41" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6103/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6103/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6102/comments
https://api.github.com/repos/huggingface/datasets/issues/6102/events
https://github.com/huggingface/datasets/pull/6102
1,828,494,896
PR_kwDODunzps5WwyGy
6,102
Release 2.14.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006517 / 0.011353 (-0.004836) | 0.004217 / 0.011008 (-0.006792) | 0.083162 / 0.038508 (0.044654) | 0.074476 / 0.023109 (0.051367) | 0.321193 / 0.275898 (0.045295) | 0.358348 / 0.323480 (0.034868) | 0.005531 / 0.007986 (-0.002455) | 0.003621 / 0.004328 (-0.000707) | 0.063819 / 0.004250 (0.059568) | 0.056524 / 0.037052 (0.019471) | 0.322145 / 0.258489 (0.063656) | 0.371415 / 0.293841 (0.077574) | 0.030612 / 0.128546 (-0.097934) | 0.008907 / 0.075646 (-0.066739) | 0.289451 / 0.419271 (-0.129821) | 0.051959 / 0.043533 (0.008426) | 0.317729 / 0.255139 (0.062590) | 0.339750 / 0.283200 (0.056550) | 0.022430 / 0.141683 (-0.119253) | 1.487661 / 1.452155 (0.035506) | 1.554916 / 1.492716 (0.062199) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296673 / 0.018006 (0.278667) | 0.599183 / 0.000490 (0.598694) | 0.002524 / 0.000200 (0.002324) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027898 / 0.037411 (-0.009514) | 0.080870 / 0.014526 (0.066344) | 0.094894 / 0.176557 (-0.081662) | 0.152350 / 0.737135 (-0.584785) | 0.095765 / 0.296338 (-0.200573) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415442 / 0.215209 (0.200233) | 4.161155 / 2.077655 (2.083500) | 2.117061 / 1.504120 (0.612941) | 1.937846 / 1.541195 (0.396651) | 1.979635 / 1.468490 
(0.511145) | 0.488381 / 4.584777 (-4.096396) | 3.509836 / 3.745712 (-0.235876) | 3.833074 / 5.269862 (-1.436788) | 2.307536 / 4.565676 (-2.258141) | 0.057059 / 0.424275 (-0.367216) | 0.007366 / 0.007607 (-0.000241) | 0.487752 / 0.226044 (0.261708) | 4.869406 / 2.268929 (2.600478) | 2.594775 / 55.444624 (-52.849849) | 2.191712 / 6.876477 (-4.684765) | 2.413220 / 2.142072 (0.271147) | 0.584513 / 4.805227 (-4.220714) | 0.132162 / 6.500664 (-6.368502) | 0.061059 / 0.075469 (-0.014410) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245178 / 1.841788 (-0.596610) | 20.624563 / 8.074308 (12.550255) | 14.675545 / 10.191392 (4.484153) | 0.165838 / 0.680424 (-0.514586) | 0.018700 / 0.534201 (-0.515501) | 0.392475 / 0.579283 (-0.186808) | 0.399884 / 0.434364 (-0.034480) | 0.457478 / 0.540337 (-0.082859) | 0.624553 / 1.386936 (-0.762383) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006716 / 0.011353 (-0.004637) | 0.004308 / 0.011008 (-0.006700) | 0.064495 / 0.038508 (0.025987) | 0.083194 / 0.023109 (0.060085) | 0.371994 / 0.275898 (0.096096) | 0.433045 / 0.323480 (0.109566) | 0.005535 / 0.007986 (-0.002450) | 0.003469 / 0.004328 (-0.000859) | 0.064342 / 0.004250 (0.060092) | 0.059362 / 0.037052 (0.022309) | 0.393819 / 0.258489 (0.135330) | 0.442591 / 0.293841 (0.148750) | 0.031594 / 0.128546 (-0.096952) | 0.008943 / 0.075646 (-0.066703) | 0.070689 / 0.419271 (-0.348582) | 0.049219 / 0.043533 (0.005686) | 0.361568 / 0.255139 (0.106429) | 0.417085 / 0.283200 (0.133886) | 0.025112 / 0.141683 (-0.116571) | 1.497204 / 1.452155 (0.045049) | 1.552781 / 1.492716 (0.060064) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.325254 / 0.018006 (0.307248) | 0.528399 / 0.000490 (0.527909) | 0.007429 / 0.000200 (0.007229) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029908 / 0.037411 (-0.007504) | 0.087114 / 0.014526 (0.072588) | 0.103366 / 0.176557 (-0.073191) | 0.155145 / 0.737135 (-0.581990) | 0.103458 / 0.296338 (-0.192880) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409432 / 0.215209 (0.194223) | 4.093327 / 2.077655 (2.015673) | 2.154115 / 1.504120 (0.649995) | 1.953492 / 1.541195 (0.412297) | 2.021532 / 1.468490 (0.553042) | 0.478928 / 4.584777 (-4.105849) | 3.515287 / 3.745712 (-0.230426) | 4.976239 / 5.269862 (-0.293623) | 2.832803 / 4.565676 (-1.732873) | 0.057239 / 0.424275 (-0.367036) | 0.007718 / 0.007607 (0.000111) | 0.484102 / 0.226044 (0.258057) | 4.833020 / 2.268929 (2.564092) | 2.564550 / 55.444624 (-52.880074) | 2.268969 / 6.876477 (-4.607508) | 2.513308 / 2.142072 (0.371235) | 0.582822 / 4.805227 (-4.222406) | 0.133989 / 6.500664 (-6.366675) | 0.062078 / 0.075469 (-0.013391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.393766 / 1.841788 (-0.448021) | 20.224546 / 8.074308 (12.150238) | 14.359438 / 10.191392 (4.168046) | 0.166358 / 0.680424 (-0.514066) | 0.018840 / 0.534201 (-0.515361) | 0.393206 / 0.579283 (-0.186077) | 0.404220 / 0.434364 (-0.030144) | 0.462346 / 0.540337 (-0.077992) | 0.603078 / 1.386936 (-0.783858) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53e8007baeff133aaad8cbb366196be18a5e57fd \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006835 / 0.011353 (-0.004518) | 0.004530 / 0.011008 (-0.006478) | 0.087506 / 0.038508 (0.048997) | 0.088289 / 0.023109 (0.065180) | 0.351575 / 0.275898 (0.075677) | 0.391873 / 0.323480 (0.068393) | 0.005627 / 0.007986 (-0.002359) | 0.003735 / 0.004328 (-0.000594) | 0.065747 / 0.004250 (0.061497) | 0.058779 / 0.037052 (0.021726) | 0.358076 / 0.258489 (0.099587) | 0.408466 / 0.293841 (0.114626) | 0.031369 / 0.128546 (-0.097178) | 0.008807 / 0.075646 (-0.066839) | 0.293253 / 0.419271 (-0.126019) | 0.052950 / 0.043533 (0.009417) | 0.350411 / 0.255139 (0.095272) | 0.384827 / 0.283200 (0.101627) | 0.026219 / 0.141683 (-0.115464) | 1.464290 / 1.452155 (0.012136) | 1.549688 / 1.492716 (0.056972) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.270354 / 0.018006 (0.252348) | 0.593436 / 0.000490 (0.592946) | 0.003872 / 0.000200 (0.003673) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031625 / 0.037411 (-0.005787) | 0.092599 / 0.014526 (0.078073) | 0.104619 / 0.176557 (-0.071938) | 0.163183 / 0.737135 (-0.573952) | 0.103245 / 0.296338 (-0.193094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390213 / 0.215209 (0.175004) | 3.894519 / 2.077655 (1.816864) | 1.905739 / 1.504120 (0.401619) | 1.728873 / 1.541195 (0.187678) | 1.838692 / 1.468490 (0.370202) | 0.484730 / 4.584777 (-4.100047) | 3.706749 / 3.745712 (-0.038963) | 5.572311 / 5.269862 (0.302449) | 3.389949 / 4.565676 (-1.175727) | 0.057315 / 0.424275 (-0.366960) | 0.007475 / 0.007607 (-0.000132) | 0.464690 / 0.226044 (0.238645) | 4.622242 / 2.268929 (2.353314) | 2.380957 / 55.444624 (-53.063667) | 2.038225 / 6.876477 (-4.838251) | 2.358881 / 2.142072 (0.216809) | 0.606358 / 4.805227 (-4.198869) | 0.133584 / 6.500664 (-6.367080) | 0.061894 / 0.075469 (-0.013575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259575 / 1.841788 (-0.582213) | 20.915216 / 8.074308 (12.840908) | 14.971952 / 10.191392 (4.780560) | 0.160206 / 0.680424 (-0.520218) | 0.018675 / 0.534201 (-0.515526) | 0.396821 / 0.579283 (-0.182462) | 0.430982 / 0.434364 (-0.003382) | 0.452895 / 0.540337 (-0.087443) | 0.647869 / 
1.386936 (-0.739067) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007194 / 0.011353 (-0.004158) | 0.004340 / 0.011008 (-0.006669) | 0.065125 / 0.038508 (0.026617) | 0.096243 / 0.023109 (0.073134) | 0.374361 / 0.275898 (0.098463) | 0.411863 / 0.323480 (0.088383) | 0.005813 / 0.007986 (-0.002172) | 0.003615 / 0.004328 (-0.000713) | 0.064953 / 0.004250 (0.060703) | 0.063171 / 0.037052 (0.026119) | 0.376238 / 0.258489 (0.117749) | 0.415826 / 0.293841 (0.121985) | 0.031926 / 0.128546 (-0.096620) | 0.008821 / 0.075646 (-0.066825) | 0.072150 / 0.419271 (-0.347122) | 0.049484 / 0.043533 (0.005951) | 0.369691 / 0.255139 (0.114552) | 0.390669 / 0.283200 (0.107470) | 0.025732 / 0.141683 (-0.115950) | 1.493833 / 1.452155 (0.041679) | 1.601786 / 1.492716 (0.109070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284279 / 0.018006 (0.266272) | 0.585909 / 0.000490 (0.585419) | 0.000411 / 0.000200 (0.000211) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033642 / 0.037411 (-0.003769) | 0.095328 / 0.014526 (0.080802) | 0.105810 / 0.176557 (-0.070746) | 0.159779 / 0.737135 (-0.577357) | 0.108938 / 0.296338 (-0.187400) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408112 / 0.215209 (0.192902) | 4.067035 / 2.077655 (1.989380) | 2.114504 / 1.504120 (0.610384) | 1.944027 / 1.541195 (0.402832) | 2.066117 / 1.468490 (0.597627) | 
0.486441 / 4.584777 (-4.098336) | 3.622659 / 3.745712 (-0.123053) | 3.399310 / 5.269862 (-1.870552) | 2.183151 / 4.565676 (-2.382525) | 0.057490 / 0.424275 (-0.366785) | 0.007955 / 0.007607 (0.000347) | 0.490221 / 0.226044 (0.264177) | 4.887301 / 2.268929 (2.618373) | 2.679806 / 55.444624 (-52.764819) | 2.258992 / 6.876477 (-4.617484) | 2.592493 / 2.142072 (0.450420) | 0.606515 / 4.805227 (-4.198712) | 0.135645 / 6.500664 (-6.365019) | 0.063956 / 0.075469 (-0.011513) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.331304 / 1.841788 (-0.510483) | 21.458611 / 8.074308 (13.384303) | 14.898964 / 10.191392 (4.707572) | 0.172110 / 0.680424 (-0.508314) | 0.018791 / 0.534201 (-0.515409) | 0.395944 / 0.579283 (-0.183339) | 0.424526 / 0.434364 (-0.009838) | 0.462517 / 0.540337 (-0.077821) | 0.610139 / 1.386936 (-0.776797) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09492ba523518289a84175ddb7ab3bc555e742ee \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005957 / 0.011353 (-0.005396) | 0.003581 / 0.011008 (-0.007427) | 0.079624 / 0.038508 (0.041116) | 0.058004 / 0.023109 (0.034895) | 0.309345 / 0.275898 (0.033447) | 0.346653 / 0.323480 (0.023173) | 0.005420 / 0.007986 (-0.002566) | 0.002906 / 0.004328 (-0.001423) | 0.061970 / 0.004250 (0.057720) | 0.047627 / 0.037052 (0.010575) | 0.314096 / 0.258489 (0.055607) | 0.361368 / 0.293841 (0.067527) | 0.027211 / 0.128546 (-0.101335) | 0.007853 / 0.075646 (-0.067793) | 0.260202 / 0.419271 (-0.159070) | 0.045308 / 0.043533 (0.001775) | 0.312150 / 0.255139 (0.057011) | 0.341085 / 0.283200 (0.057886) | 0.021302 / 0.141683 (-0.120381) | 1.430315 / 1.452155 (-0.021840) | 1.608989 / 1.492716 (0.116273) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185289 / 0.018006 (0.167283) | 0.423318 / 0.000490 (0.422828) | 0.005741 / 0.000200 (0.005541) | 
0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023777 / 0.037411 (-0.013634) | 0.071937 / 0.014526 (0.057412) | 0.079406 / 0.176557 (-0.097151) | 0.143815 / 0.737135 (-0.593320) | 0.081648 / 0.296338 (-0.214690) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431514 / 0.215209 (0.216305) | 4.314471 / 2.077655 (2.236817) | 2.305167 / 1.504120 (0.801047) | 2.137894 / 1.541195 (0.596699) | 2.161034 / 1.468490 (0.692544) | 0.511701 / 4.584777 (-4.073076) | 3.098213 / 3.745712 (-0.647499) | 4.086837 / 5.269862 (-1.183024) | 2.517184 / 4.565676 (-2.048492) | 0.058272 / 0.424275 (-0.366003) | 0.006415 / 0.007607 (-0.001192) | 0.504792 / 0.226044 (0.278747) | 5.046758 / 2.268929 (2.777829) | 2.752049 / 55.444624 (-52.692576) | 2.407707 / 6.876477 (-4.468770) | 2.532162 / 2.142072 (0.390090) | 0.597562 / 4.805227 (-4.207666) | 0.125935 / 6.500664 (-6.374729) | 0.060837 / 0.075469 (-0.014632) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257048 / 1.841788 (-0.584740) | 17.877849 / 8.074308 (9.803541) | 13.904805 / 10.191392 (3.713413) | 0.131647 / 0.680424 (-0.548776) | 0.016975 / 0.534201 (-0.517226) | 0.329651 / 0.579283 (-0.249633) | 0.354358 / 0.434364 (-0.080006) | 0.377545 / 0.540337 (-0.162792) | 0.545593 / 1.386936 (-0.841343) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005839 / 0.011353 (-0.005514) | 0.003580 / 0.011008 (-0.007428) | 0.062204 / 0.038508 (0.023696) | 0.057943 / 0.023109 (0.034834) | 0.400165 / 0.275898 (0.124267) | 0.427911 / 0.323480 (0.104431) | 0.004412 / 0.007986 (-0.003574) | 0.002794 / 0.004328 (-0.001534) | 0.062933 / 0.004250 (0.058683) | 0.046243 / 0.037052 (0.009191) | 0.413640 / 0.258489 (0.155151) | 0.418592 / 0.293841 (0.124751) | 0.027020 / 0.128546 (-0.101526) | 0.007927 / 0.075646 (-0.067720) | 0.067581 / 0.419271 (-0.351691) | 0.041927 / 0.043533 (-0.001606) | 0.381863 / 0.255139 (0.126724) | 0.415711 / 0.283200 (0.132511) | 0.019827 / 0.141683 (-0.121856) | 1.464049 / 1.452155 (0.011894) | 1.528387 / 1.492716 (0.035671) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224999 / 0.018006 (0.206993) | 0.419167 / 0.000490 (0.418678) | 0.000363 / 0.000200 (0.000163) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024827 / 0.037411 (-0.012585) | 0.077134 / 0.014526 (0.062608) | 0.085142 / 0.176557 (-0.091414) | 0.137400 / 0.737135 (-0.599735) | 0.086434 / 0.296338 (-0.209905) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452716 / 0.215209 (0.237507) | 4.530610 / 2.077655 (2.452955) | 2.467309 / 1.504120 (0.963189) | 2.300441 / 1.541195 (0.759246) | 2.323475 / 1.468490 (0.854985) | 0.501847 / 4.584777 (-4.082930) | 3.079432 / 3.745712 (-0.666280) | 2.793107 / 5.269862 (-2.476755) | 1.835010 / 4.565676 (-2.730666) | 0.057698 / 0.424275 (-0.366577) | 0.006756 / 0.007607 (-0.000851) | 0.529062 / 0.226044 (0.303017) | 5.287822 / 2.268929 (3.018894) | 2.908411 / 55.444624 (-52.536214) | 2.571627 / 6.876477 (-4.304850) | 2.691188 / 2.142072 (0.549116) | 0.592289 / 4.805227 (-4.212938) | 0.126091 / 6.500664 (-6.374573) | 0.062312 / 0.075469 (-0.013157) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.328854 / 1.841788 (-0.512933) | 18.185628 / 8.074308 (10.111320) | 13.858781 / 10.191392 (3.667389) | 0.142421 / 0.680424 (-0.538003) | 0.016535 / 0.534201 (-0.517666) | 0.330839 / 0.579283 (-0.248444) | 0.346559 / 0.434364 (-0.087805) | 0.389153 / 0.540337 (-0.151185) | 0.516897 / 1.386936 (-0.870039) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09492ba523518289a84175ddb7ab3bc555e742ee \"CML watermark\")\n" ]
2023-07-31T06:27:47
2023-07-31T06:48:09
2023-07-31T06:32:58
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6102", "html_url": "https://github.com/huggingface/datasets/pull/6102", "diff_url": "https://github.com/huggingface/datasets/pull/6102.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6102.patch", "merged_at": "2023-07-31T06:32:58" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6102/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6101/comments
https://api.github.com/repos/huggingface/datasets/issues/6101/events
https://github.com/huggingface/datasets/pull/6101
1,828,469,648
PR_kwDODunzps5WwspW
6,101
Release 2.14.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006543 / 0.011353 (-0.004810) | 0.003894 / 0.011008 (-0.007115) | 0.084742 / 0.038508 (0.046234) | 0.072942 / 0.023109 (0.049833) | 0.310722 / 0.275898 (0.034824) | 0.346806 / 0.323480 (0.023326) | 0.005373 / 0.007986 (-0.002613) | 0.003270 / 0.004328 (-0.001059) | 0.064379 / 0.004250 (0.060128) | 0.054876 / 0.037052 (0.017824) | 0.316794 / 0.258489 (0.058305) | 0.350353 / 0.293841 (0.056512) | 0.030683 / 0.128546 (-0.097863) | 0.008275 / 0.075646 (-0.067371) | 0.288747 / 0.419271 (-0.130525) | 0.051892 / 0.043533 (0.008359) | 0.315060 / 0.255139 (0.059921) | 0.331664 / 0.283200 (0.048464) | 0.023334 / 0.141683 (-0.118349) | 1.499734 / 1.452155 (0.047579) | 1.542006 / 1.492716 (0.049290) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210488 / 0.018006 (0.192482) | 0.462187 / 0.000490 (0.461697) | 0.001280 / 0.000200 (0.001080) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027812 / 0.037411 (-0.009599) | 0.082492 / 0.014526 (0.067966) | 0.096504 / 0.176557 (-0.080053) | 0.158164 / 0.737135 (-0.578972) | 0.096678 / 0.296338 (-0.199661) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403317 / 0.215209 (0.188108) | 4.008367 / 2.077655 (1.930713) | 2.033067 / 1.504120 (0.528947) | 1.869484 / 1.541195 (0.328290) | 1.947450 / 1.468490 
(0.478960) | 0.494048 / 4.584777 (-4.090729) | 3.631673 / 3.745712 (-0.114039) | 5.322167 / 5.269862 (0.052306) | 3.125570 / 4.565676 (-1.440107) | 0.057341 / 0.424275 (-0.366934) | 0.007318 / 0.007607 (-0.000289) | 0.483990 / 0.226044 (0.257945) | 4.830573 / 2.268929 (2.561645) | 2.543267 / 55.444624 (-52.901358) | 2.217890 / 6.876477 (-4.658587) | 2.435111 / 2.142072 (0.293038) | 0.597920 / 4.805227 (-4.207307) | 0.132690 / 6.500664 (-6.367974) | 0.060160 / 0.075469 (-0.015309) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247656 / 1.841788 (-0.594131) | 19.436984 / 8.074308 (11.362675) | 14.504249 / 10.191392 (4.312857) | 0.167444 / 0.680424 (-0.512980) | 0.018214 / 0.534201 (-0.515987) | 0.394790 / 0.579283 (-0.184493) | 0.413770 / 0.434364 (-0.020594) | 0.474290 / 0.540337 (-0.066048) | 0.646782 / 1.386936 (-0.740154) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006575 / 0.011353 (-0.004778) | 0.003924 / 0.011008 (-0.007084) | 0.064402 / 0.038508 (0.025893) | 0.072569 / 0.023109 (0.049460) | 0.361981 / 0.275898 (0.086083) | 0.398660 / 0.323480 (0.075180) | 0.005380 / 0.007986 (-0.002605) | 0.003355 / 0.004328 (-0.000974) | 0.065173 / 0.004250 (0.060923) | 0.057120 / 0.037052 (0.020067) | 0.366347 / 0.258489 (0.107858) | 0.402723 / 0.293841 (0.108882) | 0.031258 / 0.128546 (-0.097288) | 0.008499 / 0.075646 (-0.067147) | 0.070558 / 0.419271 (-0.348714) | 0.050089 / 0.043533 (0.006556) | 0.361280 / 0.255139 (0.106141) | 0.384497 / 0.283200 (0.101297) | 0.024789 / 0.141683 (-0.116893) | 1.492577 / 1.452155 (0.040422) | 1.572242 / 1.492716 (0.079525) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228054 / 0.018006 (0.210048) | 0.448317 / 0.000490 (0.447828) | 0.000368 / 0.000200 (0.000168) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030575 / 0.037411 (-0.006836) | 0.088604 / 0.014526 (0.074078) | 0.099317 / 0.176557 (-0.077239) | 0.152455 / 0.737135 (-0.584680) | 0.100444 / 0.296338 (-0.195894) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411876 / 0.215209 (0.196667) | 4.108187 / 2.077655 (2.030532) | 2.096371 / 1.504120 (0.592251) | 1.923532 / 1.541195 (0.382337) | 1.998345 / 1.468490 (0.529855) | 0.483853 / 4.584777 (-4.100924) | 3.622433 / 3.745712 (-0.123279) | 3.254430 / 5.269862 (-2.015431) | 2.044342 / 4.565676 (-2.521334) | 0.056756 / 0.424275 (-0.367519) | 0.007720 / 0.007607 (0.000113) | 0.487656 / 0.226044 (0.261612) | 4.882024 / 2.268929 (2.613096) | 2.585008 / 55.444624 (-52.859616) | 2.229251 / 6.876477 (-4.647225) | 2.408318 / 2.142072 (0.266246) | 0.617537 / 4.805227 (-4.187691) | 0.132102 / 6.500664 (-6.368562) | 0.061694 / 0.075469 (-0.013775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.362077 / 1.841788 (-0.479711) | 19.750714 / 8.074308 (11.676406) | 14.545299 / 10.191392 (4.353907) | 0.168666 / 0.680424 (-0.511758) | 0.018606 / 0.534201 (-0.515595) | 0.394760 / 0.579283 (-0.184523) | 0.410030 / 0.434364 (-0.024334) | 0.464742 / 0.540337 (-0.075596) | 0.610881 / 1.386936 (-0.776055) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53e8007baeff133aaad8cbb366196be18a5e57fd \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005836 / 0.011353 (-0.005517) | 0.003493 / 0.011008 (-0.007515) | 0.079877 / 0.038508 (0.041369) | 0.057299 / 0.023109 (0.034190) | 0.332945 / 0.275898 (0.057047) | 0.386615 / 0.323480 (0.063135) | 0.004437 / 0.007986 (-0.003548) | 0.002758 / 0.004328 (-0.001571) | 0.062668 / 0.004250 (0.058418) | 0.046135 / 0.037052 (0.009083) | 0.346160 / 0.258489 (0.087671) | 0.416720 / 0.293841 (0.122879) | 0.026678 / 0.128546 (-0.101868) | 0.007893 / 0.075646 (-0.067753) | 0.260427 / 0.419271 (-0.158845) | 0.044240 / 0.043533 (0.000707) | 0.328101 / 0.255139 (0.072963) | 0.380072 / 0.283200 (0.096872) | 0.020813 / 0.141683 (-0.120870) | 1.400202 / 1.452155 (-0.051952) | 1.475627 / 1.492716 (-0.017089) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.174479 / 0.018006 (0.156473) | 0.413810 / 0.000490 (0.413320) | 0.003059 / 0.000200 (0.002860) | 0.000212 / 0.000054 (0.000157) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023422 / 0.037411 (-0.013990) | 0.071519 / 0.014526 (0.056993) | 0.080555 / 0.176557 (-0.096001) | 0.143825 / 0.737135 (-0.593311) | 0.081182 / 0.296338 (-0.215157) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406858 / 0.215209 (0.191648) | 4.161475 / 2.077655 (2.083820) | 1.991800 / 1.504120 (0.487680) | 1.811224 / 1.541195 (0.270030) | 1.828809 / 1.468490 (0.360318) | 0.504882 / 4.584777 (-4.079895) | 2.985010 / 3.745712 (-0.760703) | 3.984856 / 5.269862 (-1.285006) | 2.477936 / 4.565676 (-2.087740) | 0.057553 / 0.424275 (-0.366722) | 0.006436 / 0.007607 (-0.001172) | 0.488061 / 0.226044 (0.262016) | 4.805501 / 2.268929 (2.536573) | 2.446508 / 55.444624 (-52.998116) | 2.051406 / 6.876477 (-4.825071) | 2.177696 / 2.142072 (0.035623) | 0.588021 / 4.805227 (-4.217207) | 0.125118 / 6.500664 (-6.375546) | 0.060885 / 0.075469 (-0.014584) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197130 / 1.841788 (-0.644658) | 17.867450 / 8.074308 (9.793142) | 13.536895 / 10.191392 (3.345503) | 0.137603 / 0.680424 (-0.542821) | 0.016706 / 0.534201 (-0.517495) | 0.327642 / 0.579283 (-0.251641) | 0.347201 / 0.434364 (-0.087163) | 0.379570 / 0.540337 (-0.160768) | 0.517825 / 1.386936 (-0.869111) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005769 / 0.011353 (-0.005584) | 0.003414 / 0.011008 (-0.007594) | 0.063198 / 0.038508 (0.024690) | 0.056020 / 0.023109 (0.032911) | 0.393333 / 0.275898 (0.117435) | 0.421166 / 0.323480 (0.097686) | 0.004360 / 0.007986 (-0.003626) | 0.002860 / 0.004328 (-0.001469) | 0.062712 / 0.004250 (0.058461) | 0.045363 / 0.037052 (0.008311) | 0.413156 / 0.258489 (0.154667) | 0.422897 / 0.293841 (0.129056) | 0.027092 / 0.128546 (-0.101455) | 0.007960 / 0.075646 (-0.067687) | 0.068531 / 0.419271 (-0.350740) | 0.041402 / 0.043533 (-0.002131) | 0.377008 / 0.255139 (0.121869) | 0.409142 / 0.283200 (0.125942) | 0.019707 / 0.141683 (-0.121976) | 1.440556 / 1.452155 (-0.011599) | 1.487403 / 1.492716 (-0.005314) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224355 / 0.018006 (0.206349) | 0.397855 / 0.000490 (0.397365) | 0.000363 / 0.000200 (0.000163) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025107 / 0.037411 (-0.012305) | 0.076404 / 0.014526 (0.061878) | 0.083194 / 0.176557 (-0.093362) | 0.135347 / 0.737135 (-0.601789) | 0.084786 / 0.296338 (-0.211553) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433024 / 0.215209 (0.217815) | 4.323879 / 2.077655 (2.246224) | 2.263004 / 1.504120 (0.758884) | 2.072053 / 1.541195 (0.530858) | 2.113916 / 1.468490 (0.645426) | 0.502742 / 4.584777 
(-4.082035) | 3.001716 / 3.745712 (-0.743996) | 2.777960 / 5.269862 (-2.491901) | 1.826514 / 4.565676 (-2.739162) | 0.057735 / 0.424275 (-0.366540) | 0.006671 / 0.007607 (-0.000937) | 0.503347 / 0.226044 (0.277303) | 5.037308 / 2.268929 (2.768380) | 2.679146 / 55.444624 (-52.765478) | 2.410899 / 6.876477 (-4.465577) | 2.467341 / 2.142072 (0.325268) | 0.589824 / 4.805227 (-4.215403) | 0.125529 / 6.500664 (-6.375135) | 0.061950 / 0.075469 (-0.013520) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.304128 / 1.841788 (-0.537659) | 17.950215 / 8.074308 (9.875907) | 13.673768 / 10.191392 (3.482376) | 0.129863 / 0.680424 (-0.550561) | 0.016720 / 0.534201 (-0.517481) | 0.329795 / 0.579283 (-0.249488) | 0.339057 / 0.434364 (-0.095307) | 0.382279 / 0.540337 (-0.158059) | 0.507337 / 1.386936 (-0.879599) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef05b6f99a2b19990c6f5e4e28d95d28781570db \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006199 / 0.011353 (-0.005154) | 0.003749 / 0.011008 (-0.007259) | 0.080600 / 0.038508 (0.042092) | 0.061017 / 0.023109 (0.037908) | 0.319966 / 0.275898 (0.044067) | 0.354937 / 0.323480 (0.031457) | 0.004854 / 0.007986 (-0.003131) | 0.002996 / 0.004328 (-0.001333) | 0.063100 / 0.004250 (0.058849) | 0.050063 / 0.037052 (0.013011) | 0.316744 / 0.258489 (0.058255) | 0.358001 / 0.293841 (0.064160) | 0.027503 / 0.128546 (-0.101043) | 0.007876 / 0.075646 (-0.067771) | 0.262211 / 0.419271 (-0.157060) | 0.045717 / 0.043533 (0.002184) | 0.317188 / 0.255139 (0.062049) | 0.342404 / 0.283200 (0.059205) | 0.020194 / 0.141683 (-0.121489) | 1.498672 / 1.452155 (0.046517) | 1.545479 / 1.492716 (0.052762) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210985 / 0.018006 (0.192979) | 0.433592 / 0.000490 (0.433102) | 0.002864 / 0.000200 (0.002664) | 0.000079 / 0.000054 
(0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023463 / 0.037411 (-0.013948) | 0.073375 / 0.014526 (0.058850) | 0.083082 / 0.176557 (-0.093475) | 0.142583 / 0.737135 (-0.594552) | 0.084267 / 0.296338 (-0.212071) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412890 / 0.215209 (0.197681) | 4.131421 / 2.077655 (2.053766) | 1.969164 / 1.504120 (0.465044) | 1.772379 / 1.541195 (0.231185) | 1.834154 / 1.468490 (0.365664) | 0.496290 / 4.584777 (-4.088487) | 3.056504 / 3.745712 (-0.689208) | 3.400962 / 5.269862 (-1.868900) | 2.120575 / 4.565676 (-2.445101) | 0.056932 / 0.424275 (-0.367343) | 0.006412 / 0.007607 (-0.001195) | 0.484521 / 0.226044 (0.258477) | 4.817474 / 2.268929 (2.548545) | 2.464075 / 55.444624 (-52.980549) | 2.085056 / 6.876477 (-4.791421) | 2.324516 / 2.142072 (0.182444) | 0.592013 / 4.805227 (-4.213214) | 0.132232 / 6.500664 (-6.368432) | 0.062825 / 0.075469 (-0.012645) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228080 / 1.841788 (-0.613708) | 18.555385 / 8.074308 (10.481077) | 13.939565 / 10.191392 (3.748173) | 0.145979 / 0.680424 (-0.534445) | 0.016823 / 0.534201 (-0.517377) | 0.330569 / 0.579283 (-0.248714) | 0.358094 / 0.434364 (-0.076270) | 0.384642 / 0.540337 (-0.155696) | 0.518347 / 1.386936 (-0.868589) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006198 / 0.011353 (-0.005155) | 0.003670 / 0.011008 (-0.007338) | 0.062502 / 0.038508 (0.023994) | 0.064339 / 0.023109 (0.041229) | 0.428414 / 0.275898 (0.152516) | 0.463899 / 0.323480 (0.140420) | 0.005524 / 0.007986 (-0.002462) | 0.002915 / 0.004328 (-0.001413) | 0.062521 / 0.004250 (0.058270) | 0.051182 / 0.037052 (0.014130) | 0.431144 / 0.258489 (0.172655) | 0.469465 / 0.293841 (0.175624) | 0.027463 / 0.128546 (-0.101083) | 0.007974 / 0.075646 (-0.067673) | 0.068029 / 0.419271 (-0.351242) | 0.042123 / 0.043533 (-0.001409) | 0.428667 / 0.255139 (0.173528) | 0.455917 / 0.283200 (0.172717) | 0.023264 / 0.141683 (-0.118419) | 1.426986 / 1.452155 (-0.025168) | 1.500049 / 1.492716 (0.007332) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207264 / 0.018006 (0.189258) | 0.440738 / 0.000490 (0.440248) | 0.000802 / 0.000200 (0.000602) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026245 / 0.037411 (-0.011166) | 0.078749 / 0.014526 (0.064223) | 0.087873 / 0.176557 (-0.088684) | 0.141518 / 0.737135 (-0.595617) | 0.089811 / 0.296338 (-0.206527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418955 / 0.215209 (0.203746) | 4.177881 / 2.077655 (2.100226) | 2.162678 / 1.504120 (0.658558) | 1.998969 / 1.541195 (0.457775) | 2.066720 / 1.468490 (0.598230) | 0.496850 / 4.584777 (-4.087927) | 3.041179 / 3.745712 (-0.704534) | 4.126039 / 5.269862 (-1.143823) | 2.740507 / 4.565676 (-1.825169) | 0.058025 / 0.424275 (-0.366250) | 0.006846 / 0.007607 (-0.000761) | 0.493281 / 0.226044 (0.267237) | 4.930196 / 2.268929 (2.661268) | 2.685152 / 55.444624 (-52.759472) | 2.378247 / 6.876477 (-4.498230) | 2.469103 / 2.142072 (0.327031) | 0.585346 / 4.805227 (-4.219882) | 0.126099 / 6.500664 (-6.374565) | 0.062946 / 0.075469 (-0.012523) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.313892 / 1.841788 (-0.527896) | 19.177117 / 8.074308 (11.102809) | 14.081321 / 10.191392 (3.889929) | 0.133948 / 0.680424 (-0.546476) | 0.017128 / 0.534201 (-0.517073) | 0.332241 / 0.579283 (-0.247042) | 0.373218 / 0.434364 (-0.061145) | 0.395308 / 0.540337 (-0.145030) | 0.529883 / 1.386936 (-0.857053) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#16f7c7677942083436062b904b74643accb9bcac \"CML watermark\")\n" ]
2023-07-31T06:05:36
2023-07-31T06:33:00
2023-07-31T06:18:17
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6101", "html_url": "https://github.com/huggingface/datasets/pull/6101", "diff_url": "https://github.com/huggingface/datasets/pull/6101.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6101.patch", "merged_at": "2023-07-31T06:18:17" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6101/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6100/comments
https://api.github.com/repos/huggingface/datasets/issues/6100/events
https://github.com/huggingface/datasets/issues/6100
1,828,118,930
I_kwDODunzps5s9uGS
6,100
TypeError when loading from GCP bucket
{ "login": "bilelomrani1", "id": 16692099, "node_id": "MDQ6VXNlcjE2NjkyMDk5", "avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bilelomrani1", "html_url": "https://github.com/bilelomrani1", "followers_url": "https://api.github.com/users/bilelomrani1/followers", "following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}", "gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}", "starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions", "organizations_url": "https://api.github.com/users/bilelomrani1/orgs", "repos_url": "https://api.github.com/users/bilelomrani1/repos", "events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}", "received_events_url": "https://api.github.com/users/bilelomrani1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @bilelomrani1.\r\n\r\nWe are fixing it. ", "We have fixed it. We are planning to do a patch release today." ]
2023-07-30T23:03:00
2023-08-03T10:00:48
2023-08-01T10:38:55
NONE
null
null
null
### Describe the bug Loading a dataset from a GCP bucket raises a type error. This bug was introduced recently (either in 2.14 or 2.14.1), and appeared during a migration from 2.13.1. ### Steps to reproduce the bug Load any file from a GCP bucket: ```python import datasets datasets.load_dataset("json", data_files=["gs://..."]) ``` The following exception is raised: ```python Traceback (most recent call last): ... packages/datasets/data_files.py", line 335, in resolve_pattern protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else "" TypeError: can only concatenate tuple (not "str") to tuple ``` With a `GoogleFileSystem`, the attribute `fs.protocol` is a tuple `('gs', 'gcs')` and hence cannot be concatenated with a string. ### Expected behavior The file should be loaded without exception. ### Environment info - `datasets` version: 2.14.1 - Platform: macOS-13.2.1-x86_64-i386-64bit - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
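For context on the bug report above, a minimal sketch of the failure mode and of one possible normalization, assuming an fsspec-style filesystem (gcsfs registers its protocol as a tuple of aliases, and instantiating it requires gcsfs to be installed). The defensive cast below is a hypothetical illustration, not the actual patch:

```python
import fsspec

# gcsfs (like several fsspec backends) registers multiple protocol aliases,
# so `fs.protocol` is a tuple rather than a plain string.
fs = fsspec.filesystem("gs")  # requires gcsfs to be installed
print(fs.protocol)  # ('gs', 'gcs')

# Hypothetical defensive normalization: pick the first alias when the
# protocol is a tuple/list before building the "<protocol>://" prefix.
protocol = fs.protocol[0] if isinstance(fs.protocol, (tuple, list)) else fs.protocol
protocol_prefix = protocol + "://" if protocol != "file" else ""
print(protocol_prefix)  # 'gs://'
```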
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6100/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6099/comments
https://api.github.com/repos/huggingface/datasets/issues/6099/events
https://github.com/huggingface/datasets/issues/6099
1,827,893,576
I_kwDODunzps5s83FI
6,099
How do i get "amazon_us_reviews
{ "login": "IqraBaluch", "id": 57810189, "node_id": "MDQ6VXNlcjU3ODEwMTg5", "avatar_url": "https://avatars.githubusercontent.com/u/57810189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IqraBaluch", "html_url": "https://github.com/IqraBaluch", "followers_url": "https://api.github.com/users/IqraBaluch/followers", "following_url": "https://api.github.com/users/IqraBaluch/following{/other_user}", "gists_url": "https://api.github.com/users/IqraBaluch/gists{/gist_id}", "starred_url": "https://api.github.com/users/IqraBaluch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IqraBaluch/subscriptions", "organizations_url": "https://api.github.com/users/IqraBaluch/orgs", "repos_url": "https://api.github.com/users/IqraBaluch/repos", "events_url": "https://api.github.com/users/IqraBaluch/events{/privacy}", "received_events_url": "https://api.github.com/users/IqraBaluch/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Seems like the problem isn't with the library, but the dataset itself hosted on AWS S3.\r\n\r\nIts [homepage](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) returns an `AccessDenied` XML response, which is the same thing you get if you try to log the `record` that triggers the exception\r\n\r\n```python\r\ntry:\r\n example = self.info.features.encode_example(record) if self.info.features is not None else record\r\nexcept Exception as e:\r\n print(record)\r\n```\r\n\r\n⬇️\r\n\r\n```\r\n{'<?xml version=\"1.0\" encoding=\"UTF-8\"?>': '<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>N2HFJ82ZV8SZW9BV</RequestId><HostId>Zw2DQ0V2GdRmvH5qWEpumK4uj5+W8YPcilQbN9fLBr3VqQOcKPHOhUZLG3LcM9X5fkOetxp48Os=</HostId></Error>'}\r\n```", "I'm getting same errors when loading this dataset", "I have figured it out. there was an option of **parquet formated files** i downloaded some from there. ", "this dataset is unfortunately no longer public", "Thanks for reporting, @IqraBaluch.\r\n\r\nWe contacted the authors and unfortunately they reported that Amazon has decided to stop distributing this dataset.", "If anyone still needs this dataset, you could find it on kaggle here : https://www.kaggle.com/datasets/cynthiarempel/amazon-us-customer-reviews-dataset", "Thanks @Maryam-Mostafa ", "@albertvillanova don't tell 'em, we have figured it out. XD", "I noticed that some book data is missing, we can only get Books_v1_02 data. \r\nIs there any way we can get the Books_v1_00 and Books_v1_01? \r\nReally appreciate !!!", "@albertvillanova will this dataset be retired given the data are no longer hosted on S3? What is done in cases such as these?" ]
2023-07-30T11:02:17
2023-08-21T05:08:08
2023-08-10T05:02:35
NONE
null
null
null
### Feature request I have been trying to load 'amazon_us_reviews' but have been unable to do so. `amazon_us_reviews = load_dataset('amazon_us_reviews')` `print(amazon_us_reviews)` > [ValueError: Config name is missing. Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1_00', 'Video_DVD_v1_00', 'Video_v1_00', 'Toys_v1_00', 'Tools_v1_00', 'Sports_v1_00', 'Software_v1_00', 'Shoes_v1_00', 'Pet_Products_v1_00', 'Personal_Care_Appliances_v1_00', 'PC_v1_00', 'Outdoors_v1_00', 'Office_Products_v1_00', 'Musical_Instruments_v1_00', 'Music_v1_00', 'Mobile_Electronics_v1_00', 'Mobile_Apps_v1_00', 'Major_Appliances_v1_00', 'Luggage_v1_00', 'Lawn_and_Garden_v1_00', 'Kitchen_v1_00', 'Jewelry_v1_00', 'Home_Improvement_v1_00', 'Home_Entertainment_v1_00', 'Home_v1_00', 'Health_Personal_Care_v1_00', 'Grocery_v1_00', 'Gift_Card_v1_00', 'Furniture_v1_00', 'Electronics_v1_00', 'Digital_Video_Games_v1_00', 'Digital_Video_Download_v1_00', 'Digital_Software_v1_00', 'Digital_Music_Purchase_v1_00', 'Digital_Ebook_Purchase_v1_00', 'Camera_v1_00', 'Books_v1_00', 'Beauty_v1_00', 'Baby_v1_00', 'Automotive_v1_00', 'Apparel_v1_00', 'Digital_Ebook_Purchase_v1_01', 'Books_v1_01', 'Books_v1_02'] Example of usage: `load_dataset('amazon_us_reviews', 'Wireless_v1_00')`] __________________________________________________________________________ `amazon_us_reviews = load_dataset('amazon_us_reviews', 'Watches_v1_00') print(amazon_us_reviews)` **ERROR** Generating train split: 0% 0/960872 [00:00<?, ? examples/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1692 ) -> 1693 example = self.info.features.encode_example(record) if self.info.features is not None else record 1694 writer.write(example, key) 11 frames KeyError: 'marketplace' The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1710 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1711 e = e.__context__ -> 1712 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1713 1714 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ### Motivation The dataset I'm using: https://huggingface.co/datasets/amazon_us_reviews ### Your contribution What is the best way to load this data?
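Since the loading script depends on files Amazon no longer hosts, a hedged workaround sketch, assuming you have downloaded Parquet shards locally (for example from the Kaggle mirror linked in the comments above); the glob path below is purely illustrative:

```python
from datasets import load_dataset

# Load locally downloaded Parquet shards directly with the generic
# "parquet" builder instead of the broken "amazon_us_reviews" script.
# The data_files path is a placeholder for wherever the files were saved.
reviews = load_dataset(
    "parquet",
    data_files={"train": "amazon_us_reviews/Watches_v1_00/*.parquet"},
)
print(reviews["train"][0])
```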
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6099/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6098/comments
https://api.github.com/repos/huggingface/datasets/issues/6098/events
https://github.com/huggingface/datasets/pull/6098
1,827,655,071
PR_kwDODunzps5WuCn1
6,098
Expanduser in save_to_disk()
{ "login": "Unknown3141592", "id": 51715864, "node_id": "MDQ6VXNlcjUxNzE1ODY0", "avatar_url": "https://avatars.githubusercontent.com/u/51715864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Unknown3141592", "html_url": "https://github.com/Unknown3141592", "followers_url": "https://api.github.com/users/Unknown3141592/followers", "following_url": "https://api.github.com/users/Unknown3141592/following{/other_user}", "gists_url": "https://api.github.com/users/Unknown3141592/gists{/gist_id}", "starred_url": "https://api.github.com/users/Unknown3141592/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Unknown3141592/subscriptions", "organizations_url": "https://api.github.com/users/Unknown3141592/orgs", "repos_url": "https://api.github.com/users/Unknown3141592/repos", "events_url": "https://api.github.com/users/Unknown3141592/events{/privacy}", "received_events_url": "https://api.github.com/users/Unknown3141592/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-07-29T20:50:45
2023-07-29T20:58:57
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6098", "html_url": "https://github.com/huggingface/datasets/pull/6098", "diff_url": "https://github.com/huggingface/datasets/pull/6098.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6098.patch", "merged_at": null }
Fixes #5651. The same problem occurs when loading from disk, so I fixed it there too. I am not sure why the case distinction between local and remote filesystems is even necessary for `DatasetDict` when saving to disk. IMO this could be removed (leaving only `fs.makedirs(dataset_dict_path, exist_ok=True)`).
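A small sketch of the behavior this PR targets, assuming a local filesystem; expanding the tilde up front is the conceptual fix, and the path below is illustrative:

```python
import os
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})

# Without expansion, a literal "~" in the path can end up as a directory
# named "~" in the current working directory. Expanding it first resolves
# the user's home directory, e.g. "/home/<user>/my_dataset".
path = os.path.expanduser("~/my_dataset")
ds.save_to_disk(path)
```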
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6098/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6098/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6097/comments
https://api.github.com/repos/huggingface/datasets/issues/6097/events
https://github.com/huggingface/datasets/issues/6097
1,827,054,143
I_kwDODunzps5s5qI_
6,097
Dataset.get_nearest_examples does not return all feature values for the k most similar datapoints - side effect of Dataset.set_format
{ "login": "aschoenauer-sebag", "id": 2538048, "node_id": "MDQ6VXNlcjI1MzgwNDg=", "avatar_url": "https://avatars.githubusercontent.com/u/2538048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aschoenauer-sebag", "html_url": "https://github.com/aschoenauer-sebag", "followers_url": "https://api.github.com/users/aschoenauer-sebag/followers", "following_url": "https://api.github.com/users/aschoenauer-sebag/following{/other_user}", "gists_url": "https://api.github.com/users/aschoenauer-sebag/gists{/gist_id}", "starred_url": "https://api.github.com/users/aschoenauer-sebag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aschoenauer-sebag/subscriptions", "organizations_url": "https://api.github.com/users/aschoenauer-sebag/orgs", "repos_url": "https://api.github.com/users/aschoenauer-sebag/repos", "events_url": "https://api.github.com/users/aschoenauer-sebag/events{/privacy}", "received_events_url": "https://api.github.com/users/aschoenauer-sebag/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Actually, my bad -- specifying\r\n```python\r\nfoo.set_format('numpy', ['vectors'], output_all_columns=True)\r\n```\r\nfixes it." ]
2023-07-28T20:31:59
2023-07-28T20:49:58
2023-07-28T20:49:58
NONE
null
null
null
### Describe the bug Hi team! I observe that there seems to be a side effect of `Dataset.set_format`: after setting a format and creating a FAISS index, the method `get_nearest_examples` from the `Dataset` class fails to retrieve anything other than the embeddings themselves, which is not very useful. This is not the case when the `set_format` method is not used: you can then also retrieve any other feature value, such as an index/id/etc. Are you able to reproduce what I observe? ### Steps to reproduce the bug ```python from datasets import Dataset import numpy as np foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]} foo = Dataset.from_dict(foo) foo.set_format('numpy', ['vectors']) foo.add_faiss_index('vectors') new_vector = np.random.random(1024) scores, res = foo.get_nearest_examples('vectors', new_vector, k=3) ``` This will return only the following for the vectors most similar to `new_vector`; in particular, it will not return the `ids` feature: ``` {'vectors': array([[random values ...]])} ``` ### Expected behavior The expected behavior happens when the `set_format` method is not called: ```python from datasets import Dataset import numpy as np foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]} foo = Dataset.from_dict(foo) # foo.set_format('numpy', ['vectors']) foo.add_faiss_index('vectors') new_vector = np.random.random(1024) scores, res = foo.get_nearest_examples('vectors', new_vector, k=3) ``` This *will* return the `ids` of the similar vectors, though unfortunately as a list of lists in lieu of the array (for caching reasons, I think, from what I read elsewhere): ``` {'vectors': [[random values on multiple lines...]], 'ids': ['x', 'y', 'z']} ``` ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
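The author's follow-up in the comment thread above resolves this; restated as a complete sketch (it requires `faiss` to be installed for the index), keeping the numpy format for the vectors while still returning the unformatted columns:

```python
import numpy as np
from datasets import Dataset

foo = Dataset.from_dict(
    {"vectors": np.random.random((100, 1024)), "ids": [str(u) for u in range(100)]}
)
# output_all_columns=True keeps returning the non-formatted columns
# (here 'ids') alongside the numpy-formatted 'vectors'.
foo.set_format("numpy", columns=["vectors"], output_all_columns=True)
foo.add_faiss_index("vectors")  # requires faiss to be installed

scores, res = foo.get_nearest_examples("vectors", np.random.random(1024), k=3)
print(res.keys())  # now includes 'ids' alongside 'vectors'
```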
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6097/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6097/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6096/comments
https://api.github.com/repos/huggingface/datasets/issues/6096/events
https://github.com/huggingface/datasets/pull/6096
1,826,731,091
PR_kwDODunzps5Wq9Hb
6,096
Add `fsspec` support for `to_json`, `to_csv`, and `to_parquet`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6096). All of your documentation changes will be reflected on that endpoint.", "Hi @mariosasko and/or @lhoestq, friendly pining you guys here! Let me know if there's anything else to be included within this PR, thanks! 🤗 " ]
2023-07-28T16:36:59
2023-09-06T13:58:09
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6096", "html_url": "https://github.com/huggingface/datasets/pull/6096", "diff_url": "https://github.com/huggingface/datasets/pull/6096.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6096.patch", "merged_at": null }
Hi to whoever is reading this! 🤗 (Most likely @mariosasko) ## What's in this PR? This PR replaces the `open` from Python with `fsspec.open` and adds the argument `storage_options` for the methods `to_json`, `to_csv`, and `to_parquet`, to allow users to export any 🤗`Dataset` into a file in a file-system as requested at #6086. ## What's missing in this PR? As per `to_json`, `to_csv`, and `to_parquet` docstrings for the recently included `storage_options` arg, I've scoped it to 2.15.0, so we should check that before merging in case we want to scope that for 2.14.2 instead. Additionally, should we also add `fsspec` support for the `from_csv`, `from_json`, and `from_parquet` methods? If you want me to do so @mariosasko just let me know and I'll create another PR to support that too!
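A hedged usage sketch of what the PR above would enable once merged, assuming `s3fs` is installed; the bucket name and credential placeholders are illustrative, not real values:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"]})

# With fsspec support, the export methods can write straight to object
# storage by passing the filesystem options through `storage_options`.
ds.to_parquet(
    "s3://my-bucket/data/train.parquet",
    storage_options={"key": "<aws-access-key>", "secret": "<aws-secret-key>"},
)
```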
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6096/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6095
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6095/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6095/comments
https://api.github.com/repos/huggingface/datasets/issues/6095/events
https://github.com/huggingface/datasets/pull/6095
1,826,496,967
PR_kwDODunzps5WqJtr
6,095
Fix deprecation of errors in TextConfig
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012497 / 0.011353 (0.001144) | 0.005355 / 0.011008 (-0.005654) | 0.106018 / 0.038508 (0.067510) | 0.093069 / 0.023109 (0.069960) | 0.394699 / 0.275898 (0.118801) | 0.449723 / 0.323480 (0.126243) | 0.006434 / 0.007986 (-0.001552) | 0.004187 / 0.004328 (-0.000141) | 0.079620 / 0.004250 (0.075370) | 0.062513 / 0.037052 (0.025460) | 0.410305 / 0.258489 (0.151816) | 0.467231 / 0.293841 (0.173390) | 0.048130 / 0.128546 (-0.080416) | 0.013747 / 0.075646 (-0.061899) | 0.357979 / 0.419271 (-0.061293) | 0.064764 / 0.043533 (0.021231) | 0.411029 / 0.255139 (0.155890) | 0.454734 / 0.283200 (0.171534) | 0.037215 / 0.141683 (-0.104468) | 1.801331 / 1.452155 (0.349176) | 1.951628 / 1.492716 (0.458912) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231073 / 0.018006 (0.213067) | 0.564179 / 0.000490 (0.563689) | 0.000947 / 0.000200 (0.000747) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030629 / 0.037411 (-0.006783) | 0.092522 / 0.014526 (0.077996) | 0.109781 / 0.176557 (-0.066775) | 0.183185 / 0.737135 (-0.553950) | 0.109679 / 0.296338 (-0.186660) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.600095 / 0.215209 (0.384886) | 6.072868 / 2.077655 (3.995213) | 2.684109 
/ 1.504120 (1.179989) | 2.436204 / 1.541195 (0.895010) | 2.514667 / 1.468490 (1.046177) | 0.865455 / 4.584777 (-3.719322) | 5.245561 / 3.745712 (1.499849) | 5.628688 / 5.269862 (0.358826) | 3.457343 / 4.565676 (-1.108333) | 0.107563 / 0.424275 (-0.316712) | 0.008803 / 0.007607 (0.001196) | 0.754014 / 0.226044 (0.527970) | 7.341226 / 2.268929 (5.072297) | 3.482090 / 55.444624 (-51.962534) | 2.726071 / 6.876477 (-4.150406) | 3.168494 / 2.142072 (1.026422) | 1.023517 / 4.805227 (-3.781710) | 0.207440 / 6.500664 (-6.293224) | 0.073642 / 0.075469 (-0.001827) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.588636 / 1.841788 (-0.253152) | 23.305257 / 8.074308 (15.230949) | 22.071476 / 10.191392 (11.880084) | 0.242044 / 0.680424 (-0.438379) | 0.028830 / 0.534201 (-0.505371) | 0.461414 / 0.579283 (-0.117869) | 0.591024 / 0.434364 (0.156660) | 0.548984 / 0.540337 (0.008646) | 0.783318 / 1.386936 (-0.603618) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008724 / 0.011353 (-0.002629) | 0.004638 / 0.011008 (-0.006371) | 0.081024 / 0.038508 (0.042516) | 0.077533 / 0.023109 (0.054423) | 0.444827 / 0.275898 (0.168929) | 0.507812 / 0.323480 (0.184332) | 0.006017 / 0.007986 (-0.001968) | 0.004204 / 0.004328 (-0.000124) | 0.082154 / 0.004250 (0.077904) | 0.063818 / 0.037052 (0.026765) | 0.463468 / 0.258489 (0.204979) | 0.536784 / 0.293841 (0.242943) | 0.046393 / 0.128546 (-0.082153) | 0.014349 / 0.075646 (-0.061298) | 0.089213 / 0.419271 (-0.330059) | 0.058313 / 0.043533 (0.014780) | 0.463674 / 0.255139 (0.208535) | 0.495865 / 0.283200 (0.212665) | 0.036586 / 0.141683 (-0.105096) | 1.801601 / 1.452155 (0.349447) | 1.871219 / 1.492716 (0.378502) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273411 / 0.018006 (0.255405) | 0.531745 / 0.000490 (0.531255) | 0.000424 / 0.000200 (0.000224) | 0.000130 / 0.000054 (0.000076) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037689 / 0.037411 (0.000278) | 0.109544 / 0.014526 (0.095019) | 0.124053 / 0.176557 (-0.052504) | 0.179960 / 0.737135 (-0.557175) | 0.118218 / 0.296338 (-0.178120) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.639859 / 0.215209 (0.424650) | 6.347385 / 2.077655 (4.269730) | 2.910188 / 1.504120 (1.406068) | 2.698821 / 1.541195 (1.157626) | 2.802652 / 1.468490 (1.334161) | 0.816109 / 4.584777 (-3.768668) | 5.190313 / 3.745712 (1.444601) | 4.642684 / 5.269862 (-0.627178) | 2.948092 / 4.565676 (-1.617584) | 0.095877 / 0.424275 (-0.328398) | 0.009631 / 0.007607 (0.002024) | 0.779136 / 0.226044 (0.553091) | 7.611586 / 2.268929 (5.342658) | 3.760804 / 55.444624 (-51.683820) | 3.139355 / 6.876477 (-3.737122) | 3.419660 / 2.142072 (1.277587) | 1.036397 / 4.805227 (-3.768831) | 0.224015 / 6.500664 (-6.276649) | 0.084037 / 0.075469 (0.008568) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.710608 / 1.841788 (-0.131179) | 24.447646 / 8.074308 (16.373338) | 21.345322 / 10.191392 (11.153930) | 0.232383 / 0.680424 (-0.448040) | 0.026381 / 0.534201 (-0.507820) | 0.475995 / 0.579283 (-0.103289) | 0.611939 / 0.434364 (0.177575) | 0.541441 / 0.540337 (0.001104) | 0.742796 / 1.386936 (-0.644140) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7929929525e734f7232cfc68d1d22fb8d53c54a3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006140 / 0.011353 (-0.005213) | 0.003664 / 0.011008 (-0.007344) | 0.080765 / 0.038508 (0.042257) | 0.065009 / 0.023109 (0.041900) | 0.312787 / 0.275898 (0.036889) | 0.354637 / 0.323480 (0.031157) | 0.004846 / 0.007986 (-0.003140) | 0.003019 / 0.004328 (-0.001310) | 0.062823 / 0.004250 (0.058573) | 0.050446 / 0.037052 (0.013394) | 0.314478 / 0.258489 (0.055989) | 0.360206 / 0.293841 (0.066365) | 0.027282 / 0.128546 (-0.101265) | 0.008024 / 0.075646 (-0.067622) | 0.262125 / 0.419271 (-0.157146) | 0.045793 / 0.043533 (0.002260) | 0.310508 / 0.255139 (0.055369) | 0.340899 / 0.283200 (0.057699) | 0.021850 / 0.141683 (-0.119833) | 1.510791 / 1.452155 (0.058636) | 1.570661 / 1.492716 (0.077944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192136 / 0.018006 (0.174130) | 0.449310 / 0.000490 (0.448820) | 0.004556 / 0.000200 (0.004356) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023689 / 0.037411 (-0.013722) | 0.076316 / 0.014526 (0.061791) | 0.084800 / 0.176557 (-0.091757) | 0.153154 / 0.737135 (-0.583981) | 0.086467 / 0.296338 (-0.209871) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432254 / 0.215209 (0.217045) | 4.305098 / 2.077655 (2.227443) | 2.304267 / 1.504120 (0.800147) | 2.139503 / 1.541195 (0.598309) | 2.220414 / 1.468490 (0.751924) | 0.498595 / 4.584777 (-4.086182) | 3.058593 / 3.745712 (-0.687119) | 4.324501 / 5.269862 (-0.945361) | 2.667731 / 4.565676 (-1.897946) | 0.059917 / 0.424275 (-0.364358) | 0.006829 / 0.007607 (-0.000778) | 0.504608 / 0.226044 (0.278564) | 5.044480 / 2.268929 (2.775552) | 2.753080 / 55.444624 (-52.691545) | 2.449265 / 6.876477 (-4.427212) | 2.635113 / 2.142072 (0.493040) | 0.590760 / 4.805227 (-4.214467) | 0.130133 / 6.500664 (-6.370532) | 0.062759 / 0.075469 (-0.012710) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267014 / 1.841788 (-0.574773) | 18.562890 / 8.074308 (10.488581) | 13.991257 / 10.191392 (3.799865) | 0.147108 / 0.680424 (-0.533315) | 0.017216 / 0.534201 (-0.516985) | 0.330317 / 0.579283 (-0.248966) | 0.351328 / 0.434364 (-0.083036) | 0.381097 / 0.540337 
(-0.159241) | 0.558718 / 1.386936 (-0.828218) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006385 / 0.011353 (-0.004967) | 0.003668 / 0.011008 (-0.007340) | 0.062581 / 0.038508 (0.024073) | 0.067006 / 0.023109 (0.043896) | 0.428465 / 0.275898 (0.152567) | 0.466106 / 0.323480 (0.142626) | 0.005806 / 0.007986 (-0.002180) | 0.003117 / 0.004328 (-0.001212) | 0.063554 / 0.004250 (0.059303) | 0.054404 / 0.037052 (0.017352) | 0.431168 / 0.258489 (0.172679) | 0.467578 / 0.293841 (0.173737) | 0.027779 / 0.128546 (-0.100767) | 0.008055 / 0.075646 (-0.067592) | 0.067718 / 0.419271 (-0.351554) | 0.043042 / 0.043533 (-0.000491) | 0.425926 / 0.255139 (0.170787) | 0.453699 / 0.283200 (0.170500) | 0.023495 / 0.141683 (-0.118187) | 1.435356 / 1.452155 (-0.016799) | 1.509340 / 1.492716 (0.016624) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242322 / 0.018006 (0.224316) | 0.446865 / 0.000490 (0.446376) | 0.001079 / 0.000200 (0.000879) | 0.000065 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025376 / 0.037411 (-0.012035) | 0.079373 / 0.014526 (0.064847) | 0.088554 / 0.176557 (-0.088002) | 0.141026 / 0.737135 (-0.596109) | 0.090666 / 0.296338 (-0.205672) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434800 / 0.215209 (0.219590) | 4.314491 / 2.077655 (2.236836) | 2.320688 / 1.504120 (0.816568) | 2.163941 / 1.541195 (0.622747) | 
2.292576 / 1.468490 (0.824086) | 0.500226 / 4.584777 (-4.084551) | 3.114604 / 3.745712 (-0.631108) | 4.206997 / 5.269862 (-1.062864) | 2.461126 / 4.565676 (-2.104551) | 0.057717 / 0.424275 (-0.366558) | 0.006989 / 0.007607 (-0.000618) | 0.515623 / 0.226044 (0.289579) | 5.155301 / 2.268929 (2.886372) | 2.733589 / 55.444624 (-52.711035) | 2.542111 / 6.876477 (-4.334366) | 2.697035 / 2.142072 (0.554963) | 0.594213 / 4.805227 (-4.211014) | 0.128537 / 6.500664 (-6.372127) | 0.065223 / 0.075469 (-0.010246) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306738 / 1.841788 (-0.535050) | 19.065370 / 8.074308 (10.991062) | 14.242096 / 10.191392 (4.050704) | 0.146177 / 0.680424 (-0.534246) | 0.017186 / 0.534201 (-0.517015) | 0.337224 / 0.579283 (-0.242059) | 0.349997 / 0.434364 (-0.084367) | 0.390408 / 0.540337 (-0.149930) | 0.524597 / 1.386936 (-0.862339) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#69ec36948b0ef1f194e9dcd43ec53a50b7708962 \"CML watermark\")\n" ]
2023-07-28T14:08:37
2023-07-31T05:26:32
2023-07-31T05:17:38
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6095", "html_url": "https://github.com/huggingface/datasets/pull/6095", "diff_url": "https://github.com/huggingface/datasets/pull/6095.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6095.patch", "merged_at": "2023-07-31T05:17:38" }
This PR fixes an issue with the deprecation of `errors` in `TextConfig` introduced by: - #5974 ```python In [1]: ds = load_dataset("text", data_files="test.txt", errors="strict") --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-13-701c27131a5d> in <module> ----> 1 ds = load_dataset("text", data_files="test.txt", errors="strict") ~/huggingface/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2107 2108 # Create a dataset builder -> 2109 builder_instance = load_dataset_builder( 2110 path=path, 2111 name=name, ~/huggingface/datasets/src/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs) 1830 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=dataset_name) 1831 # Instantiate the dataset builder -> 1832 builder_instance: DatasetBuilder = builder_cls( 1833 cache_dir=cache_dir, 1834 dataset_name=dataset_name, ~/huggingface/datasets/src/datasets/builder.py in __init__(self, cache_dir, dataset_name, config_name, hash, base_path, info, features, token, use_auth_token, repo_id, data_files, data_dir, storage_options, writer_batch_size, name, **config_kwargs) 371 if data_dir is not None: 372 config_kwargs["data_dir"] = data_dir --> 373 self.config, self.config_id = self._create_builder_config( 374 config_name=config_name, 375 custom_features=features, ~/huggingface/datasets/src/datasets/builder.py in _create_builder_config(self, config_name, custom_features, **config_kwargs) 550 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION: 551 config_kwargs["version"] = self.VERSION --> 552 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs) 553 554 # otherwise use the config_kwargs to overwrite the attributes TypeError: __init__() got an unexpected keyword argument 'errors' ``` Similar to: - #6094
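For illustration, a minimal sketch of the usual deprecation pattern such a fix follows: keep accepting the old `errors` kwarg, warn, and forward it to the new field. This is a hypothetical standalone dataclass, not the actual patch to `TextConfig`:

```python
import warnings
from dataclasses import dataclass


@dataclass
class TextConfigSketch:
    encoding: str = "utf-8"
    encoding_errors: str = "strict"
    errors: str = "deprecated"  # sentinel value meaning "not passed"

    def __post_init__(self):
        if self.errors != "deprecated":
            warnings.warn(
                "'errors' is deprecated; use 'encoding_errors' instead",
                FutureWarning,
            )
            self.encoding_errors = self.errors


config = TextConfigSketch(errors="strict")  # warns, then forwards the value
print(config.encoding_errors)  # 'strict'
```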
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6095/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6094
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6094/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6094/comments
https://api.github.com/repos/huggingface/datasets/issues/6094/events
https://github.com/huggingface/datasets/pull/6094
1,826,293,414
PR_kwDODunzps5WpdpA
6,094
Fix deprecation of use_auth_token in DownloadConfig
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008996 / 0.011353 (-0.002357) | 0.004976 / 0.011008 (-0.006033) | 0.114495 / 0.038508 (0.075987) | 0.083958 / 0.023109 (0.060849) | 0.408395 / 0.275898 (0.132497) | 0.456757 / 0.323480 (0.133278) | 0.006396 / 0.007986 (-0.001589) | 0.004315 / 0.004328 (-0.000014) | 0.093558 / 0.004250 (0.089307) | 0.062067 / 0.037052 (0.025014) | 0.423452 / 0.258489 (0.164963) | 0.463947 / 0.293841 (0.170106) | 0.049934 / 0.128546 (-0.078613) | 0.013937 / 0.075646 (-0.061709) | 0.365809 / 0.419271 (-0.053463) | 0.067382 / 0.043533 (0.023849) | 0.418860 / 0.255139 (0.163721) | 0.463264 / 0.283200 (0.180065) | 0.034392 / 0.141683 (-0.107291) | 1.870685 / 1.452155 (0.418530) | 1.975313 / 1.492716 (0.482597) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261748 / 0.018006 (0.243742) | 0.645510 / 0.000490 (0.645020) | 0.000376 / 0.000200 (0.000176) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032129 / 0.037411 (-0.005282) | 0.104309 / 0.014526 (0.089783) | 0.113154 / 0.176557 (-0.063403) | 0.186795 / 0.737135 (-0.550341) | 0.115584 / 0.296338 (-0.180755) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.577755 / 0.215209 (0.362546) | 5.984988 / 2.077655 (3.907333) | 
2.581967 / 1.504120 (1.077848) | 2.305744 / 1.541195 (0.764549) | 2.359618 / 1.468490 (0.891128) | 0.882892 / 4.584777 (-3.701885) | 5.755578 / 3.745712 (2.009866) | 8.718373 / 5.269862 (3.448511) | 5.217586 / 4.565676 (0.651909) | 0.099785 / 0.424275 (-0.324490) | 0.009008 / 0.007607 (0.001401) | 0.730937 / 0.226044 (0.504892) | 7.265309 / 2.268929 (4.996381) | 3.487167 / 55.444624 (-51.957457) | 2.750090 / 6.876477 (-4.126386) | 3.060198 / 2.142072 (0.918125) | 1.069945 / 4.805227 (-3.735282) | 0.227143 / 6.500664 (-6.273521) | 0.083601 / 0.075469 (0.008132) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.754375 / 1.841788 (-0.087412) | 25.448731 / 8.074308 (17.374423) | 22.385943 / 10.191392 (12.194551) | 0.249921 / 0.680424 (-0.430503) | 0.034138 / 0.534201 (-0.500063) | 0.535170 / 0.579283 (-0.044113) | 0.605474 / 0.434364 (0.171110) | 0.580025 / 0.540337 (0.039688) | 0.810537 / 1.386936 (-0.576399) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009117 / 0.011353 (-0.002236) | 0.005029 / 0.011008 (-0.005979) | 0.082200 / 0.038508 (0.043691) | 0.082386 / 0.023109 (0.059277) | 0.491869 / 0.275898 (0.215971) | 0.546735 / 0.323480 (0.223255) | 0.006893 / 0.007986 (-0.001093) | 0.004571 / 0.004328 (0.000243) | 0.085361 / 0.004250 (0.081111) | 0.063342 / 0.037052 (0.026290) | 0.522522 / 0.258489 (0.264033) | 0.560784 / 0.293841 (0.266943) | 0.047685 / 0.128546 (-0.080861) | 0.017741 / 0.075646 (-0.057905) | 0.098204 / 0.419271 (-0.321067) | 0.062919 / 0.043533 (0.019386) | 0.504005 / 0.255139 (0.248866) | 0.547022 / 0.283200 (0.263823) | 0.033731 / 0.141683 (-0.107952) | 1.869765 / 1.452155 (0.417610) | 1.935867 / 1.492716 (0.443151) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.304756 / 0.018006 (0.286750) | 0.623647 / 0.000490 (0.623157) | 0.000508 / 0.000200 (0.000308) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.043627 / 0.037411 (0.006216) | 0.107183 / 0.014526 (0.092657) | 0.119304 / 0.176557 (-0.057253) | 0.192651 / 0.737135 (-0.544485) | 0.125118 / 0.296338 (-0.171221) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.669980 / 0.215209 (0.454771) | 6.566068 / 2.077655 (4.488413) | 3.136271 / 1.504120 (1.632152) | 2.964643 / 1.541195 (1.423448) | 2.936772 / 1.468490 (1.468282) | 0.885205 / 4.584777 (-3.699572) | 5.539062 / 3.745712 (1.793350) | 5.006133 / 5.269862 (-0.263729) | 3.313697 / 4.565676 (-1.251979) | 0.102975 / 0.424275 (-0.321301) | 0.010759 / 0.007607 (0.003152) | 0.791176 / 0.226044 (0.565132) | 7.822195 / 2.268929 (5.553266) | 3.982315 / 55.444624 (-51.462309) | 3.357026 / 6.876477 (-3.519451) | 3.561307 / 2.142072 (1.419234) | 1.056966 / 4.805227 (-3.748261) | 0.220476 / 6.500664 (-6.280188) | 0.090535 / 0.075469 (0.015066) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.897984 / 1.841788 (0.056196) | 26.411411 / 8.074308 (18.337103) | 22.951939 / 10.191392 (12.760547) | 0.216091 / 0.680424 (-0.464333) | 0.037005 / 0.534201 (-0.497196) | 0.505585 / 0.579283 (-0.073698) | 0.617794 / 0.434364 (0.183430) | 0.604631 / 0.540337 (0.064293) | 0.826356 / 1.386936 (-0.560580) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ca6342c0177adc3a1d114740444e207b8525ed6e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006850 / 0.011353 (-0.004503) | 0.004062 / 0.011008 (-0.006947) | 0.086587 / 0.038508 (0.048079) | 0.079587 / 0.023109 (0.056478) | 0.353601 / 0.275898 (0.077702) | 0.396399 / 0.323480 (0.072919) | 0.004182 / 0.007986 (-0.003804) | 0.004445 / 0.004328 (0.000117) | 0.065100 / 0.004250 (0.060849) | 0.057386 / 0.037052 (0.020334) | 0.356945 / 0.258489 (0.098456) | 0.407093 / 0.293841 (0.113252) | 0.031949 / 0.128546 (-0.096597) | 0.008525 / 0.075646 (-0.067121) | 0.291310 / 0.419271 (-0.127961) | 0.053638 / 0.043533 (0.010105) | 0.359381 / 0.255139 (0.104242) | 0.399473 / 0.283200 (0.116273) | 0.025880 / 0.141683 (-0.115803) | 1.487604 / 1.452155 (0.035449) | 1.550528 / 1.492716 (0.057812) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201106 / 0.018006 (0.183099) | 0.457538 / 0.000490 (0.457048) | 0.003995 / 0.000200 (0.003795) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030365 / 0.037411 (-0.007046) | 0.088064 / 0.014526 (0.073538) | 0.096432 / 0.176557 (-0.080124) | 0.158063 / 0.737135 (-0.579072) | 0.098258 / 0.296338 (-0.198080) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405351 / 0.215209 (0.190142) | 4.032639 / 2.077655 (1.954984) | 2.018357 / 1.504120 (0.514237) | 1.848493 / 1.541195 (0.307298) | 1.929401 / 1.468490 (0.460910) | 0.488729 / 4.584777 (-4.096048) | 3.586114 / 3.745712 (-0.159598) | 5.279054 / 5.269862 (0.009193) | 3.113275 / 4.565676 (-1.452402) | 0.057373 / 0.424275 (-0.366902) | 0.007416 / 0.007607 (-0.000191) | 0.485514 / 0.226044 (0.259470) | 4.854389 / 2.268929 (2.585461) | 2.493113 / 55.444624 (-52.951512) | 2.128836 / 6.876477 (-4.747641) | 2.383669 / 2.142072 (0.241596) | 0.588266 / 4.805227 (-4.216962) | 0.133603 / 6.500664 (-6.367061) | 0.061812 / 0.075469 (-0.013657) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260841 / 1.841788 (-0.580947) | 20.086954 / 8.074308 (12.012646) | 14.620932 / 10.191392 (4.429540) | 0.161525 / 0.680424 (-0.518899) | 0.018102 / 0.534201 (-0.516099) | 0.393810 / 0.579283 (-0.185473) | 0.406974 / 0.434364 (-0.027390) | 0.462732 / 0.540337 
(-0.077606) | 0.634221 / 1.386936 (-0.752715) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006692 / 0.011353 (-0.004661) | 0.004068 / 0.011008 (-0.006940) | 0.068009 / 0.038508 (0.029501) | 0.081298 / 0.023109 (0.058189) | 0.363531 / 0.275898 (0.087633) | 0.408482 / 0.323480 (0.085002) | 0.005601 / 0.007986 (-0.002384) | 0.003385 / 0.004328 (-0.000943) | 0.068043 / 0.004250 (0.063792) | 0.059739 / 0.037052 (0.022687) | 0.374043 / 0.258489 (0.115553) | 0.407219 / 0.293841 (0.113378) | 0.031194 / 0.128546 (-0.097352) | 0.008630 / 0.075646 (-0.067017) | 0.073755 / 0.419271 (-0.345517) | 0.049831 / 0.043533 (0.006298) | 0.363664 / 0.255139 (0.108525) | 0.381515 / 0.283200 (0.098315) | 0.026331 / 0.141683 (-0.115352) | 1.507771 / 1.452155 (0.055617) | 1.554403 / 1.492716 (0.061686) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226309 / 0.018006 (0.208302) | 0.452428 / 0.000490 (0.451938) | 0.000937 / 0.000200 (0.000737) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031899 / 0.037411 (-0.005513) | 0.092090 / 0.014526 (0.077564) | 0.100838 / 0.176557 (-0.075718) | 0.153722 / 0.737135 (-0.583413) | 0.101950 / 0.296338 (-0.194389) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417879 / 0.215209 (0.202669) | 4.171939 / 2.077655 (2.094284) | 2.312937 / 1.504120 (0.808817) | 2.209991 / 1.541195 (0.668796) | 2.329469 
/ 1.468490 (0.860979) | 0.484576 / 4.584777 (-4.100201) | 3.659198 / 3.745712 (-0.086514) | 5.255227 / 5.269862 (-0.014634) | 3.047430 / 4.565676 (-1.518247) | 0.057029 / 0.424275 (-0.367246) | 0.007735 / 0.007607 (0.000127) | 0.499962 / 0.226044 (0.273918) | 4.991655 / 2.268929 (2.722727) | 2.755999 / 55.444624 (-52.688625) | 2.374034 / 6.876477 (-4.502443) | 2.599759 / 2.142072 (0.457687) | 0.600319 / 4.805227 (-4.204908) | 0.146176 / 6.500664 (-6.354488) | 0.062328 / 0.075469 (-0.013142) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.346065 / 1.841788 (-0.495722) | 20.430343 / 8.074308 (12.356035) | 14.632959 / 10.191392 (4.441567) | 0.167007 / 0.680424 (-0.513417) | 0.018588 / 0.534201 (-0.515613) | 0.396015 / 0.579283 (-0.183268) | 0.429384 / 0.434364 (-0.004980) | 0.467746 / 0.540337 (-0.072591) | 0.615166 / 1.386936 (-0.771770) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#289bcc2ae9bf98c9414b6846ae603178a1816d3f \"CML watermark\")\n" ]
2023-07-28T11:52:21
2023-07-31T05:08:41
2023-07-31T04:59:50
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6094", "html_url": "https://github.com/huggingface/datasets/pull/6094", "diff_url": "https://github.com/huggingface/datasets/pull/6094.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6094.patch", "merged_at": "2023-07-31T04:59:50" }
This PR fixes an issue with the deprecation of `use_auth_token` in `DownloadConfig` introduced by:
- #5996

```python
In [1]: from datasets import DownloadConfig

In [2]: DownloadConfig(use_auth_token=False)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-3-41927b449e72> in <module>
----> 1 DownloadConfig(use_auth_token=False)

TypeError: __init__() got an unexpected keyword argument 'use_auth_token'
```

```python
In [1]: from datasets import get_dataset_config_names

In [2]: get_dataset_config_names("squad", use_auth_token=False)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-22-4671992ead50> in <module>
----> 1 get_dataset_config_names("squad", use_auth_token=False)

~/huggingface/datasets/src/datasets/inspect.py in get_dataset_config_names(path, revision, download_config, download_mode, dynamic_modules_path, data_files, **download_kwargs)
    349         ```
    350         """
--> 351     dataset_module = dataset_module_factory(
    352         path,
    353         revision=revision,

~/huggingface/datasets/src/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
   1374     """
   1375     if download_config is None:
-> 1376         download_config = DownloadConfig(**download_kwargs)
   1377     download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS)
   1378     download_config.extract_compressed_file = True

TypeError: __init__() got an unexpected keyword argument 'use_auth_token'
```
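The same pattern applies here: instead of dropping the old keyword outright, the config can keep accepting it and forward it to `token`. A minimal sketch, assuming a dataclass-based `DownloadConfig` with the field list abridged (not the library's actual definition):

```python
import warnings
from dataclasses import dataclass
from typing import Optional, Union


@dataclass
class DownloadConfig:
    # Illustrative subset of the real config's fields.
    token: Optional[Union[str, bool]] = None
    # Sentinel default distinguishes "not passed" from an explicit value.
    use_auth_token: Optional[Union[str, bool]] = "deprecated"

    def __post_init__(self):
        if self.use_auth_token != "deprecated":
            warnings.warn(
                "'use_auth_token' is deprecated in favor of 'token'.",
                FutureWarning,
            )
            # Remap the old keyword onto the new field.
            self.token = self.use_auth_token
            self.use_auth_token = "deprecated"
```

Call sites that build the config from forwarded kwargs, such as `DownloadConfig(**download_kwargs)` in `dataset_module_factory`, then keep working for callers that still pass `use_auth_token`.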
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6094/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6093/comments
https://api.github.com/repos/huggingface/datasets/issues/6093/events
https://github.com/huggingface/datasets/pull/6093
1,826,210,490
PR_kwDODunzps5WpLfh
6,093
Deprecate `download_custom`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007498 / 0.011353 (-0.003855) | 0.004158 / 0.011008 (-0.006850) | 0.087568 / 0.038508 (0.049060) | 0.083265 / 0.023109 (0.060156) | 0.378505 / 0.275898 (0.102607) | 0.399025 / 0.323480 (0.075545) | 0.006173 / 0.007986 (-0.001813) | 0.003743 / 0.004328 (-0.000586) | 0.071958 / 0.004250 (0.067707) | 0.059323 / 0.037052 (0.022271) | 0.377084 / 0.258489 (0.118595) | 0.408358 / 0.293841 (0.114517) | 0.035191 / 0.128546 (-0.093356) | 0.009408 / 0.075646 (-0.066238) | 0.312587 / 0.419271 (-0.106685) | 0.058073 / 0.043533 (0.014540) | 0.381977 / 0.255139 (0.126838) | 0.395611 / 0.283200 (0.112411) | 0.024191 / 0.141683 (-0.117491) | 1.572735 / 1.452155 (0.120581) | 1.687186 / 1.492716 (0.194470) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208886 / 0.018006 (0.190879) | 0.474625 / 0.000490 (0.474135) | 0.006261 / 0.000200 (0.006061) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031401 / 0.037411 (-0.006011) | 0.086433 / 0.014526 (0.071907) | 0.108405 / 0.176557 (-0.068152) | 0.174564 / 0.737135 (-0.562571) | 0.099932 / 0.296338 (-0.196407) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407059 / 0.215209 (0.191850) | 4.102056 / 2.077655 (2.024401) | 
1.975397 / 1.504120 (0.471277) | 1.807117 / 1.541195 (0.265922) | 1.908667 / 1.468490 (0.440177) | 0.525880 / 4.584777 (-4.058897) | 3.899639 / 3.745712 (0.153927) | 4.358664 / 5.269862 (-0.911198) | 2.586185 / 4.565676 (-1.979492) | 0.061967 / 0.424275 (-0.362308) | 0.007656 / 0.007607 (0.000049) | 0.504851 / 0.226044 (0.278807) | 5.004429 / 2.268929 (2.735500) | 2.515540 / 55.444624 (-52.929084) | 2.183142 / 6.876477 (-4.693334) | 2.369835 / 2.142072 (0.227763) | 0.623527 / 4.805227 (-4.181700) | 0.145105 / 6.500664 (-6.355559) | 0.063924 / 0.075469 (-0.011546) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.472661 / 1.841788 (-0.369126) | 21.781655 / 8.074308 (13.707347) | 15.628820 / 10.191392 (5.437428) | 0.182342 / 0.680424 (-0.498082) | 0.021139 / 0.534201 (-0.513062) | 0.438610 / 0.579283 (-0.140673) | 0.451343 / 0.434364 (0.016979) | 0.563320 / 0.540337 (0.022983) | 0.740976 / 1.386936 (-0.645960) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007492 / 0.011353 (-0.003861) | 0.004429 / 0.011008 (-0.006579) | 0.068517 / 0.038508 (0.030008) | 0.078533 / 0.023109 (0.055424) | 0.383530 / 0.275898 (0.107632) | 0.435061 / 0.323480 (0.111581) | 0.005955 / 0.007986 (-0.002030) | 0.003645 / 0.004328 (-0.000683) | 0.068792 / 0.004250 (0.064541) | 0.062452 / 0.037052 (0.025399) | 0.408768 / 0.258489 (0.150279) | 0.438538 / 0.293841 (0.144697) | 0.032038 / 0.128546 (-0.096508) | 0.009196 / 0.075646 (-0.066450) | 0.074495 / 0.419271 (-0.344776) | 0.051322 / 0.043533 (0.007789) | 0.394458 / 0.255139 (0.139319) | 0.424763 / 0.283200 (0.141564) | 0.024890 / 0.141683 (-0.116793) | 1.568322 / 1.452155 (0.116167) | 1.703903 / 1.492716 (0.211187) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249630 / 0.018006 (0.231624) | 0.471412 / 0.000490 (0.470923) | 0.000435 / 0.000200 (0.000235) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033054 / 0.037411 (-0.004358) | 0.100150 / 0.014526 (0.085624) | 0.101704 / 0.176557 (-0.074853) | 0.164031 / 0.737135 (-0.573104) | 0.112497 / 0.296338 (-0.183841) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.487150 / 0.215209 (0.271941) | 4.662335 / 2.077655 (2.584681) | 2.477285 / 1.504120 (0.973165) | 2.294033 / 1.541195 (0.752838) | 2.380143 / 1.468490 (0.911653) | 0.519182 / 4.584777 (-4.065595) | 3.983589 / 3.745712 (0.237877) | 3.669895 / 5.269862 (-1.599967) | 2.267147 / 4.565676 (-2.298529) | 0.063300 / 0.424275 (-0.360975) | 0.008839 / 0.007607 (0.001232) | 0.566766 / 0.226044 (0.340721) | 5.533475 / 2.268929 (3.264546) | 3.033412 / 55.444624 (-52.411212) | 2.701793 / 6.876477 (-4.174684) | 2.899444 / 2.142072 (0.757372) | 0.614236 / 4.805227 (-4.190991) | 0.139533 / 6.500664 (-6.361131) | 0.067537 / 0.075469 (-0.007932) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505572 / 1.841788 (-0.336216) | 22.859062 / 8.074308 (14.784754) | 15.044777 / 10.191392 (4.853385) | 0.169153 / 0.680424 (-0.511271) | 0.021027 / 0.534201 (-0.513174) | 0.447979 / 0.579283 (-0.131304) | 0.460676 / 0.434364 (0.026312) | 0.506327 / 0.540337 (-0.034010) | 0.737880 / 1.386936 (-0.649057) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db7180eb7e3ebf52b9d1f2c6629db6d92d8a29ba \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006118 / 0.011353 (-0.005235) | 0.003692 / 0.011008 (-0.007316) | 0.080606 / 0.038508 (0.042098) | 0.062014 / 0.023109 (0.038905) | 0.391886 / 0.275898 (0.115988) | 0.423978 / 0.323480 (0.100498) | 0.004968 / 0.007986 (-0.003017) | 0.002911 / 0.004328 (-0.001417) | 0.062867 / 0.004250 (0.058617) | 0.049493 / 0.037052 (0.012441) | 0.395656 / 0.258489 (0.137167) | 0.432406 / 0.293841 (0.138565) | 0.027242 / 0.128546 (-0.101304) | 0.007938 / 0.075646 (-0.067709) | 0.261703 / 0.419271 (-0.157569) | 0.045922 / 0.043533 (0.002389) | 0.391544 / 0.255139 (0.136405) | 0.417902 / 0.283200 (0.134703) | 0.021339 / 0.141683 (-0.120344) | 1.508391 / 1.452155 (0.056236) | 1.518970 / 1.492716 (0.026254) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.181159 / 0.018006 (0.163153) | 0.431402 / 0.000490 (0.430912) | 0.003849 / 0.000200 (0.003649) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024498 / 0.037411 (-0.012914) | 0.072758 / 0.014526 (0.058233) | 0.084910 / 0.176557 (-0.091646) | 0.148314 / 0.737135 (-0.588821) | 0.085212 / 0.296338 (-0.211126) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386693 / 0.215209 (0.171484) | 3.852652 / 2.077655 (1.774997) | 1.891758 / 1.504120 (0.387638) | 1.718793 / 1.541195 (0.177598) | 1.747595 / 1.468490 (0.279104) | 0.498593 / 4.584777 (-4.086184) | 3.057907 / 3.745712 (-0.687805) | 4.728449 / 5.269862 (-0.541413) | 2.966368 / 4.565676 (-1.599308) | 0.057538 / 0.424275 (-0.366737) | 0.006415 / 0.007607 (-0.001192) | 0.461652 / 0.226044 (0.235608) | 4.625944 / 2.268929 (2.357015) | 2.306938 / 55.444624 (-53.137686) | 1.974670 / 6.876477 (-4.901806) | 2.146327 / 2.142072 (0.004254) | 0.585033 / 4.805227 (-4.220195) | 0.125936 / 6.500664 (-6.374728) | 0.062365 / 0.075469 (-0.013104) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263415 / 1.841788 (-0.578373) | 18.380651 / 8.074308 (10.306343) | 13.853410 / 10.191392 (3.662018) | 0.144674 / 0.680424 (-0.535749) | 0.016833 / 0.534201 (-0.517368) | 0.330812 / 0.579283 (-0.248471) | 0.357553 / 0.434364 (-0.076810) | 0.383529 / 0.540337 
(-0.156809) | 0.558923 / 1.386936 (-0.828013) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006074 / 0.011353 (-0.005278) | 0.003655 / 0.011008 (-0.007353) | 0.062981 / 0.038508 (0.024473) | 0.061457 / 0.023109 (0.038348) | 0.366471 / 0.275898 (0.090573) | 0.408463 / 0.323480 (0.084983) | 0.004854 / 0.007986 (-0.003132) | 0.002916 / 0.004328 (-0.001412) | 0.062745 / 0.004250 (0.058494) | 0.051136 / 0.037052 (0.014084) | 0.380313 / 0.258489 (0.121824) | 0.416945 / 0.293841 (0.123104) | 0.027228 / 0.128546 (-0.101318) | 0.008031 / 0.075646 (-0.067615) | 0.067941 / 0.419271 (-0.351331) | 0.042886 / 0.043533 (-0.000647) | 0.370112 / 0.255139 (0.114973) | 0.397111 / 0.283200 (0.113911) | 0.023063 / 0.141683 (-0.118620) | 1.476955 / 1.452155 (0.024800) | 1.534783 / 1.492716 (0.042066) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231462 / 0.018006 (0.213456) | 0.439559 / 0.000490 (0.439069) | 0.000364 / 0.000200 (0.000164) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026925 / 0.037411 (-0.010486) | 0.079623 / 0.014526 (0.065097) | 0.088694 / 0.176557 (-0.087862) | 0.143163 / 0.737135 (-0.593972) | 0.089900 / 0.296338 (-0.206438) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451429 / 0.215209 (0.236220) | 4.510723 / 2.077655 (2.433069) | 2.491853 / 1.504120 (0.987733) | 2.334670 / 1.541195 (0.793475) | 2.395519 
/ 1.468490 (0.927029) | 0.501369 / 4.584777 (-4.083408) | 3.014019 / 3.745712 (-0.731693) | 2.809199 / 5.269862 (-2.460662) | 1.842195 / 4.565676 (-2.723481) | 0.057675 / 0.424275 (-0.366600) | 0.006742 / 0.007607 (-0.000865) | 0.524402 / 0.226044 (0.298358) | 5.245296 / 2.268929 (2.976367) | 2.957990 / 55.444624 (-52.486634) | 2.649807 / 6.876477 (-4.226670) | 2.755909 / 2.142072 (0.613836) | 0.589610 / 4.805227 (-4.215617) | 0.125708 / 6.500664 (-6.374956) | 0.062237 / 0.075469 (-0.013232) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.362758 / 1.841788 (-0.479030) | 18.343694 / 8.074308 (10.269386) | 13.621521 / 10.191392 (3.430129) | 0.128866 / 0.680424 (-0.551558) | 0.016608 / 0.534201 (-0.517593) | 0.333071 / 0.579283 (-0.246212) | 0.341917 / 0.434364 (-0.092447) | 0.381075 / 0.540337 (-0.159263) | 0.512485 / 1.386936 (-0.874451) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ab3f0165d4a2a8ab1aee1ebc4628893e17e27387 \"CML watermark\")\n", "I forgot to mention this in the initial comment, but only one public dataset (excluding gated) uses this method - `pg19`, which I just fixed.\r\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007838 / 0.011353 (-0.003515) | 0.004791 / 0.011008 (-0.006217) | 0.102596 / 0.038508 (0.064088) | 0.087678 / 0.023109 (0.064569) | 0.373858 / 0.275898 (0.097960) | 0.416643 / 0.323480 (0.093163) | 0.006147 / 0.007986 (-0.001839) | 0.003837 / 0.004328 (-0.000491) | 0.076706 / 0.004250 (0.072456) | 0.063449 / 0.037052 (0.026396) | 0.378392 / 0.258489 (0.119903) | 0.431768 / 0.293841 (0.137927) | 0.036648 / 0.128546 (-0.091898) | 0.010042 / 0.075646 (-0.065604) | 0.350277 / 0.419271 (-0.068995) | 0.062892 / 0.043533 (0.019359) | 0.376151 / 0.255139 (0.121012) | 0.420929 / 0.283200 (0.137729) | 0.027816 / 0.141683 (-0.113867) | 1.791607 / 1.452155 (0.339452) | 1.903045 / 1.492716 (0.410328) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row 
| get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224688 / 0.018006 (0.206682) | 0.491941 / 0.000490 (0.491451) | 0.004482 / 0.000200 (0.004282) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033495 / 0.037411 (-0.003917) | 0.099855 / 0.014526 (0.085329) | 0.114593 / 0.176557 (-0.061964) | 0.190947 / 0.737135 (-0.546189) | 0.116202 / 0.296338 (-0.180136) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488581 / 0.215209 (0.273372) | 4.869531 / 2.077655 (2.791876) | 2.527920 / 1.504120 (1.023800) | 2.340021 / 1.541195 (0.798826) | 2.432661 / 1.468490 (0.964171) | 0.569646 / 4.584777 (-4.015131) | 4.392036 / 3.745712 (0.646324) | 4.987253 / 5.269862 (-0.282608) | 2.866604 / 4.565676 (-1.699073) | 0.067393 / 0.424275 (-0.356882) | 0.008759 / 0.007607 (0.001152) | 0.584327 / 0.226044 (0.358283) | 5.853000 / 2.268929 (3.584072) | 3.206721 / 55.444624 (-52.237904) | 2.730867 / 6.876477 (-4.145610) | 2.944814 / 2.142072 (0.802742) | 0.703336 / 4.805227 (-4.101891) | 0.173985 / 6.500664 (-6.326679) | 0.075333 / 0.075469 (-0.000137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.519755 / 1.841788 (-0.322033) | 22.918038 / 8.074308 (14.843730) | 17.211160 / 10.191392 (7.019768) | 0.196941 / 0.680424 (-0.483483) | 0.021833 / 0.534201 (-0.512368) | 0.476835 / 0.579283 (-0.102448) | 0.464513 / 0.434364 (0.030149) | 0.559180 / 0.540337 (0.018843) | 0.748232 / 1.386936 (-0.638704) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008461 / 0.011353 (-0.002892) | 0.004799 / 0.011008 (-0.006209) | 0.077466 / 0.038508 (0.038958) | 0.103562 / 0.023109 (0.080453) | 0.453661 / 0.275898 (0.177763) | 0.531126 / 0.323480 (0.207647) | 0.006618 / 0.007986 (-0.001367) | 0.004048 / 0.004328 (-0.000280) | 0.075446 / 0.004250 (0.071196) | 0.072815 / 0.037052 (0.035762) | 0.497145 / 0.258489 (0.238656) | 0.533828 / 0.293841 (0.239987) | 0.037657 / 0.128546 (-0.090890) | 0.010139 / 0.075646 (-0.065507) | 0.083759 / 0.419271 (-0.335512) | 0.061401 / 0.043533 (0.017868) | 0.441785 / 0.255139 (0.186646) | 0.491678 / 0.283200 (0.208479) | 0.033100 / 0.141683 (-0.108583) | 1.753612 / 1.452155 (0.301458) | 1.838956 / 1.492716 (0.346240) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.395023 / 0.018006 (0.377017) | 0.509362 / 0.000490 (0.508872) | 0.060742 / 0.000200 (0.060542) | 0.000545 / 0.000054 (0.000491) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039327 / 0.037411 (0.001916) | 0.117345 / 0.014526 (0.102819) | 0.124540 / 0.176557 (-0.052017) | 0.200743 / 0.737135 (-0.536392) | 0.126750 / 0.296338 (-0.169589) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488597 / 0.215209 (0.273388) | 4.875534 / 2.077655 (2.797880) | 2.714364 / 1.504120 (1.210244) | 2.603707 / 1.541195 (1.062513) | 2.733547 / 1.468490 (1.265057) | 0.575183 / 4.584777 (-4.009594) | 4.126096 / 3.745712 (0.380384) | 3.853803 / 5.269862 (-1.416058) | 2.395160 / 4.565676 (-2.170516) | 0.067391 / 0.424275 (-0.356884) | 0.009108 / 0.007607 (0.001501) | 0.585865 / 0.226044 (0.359820) | 5.864878 / 2.268929 (3.595949) | 3.153369 / 55.444624 (-52.291256) | 2.759064 / 6.876477 (-4.117413) | 3.032489 / 2.142072 (0.890416) | 0.702615 / 4.805227 (-4.102613) | 0.160034 / 6.500664 (-6.340630) | 0.077294 / 0.075469 (0.001825) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.595069 / 1.841788 (-0.246719) | 23.231191 / 8.074308 (15.156883) | 16.365137 / 10.191392 (6.173745) | 0.188360 / 0.680424 (-0.492063) | 0.021704 / 0.534201 (-0.512497) | 0.469996 / 0.579283 (-0.109287) | 0.463255 / 
0.434364 (0.028891) | 0.560506 / 0.540337 (0.020169) | 0.751006 / 1.386936 (-0.635930) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#50d9a70c666ff46ff9974c47cedc77d9f88d6471 \"CML watermark\")\n", "@mariosasko How would you stream a split zip file with just [download_and_extract or download](https://github.com/huggingface/datasets/blob/main/src/datasets/download/download_manager.py#L353)? With download_custom, it is possible to combine a split zip file. Perhaps add an option in [download](https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/builder_classes#datasets.DownloadManager.download) to combine split zips. This issue may apply to other multipart file-types.\r\n\r\nEdit - \r\nIn case asked why I use split zips, I haven't been able to upload zips larger than 50 GB to HuggingFace.\r\n\r\nEdit2 -\r\nIssue is [tackled](https://discuss.huggingface.co/t/download-custom-method-of-streamingdownloadmanager-not-implemented/28298/8) for split zips. " ]
2023-07-28T10:49:06
2023-08-21T17:51:34
2023-07-28T11:30:02
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6093", "html_url": "https://github.com/huggingface/datasets/pull/6093", "diff_url": "https://github.com/huggingface/datasets/pull/6093.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6093.patch", "merged_at": "2023-07-28T11:30:02" }
Deprecate `DownloadManager.download_custom`. Users should pass `fsspec` URLs (cacheable) or make direct requests with `fsspec`/`requests` (not cacheable) instead. This method should be deprecated because it is not compatible with streaming, and implementing a streaming version of it is hard, if not impossible. There have been requests on the forum to implement a streaming version, but the demand seems to stem from a tip in the docs that "promotes" this method (this PR removes that tip).
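As a replacement for `download_custom`, a remote file can usually be opened directly with `fsspec`, which also works in streaming mode. A minimal sketch, using a hypothetical URL:

```python
import fsspec

# Hypothetical URL; any protocol fsspec supports (https, s3, gs, ...) works.
url = "https://example.com/data/archive.bin"

# Instead of writing a custom download function, open the remote file
# directly and process it as a stream -- no custom caching step needed.
with fsspec.open(url, "rb") as f:
    header = f.read(16)  # e.g., sniff a magic number
```

For cacheable access, the plain `download`/`download_and_extract` methods with a URL remain the recommended path, as the PR description notes.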
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6093/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6092
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6092/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6092/comments
https://api.github.com/repos/huggingface/datasets/issues/6092/events
https://github.com/huggingface/datasets/pull/6092
1,826,111,806
PR_kwDODunzps5Wo1mh
6,092
Minor fix in `iter_files` for hidden files
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007873 / 0.011353 (-0.003480) | 0.004585 / 0.011008 (-0.006423) | 0.101622 / 0.038508 (0.063114) | 0.092459 / 0.023109 (0.069350) | 0.365157 / 0.275898 (0.089259) | 0.405943 / 0.323480 (0.082463) | 0.006229 / 0.007986 (-0.001756) | 0.003811 / 0.004328 (-0.000518) | 0.073831 / 0.004250 (0.069580) | 0.065097 / 0.037052 (0.028045) | 0.378912 / 0.258489 (0.120423) | 0.422174 / 0.293841 (0.128333) | 0.036244 / 0.128546 (-0.092302) | 0.009677 / 0.075646 (-0.065970) | 0.345164 / 0.419271 (-0.074107) | 0.061632 / 0.043533 (0.018099) | 0.370350 / 0.255139 (0.115211) | 0.418245 / 0.283200 (0.135046) | 0.027272 / 0.141683 (-0.114411) | 1.774047 / 1.452155 (0.321892) | 1.880278 / 1.492716 (0.387562) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217238 / 0.018006 (0.199231) | 0.489560 / 0.000490 (0.489071) | 0.004013 / 0.000200 (0.003813) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034139 / 0.037411 (-0.003272) | 0.103831 / 0.014526 (0.089305) | 0.114353 / 0.176557 (-0.062204) | 0.182034 / 0.737135 (-0.555102) | 0.116171 / 0.296338 (-0.180168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448658 / 0.215209 (0.233449) | 4.520849 / 2.077655 (2.443195) | 
2.216121 / 1.504120 (0.712001) | 2.034596 / 1.541195 (0.493402) | 2.193216 / 1.468490 (0.724725) | 0.568166 / 4.584777 (-4.016611) | 4.133587 / 3.745712 (0.387875) | 4.641117 / 5.269862 (-0.628744) | 2.772913 / 4.565676 (-1.792764) | 0.067664 / 0.424275 (-0.356611) | 0.008719 / 0.007607 (0.001112) | 0.547723 / 0.226044 (0.321678) | 5.438325 / 2.268929 (3.169397) | 2.877667 / 55.444624 (-52.566958) | 2.477503 / 6.876477 (-4.398974) | 2.688209 / 2.142072 (0.546136) | 0.692593 / 4.805227 (-4.112634) | 0.154549 / 6.500664 (-6.346115) | 0.073286 / 0.075469 (-0.002183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.610927 / 1.841788 (-0.230861) | 23.413345 / 8.074308 (15.339037) | 16.851819 / 10.191392 (6.660427) | 0.170076 / 0.680424 (-0.510348) | 0.021428 / 0.534201 (-0.512773) | 0.468184 / 0.579283 (-0.111099) | 0.491820 / 0.434364 (0.057456) | 0.553453 / 0.540337 (0.013115) | 0.762303 / 1.386936 (-0.624633) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008033 / 0.011353 (-0.003320) | 0.004638 / 0.011008 (-0.006370) | 0.077044 / 0.038508 (0.038536) | 0.096529 / 0.023109 (0.073420) | 0.428735 / 0.275898 (0.152837) | 0.477303 / 0.323480 (0.153823) | 0.006040 / 0.007986 (-0.001946) | 0.003808 / 0.004328 (-0.000521) | 0.076042 / 0.004250 (0.071791) | 0.066123 / 0.037052 (0.029071) | 0.445482 / 0.258489 (0.186993) | 0.481350 / 0.293841 (0.187509) | 0.036951 / 0.128546 (-0.091595) | 0.009944 / 0.075646 (-0.065703) | 0.082731 / 0.419271 (-0.336541) | 0.057490 / 0.043533 (0.013958) | 0.432668 / 0.255139 (0.177529) | 0.461146 / 0.283200 (0.177947) | 0.027330 / 0.141683 (-0.114353) | 1.784195 / 1.452155 (0.332040) | 1.834776 / 1.492716 (0.342059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254104 / 0.018006 (0.236097) | 0.475810 / 0.000490 (0.475321) | 0.000459 / 0.000200 (0.000259) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037058 / 0.037411 (-0.000353) | 0.114962 / 0.014526 (0.100436) | 0.123725 / 0.176557 (-0.052832) | 0.188885 / 0.737135 (-0.548251) | 0.125668 / 0.296338 (-0.170670) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492627 / 0.215209 (0.277418) | 4.900625 / 2.077655 (2.822970) | 2.546349 / 1.504120 (1.042229) | 2.360350 / 1.541195 (0.819155) | 2.477975 / 1.468490 (1.009485) | 0.574042 / 4.584777 (-4.010735) | 4.408414 / 3.745712 (0.662702) | 3.836640 / 5.269862 (-1.433222) | 2.438450 / 4.565676 (-2.127227) | 0.067706 / 0.424275 (-0.356569) | 0.009165 / 0.007607 (0.001558) | 0.580313 / 0.226044 (0.354269) | 5.798211 / 2.268929 (3.529283) | 3.098480 / 55.444624 (-52.346145) | 2.740180 / 6.876477 (-4.136296) | 2.984548 / 2.142072 (0.842476) | 0.702550 / 4.805227 (-4.102677) | 0.158248 / 6.500664 (-6.342416) | 0.073999 / 0.075469 (-0.001470) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.636034 / 1.841788 (-0.205754) | 24.068000 / 8.074308 (15.993692) | 17.123987 / 10.191392 (6.932595) | 0.210101 / 0.680424 (-0.470323) | 0.022555 / 0.534201 (-0.511646) | 0.509354 / 0.579283 (-0.069929) | 0.540739 / 0.434364 (0.106375) | 0.546048 / 0.540337 (0.005711) | 0.719155 / 1.386936 (-0.667781) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#40530382ba98f54445de8820943b1236d4a4704f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007342 / 0.011353 (-0.004010) | 0.004579 / 0.011008 (-0.006429) | 0.087050 / 0.038508 (0.048542) | 0.089001 / 0.023109 (0.065892) | 0.307319 / 0.275898 (0.031421) | 0.377573 / 0.323480 (0.054093) | 0.006472 / 0.007986 (-0.001514) | 0.004287 / 0.004328 (-0.000041) | 0.067226 / 0.004250 (0.062976) | 0.063147 / 0.037052 (0.026094) | 0.314541 / 0.258489 (0.056052) | 0.369919 / 0.293841 (0.076078) | 0.031283 / 0.128546 (-0.097263) | 0.009175 / 0.075646 (-0.066471) | 0.289211 / 0.419271 (-0.130061) | 0.053444 / 0.043533 (0.009911) | 0.307308 / 0.255139 (0.052169) | 0.346221 / 0.283200 (0.063021) | 0.027948 / 0.141683 (-0.113735) | 1.475177 / 1.452155 (0.023022) | 1.575971 / 1.492716 (0.083255) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291092 / 0.018006 (0.273086) | 0.696951 / 0.000490 (0.696461) | 0.005211 / 0.000200 (0.005011) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031787 / 0.037411 (-0.005625) | 0.084382 / 0.014526 (0.069857) | 0.106474 / 0.176557 (-0.070083) | 0.161472 / 0.737135 (-0.575663) | 0.108650 / 0.296338 (-0.187688) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.379656 / 0.215209 (0.164447) | 3.784072 / 2.077655 (1.706417) | 1.826580 / 1.504120 (0.322460) | 1.654916 / 1.541195 (0.113721) | 1.730698 / 1.468490 (0.262208) | 0.478003 / 4.584777 (-4.106774) | 3.564920 / 3.745712 (-0.180792) | 5.824873 / 5.269862 (0.555012) | 3.454563 / 4.565676 (-1.111113) | 0.056646 / 0.424275 (-0.367629) | 0.007410 / 0.007607 (-0.000197) | 0.461781 / 0.226044 (0.235737) | 4.600928 / 2.268929 (2.331999) | 2.351887 / 55.444624 (-53.092738) | 1.986470 / 6.876477 (-4.890007) | 2.311623 / 2.142072 (0.169551) | 0.571247 / 4.805227 (-4.233980) | 0.132191 / 6.500664 (-6.368473) | 0.059943 / 0.075469 (-0.015526) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253142 / 1.841788 (-0.588646) | 21.294983 / 8.074308 (13.220675) | 14.522429 / 10.191392 (4.331037) | 0.166663 / 0.680424 (-0.513761) | 0.019694 / 0.534201 (-0.514507) | 0.395908 / 0.579283 (-0.183375) | 0.413283 / 0.434364 (-0.021081) | 0.457739 / 0.540337 
(-0.082599) | 0.664361 / 1.386936 (-0.722575) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007228 / 0.011353 (-0.004124) | 0.004941 / 0.011008 (-0.006067) | 0.065381 / 0.038508 (0.026873) | 0.090790 / 0.023109 (0.067681) | 0.391315 / 0.275898 (0.115417) | 0.416518 / 0.323480 (0.093038) | 0.007015 / 0.007986 (-0.000970) | 0.004417 / 0.004328 (0.000089) | 0.067235 / 0.004250 (0.062985) | 0.068092 / 0.037052 (0.031039) | 0.403031 / 0.258489 (0.144542) | 0.434013 / 0.293841 (0.140172) | 0.032004 / 0.128546 (-0.096542) | 0.009242 / 0.075646 (-0.066404) | 0.071222 / 0.419271 (-0.348050) | 0.054207 / 0.043533 (0.010674) | 0.386198 / 0.255139 (0.131059) | 0.404350 / 0.283200 (0.121150) | 0.036284 / 0.141683 (-0.105399) | 1.488814 / 1.452155 (0.036660) | 1.587785 / 1.492716 (0.095069) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313760 / 0.018006 (0.295754) | 0.747778 / 0.000490 (0.747289) | 0.003307 / 0.000200 (0.003107) | 0.000113 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034321 / 0.037411 (-0.003090) | 0.088266 / 0.014526 (0.073740) | 0.112874 / 0.176557 (-0.063682) | 0.171554 / 0.737135 (-0.565581) | 0.111356 / 0.296338 (-0.184982) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422624 / 0.215209 (0.207415) | 4.212079 / 2.077655 (2.134425) | 2.242742 / 1.504120 (0.738622) | 2.072555 / 1.541195 (0.531360) | 2.192648 / 
1.468490 (0.724158) | 0.488214 / 4.584777 (-4.096563) | 3.597013 / 3.745712 (-0.148699) | 3.477556 / 5.269862 (-1.792305) | 2.184340 / 4.565676 (-2.381337) | 0.057170 / 0.424275 (-0.367105) | 0.007772 / 0.007607 (0.000165) | 0.499455 / 0.226044 (0.273411) | 4.988953 / 2.268929 (2.720024) | 2.797894 / 55.444624 (-52.646731) | 2.402215 / 6.876477 (-4.474262) | 2.725069 / 2.142072 (0.582997) | 0.596213 / 4.805227 (-4.209014) | 0.136564 / 6.500664 (-6.364100) | 0.061799 / 0.075469 (-0.013670) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.360739 / 1.841788 (-0.481049) | 21.846457 / 8.074308 (13.772149) | 14.568842 / 10.191392 (4.377450) | 0.168980 / 0.680424 (-0.511444) | 0.018795 / 0.534201 (-0.515406) | 0.396173 / 0.579283 (-0.183110) | 0.418651 / 0.434364 (-0.015713) | 0.480042 / 0.540337 (-0.060295) | 0.650803 / 1.386936 (-0.736133) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b7d460304487d4daab0a64ca0ca707e896367ca1 \"CML watermark\")\n" ]
2023-07-28T09:50:12
2023-07-28T10:59:28
2023-07-28T10:50:10
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6092", "html_url": "https://github.com/huggingface/datasets/pull/6092", "diff_url": "https://github.com/huggingface/datasets/pull/6092.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6092.patch", "merged_at": "2023-07-28T10:50:09" }
Fix #6090
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6092/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6092/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6091
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6091/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6091/comments
https://api.github.com/repos/huggingface/datasets/issues/6091/events
https://github.com/huggingface/datasets/pull/6091
1,826,086,487
PR_kwDODunzps5Wov9Q
6,091
Bump fsspec from 2021.11.1 to 2022.3.0
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006640 / 0.011353 (-0.004713) | 0.004077 / 0.011008 (-0.006931) | 0.084905 / 0.038508 (0.046397) | 0.074004 / 0.023109 (0.050895) | 0.315968 / 0.275898 (0.040070) | 0.351594 / 0.323480 (0.028114) | 0.005623 / 0.007986 (-0.002362) | 0.003476 / 0.004328 (-0.000852) | 0.065089 / 0.004250 (0.060839) | 0.054683 / 0.037052 (0.017631) | 0.314983 / 0.258489 (0.056494) | 0.371776 / 0.293841 (0.077935) | 0.031727 / 0.128546 (-0.096819) | 0.008786 / 0.075646 (-0.066860) | 0.289905 / 0.419271 (-0.129367) | 0.053340 / 0.043533 (0.009807) | 0.311802 / 0.255139 (0.056663) | 0.351927 / 0.283200 (0.068727) | 0.024453 / 0.141683 (-0.117229) | 1.491727 / 1.452155 (0.039572) | 1.585027 / 1.492716 (0.092310) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238757 / 0.018006 (0.220750) | 0.557691 / 0.000490 (0.557202) | 0.005158 / 0.000200 (0.004958) | 0.000204 / 0.000054 (0.000149) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028435 / 0.037411 (-0.008977) | 0.082219 / 0.014526 (0.067693) | 0.096932 / 0.176557 (-0.079625) | 0.153802 / 0.737135 (-0.583333) | 0.098338 / 0.296338 (-0.198001) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383448 / 0.215209 (0.168238) | 3.816074 / 2.077655 (1.738420) | 
1.835111 / 1.504120 (0.330991) | 1.662326 / 1.541195 (0.121131) | 1.720202 / 1.468490 (0.251712) | 0.483107 / 4.584777 (-4.101669) | 3.648528 / 3.745712 (-0.097184) | 4.020929 / 5.269862 (-1.248932) | 2.433141 / 4.565676 (-2.132536) | 0.057081 / 0.424275 (-0.367194) | 0.007303 / 0.007607 (-0.000304) | 0.461366 / 0.226044 (0.235322) | 4.609090 / 2.268929 (2.340162) | 2.355940 / 55.444624 (-53.088684) | 1.989833 / 6.876477 (-4.886644) | 2.201451 / 2.142072 (0.059378) | 0.586156 / 4.805227 (-4.219071) | 0.133486 / 6.500664 (-6.367178) | 0.060062 / 0.075469 (-0.015407) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247845 / 1.841788 (-0.593942) | 19.624252 / 8.074308 (11.549944) | 14.305975 / 10.191392 (4.114583) | 0.168687 / 0.680424 (-0.511737) | 0.018075 / 0.534201 (-0.516126) | 0.393859 / 0.579283 (-0.185424) | 0.407272 / 0.434364 (-0.027092) | 0.463760 / 0.540337 (-0.076578) | 0.629930 / 1.386936 (-0.757006) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006760 / 0.011353 (-0.004593) | 0.004345 / 0.011008 (-0.006663) | 0.064379 / 0.038508 (0.025871) | 0.078295 / 0.023109 (0.055186) | 0.364532 / 0.275898 (0.088633) | 0.395852 / 0.323480 (0.072372) | 0.005659 / 0.007986 (-0.002327) | 0.003515 / 0.004328 (-0.000813) | 0.065030 / 0.004250 (0.060780) | 0.059950 / 0.037052 (0.022898) | 0.375420 / 0.258489 (0.116931) | 0.411579 / 0.293841 (0.117738) | 0.031575 / 0.128546 (-0.096972) | 0.008737 / 0.075646 (-0.066910) | 0.070350 / 0.419271 (-0.348922) | 0.050607 / 0.043533 (0.007075) | 0.359785 / 0.255139 (0.104646) | 0.382638 / 0.283200 (0.099438) | 0.025533 / 0.141683 (-0.116150) | 1.564379 / 1.452155 (0.112225) | 1.620642 / 1.492716 (0.127925) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212779 / 0.018006 (0.194773) | 0.563827 / 0.000490 (0.563337) | 0.003767 / 0.000200 (0.003567) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030275 / 0.037411 (-0.007136) | 0.088108 / 0.014526 (0.073582) | 0.102454 / 0.176557 (-0.074103) | 0.156107 / 0.737135 (-0.581028) | 0.103961 / 0.296338 (-0.192378) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421395 / 0.215209 (0.206186) | 4.204935 / 2.077655 (2.127280) | 2.144929 / 1.504120 (0.640809) | 1.999341 / 1.541195 (0.458147) | 2.066966 / 1.468490 (0.598476) | 0.486135 / 4.584777 (-4.098642) | 3.628139 / 3.745712 (-0.117573) | 5.652683 / 5.269862 (0.382821) | 3.216721 / 4.565676 (-1.348956) | 0.057513 / 0.424275 (-0.366762) | 0.007553 / 0.007607 (-0.000055) | 0.494470 / 0.226044 (0.268426) | 4.949343 / 2.268929 (2.680414) | 2.654222 / 55.444624 (-52.790402) | 2.322257 / 6.876477 (-4.554220) | 2.555633 / 2.142072 (0.413561) | 0.588355 / 4.805227 (-4.216872) | 0.134481 / 6.500664 (-6.366183) | 0.062415 / 0.075469 (-0.013054) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.377578 / 1.841788 (-0.464209) | 19.805201 / 8.074308 (11.730893) | 14.128536 / 10.191392 (3.937144) | 0.164343 / 0.680424 (-0.516081) | 0.018553 / 0.534201 (-0.515648) | 0.398191 / 0.579283 (-0.181093) | 0.414268 / 0.434364 (-0.020096) | 0.462270 / 0.540337 (-0.078068) | 0.608497 / 1.386936 (-0.778439) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3af05ba487f361fae90a4c80af72de5c4ed70162 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006966 / 0.011353 (-0.004387) | 0.004339 / 0.011008 (-0.006669) | 0.086682 / 0.038508 (0.048174) | 0.086143 / 0.023109 (0.063034) | 0.316106 / 0.275898 (0.040208) | 0.351422 / 0.323480 (0.027942) | 0.005916 / 0.007986 (-0.002069) | 0.003630 / 0.004328 (-0.000698) | 0.066980 / 0.004250 (0.062730) | 0.060031 / 0.037052 (0.022979) | 0.317487 / 0.258489 (0.058998) | 0.356280 / 0.293841 (0.062439) | 0.031816 / 0.128546 (-0.096730) | 0.008797 / 0.075646 (-0.066849) | 0.289848 / 0.419271 (-0.129424) | 0.055431 / 0.043533 (0.011898) | 0.318881 / 0.255139 (0.063742) | 0.332315 / 0.283200 (0.049116) | 0.025946 / 0.141683 (-0.115737) | 1.472904 / 1.452155 (0.020749) | 1.577973 / 1.492716 (0.085257) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239056 / 0.018006 (0.221050) | 0.565406 / 0.000490 (0.564917) | 0.003606 / 0.000200 (0.003406) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029771 / 0.037411 (-0.007640) | 0.085534 / 0.014526 (0.071008) | 0.107008 / 0.176557 (-0.069548) | 0.631583 / 0.737135 (-0.105552) | 0.104210 / 0.296338 (-0.192128) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390675 / 0.215209 (0.175466) | 3.898746 / 2.077655 (1.821091) | 1.933048 / 1.504120 (0.428928) | 1.792162 / 1.541195 (0.250967) | 1.958045 / 1.468490 (0.489555) | 0.488632 / 4.584777 (-4.096144) | 3.696306 / 3.745712 (-0.049406) | 3.454600 / 5.269862 (-1.815262) | 2.176292 / 4.565676 (-2.389385) | 0.057617 / 0.424275 (-0.366658) | 0.007603 / 0.007607 (-0.000004) | 0.467843 / 0.226044 (0.241798) | 4.672928 / 2.268929 (2.404000) | 2.441096 / 55.444624 (-53.003529) | 2.133506 / 6.876477 (-4.742970) | 2.431167 / 2.142072 (0.289095) | 0.588567 / 4.805227 (-4.216661) | 0.136070 / 6.500664 (-6.364594) | 0.063395 / 0.075469 (-0.012074) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255003 / 1.841788 (-0.586784) | 20.587656 / 8.074308 (12.513348) | 15.147817 / 10.191392 (4.956425) | 0.152039 / 0.680424 (-0.528384) | 0.018815 / 0.534201 (-0.515386) | 0.397458 / 0.579283 (-0.181825) | 0.431433 / 0.434364 (-0.002931) | 0.487890 / 0.540337 
(-0.052448) | 0.675367 / 1.386936 (-0.711569) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007209 / 0.011353 (-0.004144) | 0.004372 / 0.011008 (-0.006636) | 0.066288 / 0.038508 (0.027780) | 0.091776 / 0.023109 (0.068667) | 0.390724 / 0.275898 (0.114826) | 0.434711 / 0.323480 (0.111231) | 0.005790 / 0.007986 (-0.002196) | 0.003562 / 0.004328 (-0.000767) | 0.066155 / 0.004250 (0.061904) | 0.062459 / 0.037052 (0.025406) | 0.406622 / 0.258489 (0.148133) | 0.433976 / 0.293841 (0.140135) | 0.032590 / 0.128546 (-0.095957) | 0.008856 / 0.075646 (-0.066790) | 0.072327 / 0.419271 (-0.346945) | 0.049958 / 0.043533 (0.006426) | 0.400164 / 0.255139 (0.145025) | 0.413339 / 0.283200 (0.130139) | 0.025283 / 0.141683 (-0.116399) | 1.487668 / 1.452155 (0.035514) | 1.537679 / 1.492716 (0.044962) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257814 / 0.018006 (0.239808) | 0.571741 / 0.000490 (0.571251) | 0.000412 / 0.000200 (0.000212) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033893 / 0.037411 (-0.003518) | 0.094533 / 0.014526 (0.080008) | 0.105876 / 0.176557 (-0.070680) | 0.158675 / 0.737135 (-0.578460) | 0.107790 / 0.296338 (-0.188548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425796 / 0.215209 (0.210587) | 4.229159 / 2.077655 (2.151505) | 2.239613 / 1.504120 (0.735493) | 2.073830 / 1.541195 (0.532635) | 2.185508 
/ 1.468490 (0.717018) | 0.483984 / 4.584777 (-4.100793) | 3.645575 / 3.745712 (-0.100137) | 3.454767 / 5.269862 (-1.815095) | 2.141387 / 4.565676 (-2.424290) | 0.057570 / 0.424275 (-0.366705) | 0.007901 / 0.007607 (0.000294) | 0.501160 / 0.226044 (0.275116) | 5.012283 / 2.268929 (2.743355) | 2.701267 / 55.444624 (-52.743357) | 2.465409 / 6.876477 (-4.411068) | 2.696812 / 2.142072 (0.554739) | 0.587160 / 4.805227 (-4.218067) | 0.134175 / 6.500664 (-6.366489) | 0.062028 / 0.075469 (-0.013441) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345632 / 1.841788 (-0.496155) | 21.077279 / 8.074308 (13.002971) | 14.700826 / 10.191392 (4.509434) | 0.156191 / 0.680424 (-0.524233) | 0.018991 / 0.534201 (-0.515210) | 0.400413 / 0.579283 (-0.178870) | 0.420597 / 0.434364 (-0.013767) | 0.486534 / 0.540337 (-0.053804) | 0.646606 / 1.386936 (-0.740330) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5bb8fabb135ca8adf47151ad3de050e3a258ccab \"CML watermark\")\n" ]
2023-07-28T09:37:15
2023-07-28T10:16:11
2023-07-28T10:07:02
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6091", "html_url": "https://github.com/huggingface/datasets/pull/6091", "diff_url": "https://github.com/huggingface/datasets/pull/6091.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6091.patch", "merged_at": "2023-07-28T10:07:02" }
Fix https://github.com/huggingface/datasets/issues/6087 (Colab installs 2023.6.0, so we should be good)
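For reference, a sketch of what the resulting pin might look like in `setup.py` — illustrative only, not the literal diff of this PR; the variable name and extras are assumptions:

```python
# Illustrative only; see the PR diff for the exact requirement strings.
REQUIRED_PKGS = [
    "fsspec[http]>=2022.3.0",  # raised so fsspec.callbacks.TqdmCallback exists
]
```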
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6091/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6090
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6090/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6090/comments
https://api.github.com/repos/huggingface/datasets/issues/6090/events
https://github.com/huggingface/datasets/issues/6090
1,825,865,043
I_kwDODunzps5s1H1T
6,090
FilesIterable skips all the files after a hidden file
{ "login": "dkrivosic", "id": 10785413, "node_id": "MDQ6VXNlcjEwNzg1NDEz", "avatar_url": "https://avatars.githubusercontent.com/u/10785413?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dkrivosic", "html_url": "https://github.com/dkrivosic", "followers_url": "https://api.github.com/users/dkrivosic/followers", "following_url": "https://api.github.com/users/dkrivosic/following{/other_user}", "gists_url": "https://api.github.com/users/dkrivosic/gists{/gist_id}", "starred_url": "https://api.github.com/users/dkrivosic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dkrivosic/subscriptions", "organizations_url": "https://api.github.com/users/dkrivosic/orgs", "repos_url": "https://api.github.com/users/dkrivosic/repos", "events_url": "https://api.github.com/users/dkrivosic/events{/privacy}", "received_events_url": "https://api.github.com/users/dkrivosic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting. We've merged a PR with a fix." ]
2023-07-28T07:25:57
2023-07-28T10:51:14
2023-07-28T10:50:11
NONE
null
null
null
### Describe the bug When `FilesIterable` is initialized with a list of file paths via `FilesIterable.from_paths`, it discards all the files that come after a hidden file. The problem is in [this line](https://github.com/huggingface/datasets/blob/88896a7b28610ace95e444b94f9a4bc332cc1ee3/src/datasets/download/download_manager.py#L233C26-L233C26), where `return` should be replaced by `continue`. ### Steps to reproduce the bug https://colab.research.google.com/drive/1SQlxs4y_LSo1Q89KnFoYDSyyKEISun_J#scrollTo=93K4_blkW-8- ### Expected behavior The script should print all the files except the hidden one. ### Environment info - `datasets` version: 2.14.1 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.16.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
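For illustration, a minimal standalone sketch of the skipping logic described above — not the actual `datasets` source; `iter_paths` is a hypothetical reduction that shows why `return` drops every path after the first hidden file while `continue` skips only that file:

```python
from typing import Iterable, Iterator

def iter_paths(paths: Iterable[str]) -> Iterator[str]:
    # Hypothetical reduction of FilesIterable's loop, for illustration only.
    for path in paths:
        if path.rsplit("/", 1)[-1].startswith("."):
            # The buggy version used `return` here, which ends the whole
            # generator and silently drops every path after the hidden file.
            continue  # the fix: skip only the hidden file and keep iterating
        yield path

# With the fix, only the hidden file is skipped:
assert list(iter_paths(["a.txt", ".hidden", "b.txt"])) == ["a.txt", "b.txt"]
```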
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6090/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6089/comments
https://api.github.com/repos/huggingface/datasets/issues/6089/events
https://github.com/huggingface/datasets/issues/6089
1,825,761,476
I_kwDODunzps5s0ujE
6,089
AssertionError: daemonic processes are not allowed to have children
{ "login": "codingl2k1", "id": 138426806, "node_id": "U_kgDOCEA5tg", "avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codingl2k1", "html_url": "https://github.com/codingl2k1", "followers_url": "https://api.github.com/users/codingl2k1/followers", "following_url": "https://api.github.com/users/codingl2k1/following{/other_user}", "gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}", "starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions", "organizations_url": "https://api.github.com/users/codingl2k1/orgs", "repos_url": "https://api.github.com/users/codingl2k1/repos", "events_url": "https://api.github.com/users/codingl2k1/events{/privacy}", "received_events_url": "https://api.github.com/users/codingl2k1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "We could add a \"threads\" parallel backend to `datasets.parallel.parallel_backend` to support downloading with threads but note that `download_and_extract` also decompresses archives, and this is a CPU-intensive task, which is not ideal for (Python) threads (good for IO-intensive tasks).", "> We could add a \"threads\" parallel backend to `datasets.parallel.parallel_backend` to support downloading with threads but note that `download_and_extract` also decompresses archives, and this is a CPU-intensive task, which is not ideal for (Python) threads (good for IO-intensive tasks).\r\n\r\nGreat! Download takes more time than extract, multiple threads can download in parallel, which can speed up a lot." ]
2023-07-28T06:04:00
2023-07-31T02:34:02
null
NONE
null
null
null
### Describe the bug When I call load_dataset with num_proc > 0 in a daemon process, I get an error: ```python File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 564, in download_and_extract return self.extract(self.download(url_or_urls)) ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 427, in download downloaded_path_or_paths = map_nested( ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 468, in map_nested mapped = parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, _single_map_nested) ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/utils/experimental.py", line 40, in _inner_fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 34, in parallel_map return _map_with_multiprocessing_pool( ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 64, in _map_with_multiprocessing_pool with Pool(num_proc, initargs=initargs, initializer=initializer) as pool: ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/context.py", line 119, in Pool return Pool(processes, initializer, initargs, maxtasksperchild, ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 215, in __init__ self._repopulate_pool() ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 306, in _repopulate_pool return self._repopulate_pool_static(self._ctx, self.Process, ^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 329, in _repopulate_pool_static w.start() File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/process.py", line 118, in start assert not _current_process._config.get('daemon'), ^^^^^^^^^^^^^^^^^ AssertionError: daemonic processes are not allowed to have children ``` The download is I/O-intensive work, so maybe datasets could replace the multiprocessing pool with a multithreading pool when running in a daemon process. ### Steps to reproduce the bug 1. Start a daemon process 2. Run load_dataset with num_proc > 0 ### Expected behavior No error. ### Environment info Python 3.11.4, datasets latest master
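A minimal repro sketch, independent of `datasets`, showing where the assertion comes from — any attempt to spawn worker processes from inside a daemon process trips the same check in `multiprocessing`:

```python
import multiprocessing as mp

def worker() -> None:
    try:
        # load_dataset(..., num_proc=2) ends up in the same place: creating a
        # process pool from inside a daemonic process raises the assertion.
        with mp.Pool(2) as pool:
            pool.map(abs, [-1, -2])
    except AssertionError as err:
        print("Reproduced:", err)  # "daemonic processes are not allowed to have children"

if __name__ == "__main__":
    proc = mp.Process(target=worker, daemon=True)
    proc.start()
    proc.join()
```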
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6089/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6088
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6088/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6088/comments
https://api.github.com/repos/huggingface/datasets/issues/6088/events
https://github.com/huggingface/datasets/issues/6088
1,825,665,235
I_kwDODunzps5s0XDT
6,088
Loading local data files initiates web requests
{ "login": "lytning98", "id": 23375707, "node_id": "MDQ6VXNlcjIzMzc1NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/23375707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lytning98", "html_url": "https://github.com/lytning98", "followers_url": "https://api.github.com/users/lytning98/followers", "following_url": "https://api.github.com/users/lytning98/following{/other_user}", "gists_url": "https://api.github.com/users/lytning98/gists{/gist_id}", "starred_url": "https://api.github.com/users/lytning98/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lytning98/subscriptions", "organizations_url": "https://api.github.com/users/lytning98/orgs", "repos_url": "https://api.github.com/users/lytning98/repos", "events_url": "https://api.github.com/users/lytning98/events{/privacy}", "received_events_url": "https://api.github.com/users/lytning98/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2023-07-28T04:06:26
2023-07-28T05:02:22
2023-07-28T05:02:22
NONE
null
null
null
As documented in the [official docs](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/loading_methods#datasets.load_dataset.example-2), I tried to load datasets from local files by ```python # Load a JSON file from datasets import load_dataset ds = load_dataset('json', data_files='path/to/local/my_dataset.json') ``` But this failed on a web request because I'm executing the script on a machine without Internet access. The stacktrace shows ``` in PackagedDatasetModuleFactory.__init__(self, name, data_dir, data_files, download_config, download_mode) 940 self.download_config = download_config 941 self.download_mode = download_mode --> 942 increase_load_count(name, resource_type="dataset") ``` I've read in the source code that this can be fixed by setting an environment variable to run in offline mode. I'm just wondering: is it expected behaviour that even loading a LOCAL JSON file requires Internet access by default? And what's the point of requesting to `increase_load_count` on some server when loading just LOCAL data files?
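A sketch of the offline-mode workaround alluded to above — `HF_DATASETS_OFFLINE` is the environment variable `datasets` documents for this, and it has to be set before the library is imported; the data file path is the issue's placeholder:

```python
import os

# Must be set before importing datasets, since the flag is read at import time.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("json", data_files="path/to/local/my_dataset.json")
```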
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6088/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6088/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6087
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6087/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6087/comments
https://api.github.com/repos/huggingface/datasets/issues/6087/events
https://github.com/huggingface/datasets/issues/6087
1,825,133,741
I_kwDODunzps5syVSt
6,087
fsspec dependency is set too low
{ "login": "iXce", "id": 1085885, "node_id": "MDQ6VXNlcjEwODU4ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/1085885?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iXce", "html_url": "https://github.com/iXce", "followers_url": "https://api.github.com/users/iXce/followers", "following_url": "https://api.github.com/users/iXce/following{/other_user}", "gists_url": "https://api.github.com/users/iXce/gists{/gist_id}", "starred_url": "https://api.github.com/users/iXce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iXce/subscriptions", "organizations_url": "https://api.github.com/users/iXce/orgs", "repos_url": "https://api.github.com/users/iXce/repos", "events_url": "https://api.github.com/users/iXce/events{/privacy}", "received_events_url": "https://api.github.com/users/iXce/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for reporting! A PR with a fix has just been merged." ]
2023-07-27T20:08:22
2023-07-28T10:07:56
2023-07-28T10:07:03
NONE
null
null
null
### Describe the bug fsspec.callbacks.TqdmCallback (used in https://github.com/huggingface/datasets/blob/73bed12ecda17d1573fd3bf73ed5db24d3622f86/src/datasets/utils/file_utils.py#L338) was first released in fsspec [2022.3.0](https://github.com/fsspec/filesystem_spec/releases/tag/2022.3.0) (commit where it was added: https://github.com/fsspec/filesystem_spec/commit/9577c8a482eb0a69092913b81580942a68d66a76#diff-906155c7e926a9ff58b9f23369bb513b09b445f5b0f41fa2a84015d0b471c68cR180); however, the dependency is set to 2021.11.1 in https://github.com/huggingface/datasets/blob/main/setup.py#L129 ### Steps to reproduce the bug 1. Install fsspec==2021.11.1 2. Install latest datasets==2.14.1 3. Import datasets; the import fails due to the lack of `fsspec.callbacks.TqdmCallback` ### Expected behavior No import issue ### Environment info N/A
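A quick check one can run to confirm whether the installed fsspec is new enough for this usage — `TqdmCallback` only exists from fsspec 2022.3.0 onward:

```python
import fsspec

try:
    from fsspec.callbacks import TqdmCallback  # noqa: F401
    print(f"fsspec {fsspec.__version__} is new enough")
except ImportError:
    print(f"fsspec {fsspec.__version__} is too old; install fsspec>=2022.3.0")
```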
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6087/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6087/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6086
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6086/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6086/comments
https://api.github.com/repos/huggingface/datasets/issues/6086/events
https://github.com/huggingface/datasets/issues/6086
1,825,009,268
I_kwDODunzps5sx250
6,086
Support `fsspec` in `Dataset.to_<format>` methods
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @mariosasko unless someone's already working on it, I guess I can tackle it!", "Hi! Sure, feel free to tackle this.", "#self-assign", "I'm assuming this should just cover `to_csv`, `to_parquet`, and `to_json`, right? As `to_list` and `to_dict` just return Python objects, `to_pandas` returns a `pandas.DataFrame` and `to_sql` just inserts into a SQL DB, is that right?" ]
2023-07-27T19:08:37
2023-07-28T15:28:26
null
CONTRIBUTOR
null
null
null
Supporting this should be fairly easy. Requested on the forum [here](https://discuss.huggingface.co/t/how-can-i-convert-a-loaded-dataset-in-to-a-parquet-file-and-save-it-to-the-s3/48353).
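Since this is a feature request, the call below is hypothetical — a sketch of the desired API once fsspec URLs are accepted; the bucket name and `storage_options` values are placeholders, not a documented signature:

```python
from datasets import load_dataset

ds = load_dataset("json", data_files="my_dataset.json", split="train")

# Hypothetical once fsspec paths are supported; at the time of the request,
# to_parquet expects a local path or file object.
ds.to_parquet(
    "s3://my-bucket/my_dataset.parquet",  # fsspec URL (placeholder bucket)
    storage_options={"anon": False},      # would be forwarded to s3fs
)
```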
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6086/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6086/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6085/comments
https://api.github.com/repos/huggingface/datasets/issues/6085/events
https://github.com/huggingface/datasets/pull/6085
1,824,985,188
PR_kwDODunzps5WlAyA
6,085
Fix `fsspec` download
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006031 / 0.011353 (-0.005322) | 0.003579 / 0.011008 (-0.007429) | 0.080862 / 0.038508 (0.042354) | 0.056660 / 0.023109 (0.033551) | 0.388285 / 0.275898 (0.112387) | 0.422270 / 0.323480 (0.098790) | 0.004651 / 0.007986 (-0.003335) | 0.002895 / 0.004328 (-0.001433) | 0.062767 / 0.004250 (0.058517) | 0.046491 / 0.037052 (0.009438) | 0.389918 / 0.258489 (0.131428) | 0.434650 / 0.293841 (0.140809) | 0.027265 / 0.128546 (-0.101281) | 0.007946 / 0.075646 (-0.067701) | 0.261207 / 0.419271 (-0.158065) | 0.045057 / 0.043533 (0.001525) | 0.391977 / 0.255139 (0.136838) | 0.418525 / 0.283200 (0.135326) | 0.020705 / 0.141683 (-0.120978) | 1.459271 / 1.452155 (0.007116) | 1.516935 / 1.492716 (0.024218) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.174659 / 0.018006 (0.156653) | 0.429627 / 0.000490 (0.429137) | 0.003714 / 0.000200 (0.003514) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023255 / 0.037411 (-0.014156) | 0.073463 / 0.014526 (0.058937) | 0.083000 / 0.176557 (-0.093557) | 0.146704 / 0.737135 (-0.590431) | 0.084419 / 0.296338 (-0.211919) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392222 / 0.215209 (0.177013) | 3.902620 / 2.077655 (1.824966) | 1.903056 / 1.504120 (0.398936) | 1.753423 / 1.541195 (0.212228) | 1.874547 / 1.468490 
(0.406057) | 0.495947 / 4.584777 (-4.088829) | 3.084680 / 3.745712 (-0.661032) | 4.235064 / 5.269862 (-1.034797) | 2.626840 / 4.565676 (-1.938837) | 0.057273 / 0.424275 (-0.367002) | 0.006457 / 0.007607 (-0.001150) | 0.466018 / 0.226044 (0.239974) | 4.648264 / 2.268929 (2.379335) | 2.520293 / 55.444624 (-52.924331) | 2.339393 / 6.876477 (-4.537083) | 2.538848 / 2.142072 (0.396775) | 0.592018 / 4.805227 (-4.213210) | 0.125041 / 6.500664 (-6.375623) | 0.061038 / 0.075469 (-0.014431) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244285 / 1.841788 (-0.597503) | 18.411576 / 8.074308 (10.337268) | 13.850100 / 10.191392 (3.658708) | 0.131904 / 0.680424 (-0.548520) | 0.016824 / 0.534201 (-0.517377) | 0.328931 / 0.579283 (-0.250352) | 0.364801 / 0.434364 (-0.069563) | 0.376298 / 0.540337 (-0.164039) | 0.525045 / 1.386936 (-0.861891) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006059 / 0.011353 (-0.005294) | 0.003693 / 0.011008 (-0.007315) | 0.062982 / 0.038508 (0.024473) | 0.062155 / 0.023109 (0.039046) | 0.389467 / 0.275898 (0.113568) | 0.437046 / 0.323480 (0.113566) | 0.004823 / 0.007986 (-0.003163) | 0.002935 / 0.004328 (-0.001393) | 0.062679 / 0.004250 (0.058429) | 0.049676 / 0.037052 (0.012623) | 0.418054 / 0.258489 (0.159565) | 0.442467 / 0.293841 (0.148626) | 0.027652 / 0.128546 (-0.100895) | 0.008146 / 0.075646 (-0.067501) | 0.069414 / 0.419271 (-0.349858) | 0.042884 / 0.043533 (-0.000649) | 0.387167 / 0.255139 (0.132028) | 0.418684 / 0.283200 (0.135484) | 0.022419 / 0.141683 (-0.119264) | 1.460606 / 1.452155 (0.008451) | 1.514204 / 1.492716 (0.021487) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200523 / 0.018006 (0.182517) | 0.415970 / 0.000490 (0.415481) | 0.003202 / 0.000200 (0.003002) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025836 / 0.037411 (-0.011575) | 0.078859 / 0.014526 (0.064333) | 0.088523 / 0.176557 (-0.088034) | 0.141572 / 0.737135 (-0.595563) | 0.090258 / 0.296338 (-0.206080) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416548 / 0.215209 (0.201339) | 4.155278 / 2.077655 (2.077623) | 2.126683 / 1.504120 (0.622563) | 1.963762 / 1.541195 (0.422568) | 2.029018 / 1.468490 (0.560528) | 0.499005 / 4.584777 (-4.085772) | 3.063503 / 3.745712 (-0.682209) | 4.250800 / 5.269862 (-1.019061) | 2.642634 / 4.565676 (-1.923043) | 0.057815 / 0.424275 (-0.366460) | 0.006784 / 0.007607 (-0.000823) | 0.492481 / 0.226044 (0.266437) | 4.914306 / 2.268929 (2.645377) | 2.601582 / 55.444624 (-52.843042) | 2.337863 / 6.876477 (-4.538614) | 2.462854 / 2.142072 (0.320782) | 0.593738 / 4.805227 (-4.211489) | 0.127030 / 6.500664 (-6.373634) | 0.064206 / 0.075469 (-0.011263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.326919 / 1.841788 (-0.514868) | 18.728929 / 8.074308 (10.654621) | 13.903681 / 10.191392 (3.712289) | 0.162670 / 0.680424 (-0.517754) | 0.016913 / 0.534201 (-0.517288) | 0.337504 / 0.579283 (-0.241779) | 0.339786 / 0.434364 (-0.094577) | 0.384955 / 0.540337 (-0.155383) | 0.514358 / 1.386936 (-0.872578) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c5c31b492c45e01c6f4593ada2b84517a75a5c7c \"CML watermark\")\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6085). 
All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007610 / 0.011353 (-0.003743) | 0.004616 / 0.011008 (-0.006392) | 0.100330 / 0.038508 (0.061821) | 0.084450 / 0.023109 (0.061341) | 0.386610 / 0.275898 (0.110712) | 0.418479 / 0.323480 (0.094999) | 0.006085 / 0.007986 (-0.001900) | 0.003800 / 0.004328 (-0.000529) | 0.076248 / 0.004250 (0.071997) | 0.065175 / 0.037052 (0.028122) | 0.387154 / 0.258489 (0.128665) | 0.425484 / 0.293841 (0.131643) | 0.035946 / 0.128546 (-0.092601) | 0.009901 / 0.075646 (-0.065745) | 0.343015 / 0.419271 (-0.076256) | 0.060965 / 0.043533 (0.017432) | 0.390585 / 0.255139 (0.135446) | 0.405873 / 0.283200 (0.122673) | 0.026929 / 0.141683 (-0.114754) | 1.767916 / 1.452155 (0.315761) | 1.893431 / 1.492716 (0.400715) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237888 / 0.018006 (0.219882) | 0.503949 / 0.000490 (0.503459) | 0.004769 / 0.000200 (0.004570) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031553 / 0.037411 (-0.005859) | 0.096950 / 0.014526 (0.082424) | 0.110374 / 0.176557 (-0.066183) | 0.176754 / 0.737135 (-0.560381) | 0.111703 / 0.296338 (-0.184635) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449232 / 0.215209 (0.234023) | 4.510247 / 2.077655 (2.432592) | 2.188547 / 
1.504120 (0.684427) | 2.007530 / 1.541195 (0.466335) | 2.095650 / 1.468490 (0.627160) | 0.563262 / 4.584777 (-4.021515) | 4.062412 / 3.745712 (0.316700) | 6.338350 / 5.269862 (1.068489) | 3.844669 / 4.565676 (-0.721008) | 0.064517 / 0.424275 (-0.359758) | 0.008536 / 0.007607 (0.000929) | 0.553872 / 0.226044 (0.327828) | 5.530311 / 2.268929 (3.261383) | 2.835109 / 55.444624 (-52.609516) | 2.493900 / 6.876477 (-4.382577) | 2.728412 / 2.142072 (0.586340) | 0.680161 / 4.805227 (-4.125066) | 0.155831 / 6.500664 (-6.344833) | 0.070359 / 0.075469 (-0.005110) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.504852 / 1.841788 (-0.336936) | 22.806335 / 8.074308 (14.732027) | 16.598386 / 10.191392 (6.406994) | 0.207857 / 0.680424 (-0.472566) | 0.021425 / 0.534201 (-0.512776) | 0.474069 / 0.579283 (-0.105214) | 0.472263 / 0.434364 (0.037899) | 0.542195 / 0.540337 (0.001858) | 0.782871 / 1.386936 (-0.604065) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007443 / 0.011353 (-0.003910) | 0.004465 / 0.011008 (-0.006544) | 0.076268 / 0.038508 (0.037759) | 0.086607 / 0.023109 (0.063498) | 0.443295 / 0.275898 (0.167397) | 0.472819 / 0.323480 (0.149339) | 0.005841 / 0.007986 (-0.002144) | 0.003727 / 0.004328 (-0.000602) | 0.076015 / 0.004250 (0.071765) | 0.063188 / 0.037052 (0.026136) | 0.450555 / 0.258489 (0.192066) | 0.478532 / 0.293841 (0.184691) | 0.036258 / 0.128546 (-0.092288) | 0.009869 / 0.075646 (-0.065777) | 0.083786 / 0.419271 (-0.335486) | 0.056546 / 0.043533 (0.013013) | 0.449647 / 0.255139 (0.194508) | 0.457588 / 0.283200 (0.174389) | 0.027197 / 0.141683 (-0.114486) | 1.769991 / 1.452155 (0.317836) | 1.859905 / 1.492716 (0.367189) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268637 / 0.018006 (0.250631) | 0.492860 / 0.000490 (0.492370) | 0.008574 / 0.000200 (0.008374) | 0.000140 / 0.000054 (0.000085) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037679 / 0.037411 (0.000268) | 0.108258 / 0.014526 (0.093733) | 0.117850 / 0.176557 (-0.058707) | 0.181611 / 0.737135 (-0.555524) | 0.120901 / 0.296338 (-0.175437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.485780 / 0.215209 (0.270571) | 4.851289 / 2.077655 (2.773635) | 2.486068 / 1.504120 (0.981948) | 2.299417 / 1.541195 (0.758222) | 2.387093 / 1.468490 (0.918603) | 0.568826 / 4.584777 (-4.015951) | 4.163426 / 3.745712 (0.417713) | 6.224964 / 5.269862 (0.955102) | 3.255619 / 4.565676 (-1.310058) | 0.067081 / 0.424275 (-0.357194) | 0.009065 / 0.007607 (0.001458) | 0.580449 / 0.226044 (0.354405) | 5.786394 / 2.268929 (3.517465) | 3.057780 / 55.444624 (-52.386844) | 2.764339 / 6.876477 (-4.112138) | 2.880718 / 2.142072 (0.738645) | 0.681376 / 4.805227 (-4.123851) | 0.157858 / 6.500664 (-6.342806) | 0.072481 / 0.075469 (-0.002988) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.590704 / 1.841788 (-0.251083) | 23.141929 / 8.074308 (15.067620) | 17.001141 / 10.191392 (6.809749) | 0.203790 / 0.680424 (-0.476634) | 0.021766 / 0.534201 (-0.512435) | 0.475309 / 0.579283 (-0.103974) | 0.466448 / 0.434364 (0.032084) | 0.551470 / 0.540337 (0.011132) | 0.727876 / 1.386936 (-0.659060) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#61b19eea7fc5cf484e8cdf41d6ae035f94d8a671 \"CML watermark\")\n" ]
2023-07-27T18:54:47
2023-07-27T19:06:13
null
CONTRIBUTOR
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6085", "html_url": "https://github.com/huggingface/datasets/pull/6085", "diff_url": "https://github.com/huggingface/datasets/pull/6085.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6085.patch", "merged_at": null }
Testing `ds = load_dataset("audiofolder", data_files="s3://datasets.huggingface.co/SpeechCommands/v0.01/v0.01_test.tar.gz", storage_options={"anon": True})` and trying to fix the issues raised by `fsspec` ... TODO: fix ``` self.session = aiobotocore.session.AioSession(**self.kwargs) TypeError: __init__() got an unexpected keyword argument 'hf' ``` by "preparing `storage_options`" for the `fsspec` head/get
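As a hedged sketch of the "preparing `storage_options`" idea in this PR body — `_prepare_storage_options` is a hypothetical helper, not the PR's actual implementation — one way to keep the injected `hf` key from reaching aiobotocore is to filter the options per protocol before handing them to `fsspec`:

```python
# Minimal sketch, assuming the "hf" entry is what `datasets` injects into
# storage_options; s3fs forwards unknown kwargs to aiobotocore's AioSession,
# which rejects keys it does not recognize (hence the TypeError above).
# Requires s3fs to be installed for the s3:// protocol.
import fsspec


def _prepare_storage_options(protocol: str, storage_options: dict) -> dict:
    """Hypothetical helper: drop keys the target filesystem cannot accept."""
    if protocol == "s3":
        return {k: v for k, v in storage_options.items() if k != "hf"}
    return storage_options


storage_options = {"anon": True, "hf": {"token": None}}  # assumed shape of the injected key
fs, _, paths = fsspec.get_fs_token_paths(
    "s3://datasets.huggingface.co/SpeechCommands/v0.01/v0.01_test.tar.gz",
    storage_options=_prepare_storage_options("s3", storage_options),
)
print(paths)  # the resolved S3 path(s), fetched anonymously
```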
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6085/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6085/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6084
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6084/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6084/comments
https://api.github.com/repos/huggingface/datasets/issues/6084/events
https://github.com/huggingface/datasets/issues/6084
1,824,896,761
I_kwDODunzps5sxbb5
6,084
Changing pixel values of images in the Winoground dataset
{ "login": "ZitengWangNYU", "id": 90359895, "node_id": "MDQ6VXNlcjkwMzU5ODk1", "avatar_url": "https://avatars.githubusercontent.com/u/90359895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZitengWangNYU", "html_url": "https://github.com/ZitengWangNYU", "followers_url": "https://api.github.com/users/ZitengWangNYU/followers", "following_url": "https://api.github.com/users/ZitengWangNYU/following{/other_user}", "gists_url": "https://api.github.com/users/ZitengWangNYU/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZitengWangNYU/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZitengWangNYU/subscriptions", "organizations_url": "https://api.github.com/users/ZitengWangNYU/orgs", "repos_url": "https://api.github.com/users/ZitengWangNYU/repos", "events_url": "https://api.github.com/users/ZitengWangNYU/events{/privacy}", "received_events_url": "https://api.github.com/users/ZitengWangNYU/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-07-27T17:55:35
2023-07-27T17:55:35
null
NONE
null
null
null
Hi, as I followed the instructions, with the latest "datasets" version: `from datasets import load_dataset` and `examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)`, I got slightly different datasets in Colab and in my HPC environment. Specifically, the pixel values of the images are slightly different. I thought it was due to a package version difference, but this morning I found that my Winoground dataset in Colab had become the same as the one in my HPC environment. The dataset in Colab used to produce the correct result, but now that is gone as well. Can you help me with this? What causes the datasets to have the wrong pixel values?
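A minimal debugging sketch for a report like this (not from the issue; the `test` split and `image_0` column names are assumptions about the Winoground layout): hashing the decoded pixels in each environment shows directly whether the arrays differ, and printing the Pillow version helps separate download differences from decoder differences.

```python
# Sketch for comparing pixel values across environments; run the same
# snippet in Colab and on the HPC node and compare the printed values.
import hashlib

import numpy as np
import PIL
from datasets import load_dataset

# "<YOUR USER ACCESS TOKEN>" is the same placeholder as in the snippet above.
examples = load_dataset("facebook/winoground", use_auth_token="<YOUR USER ACCESS TOKEN>")

img = examples["test"][0]["image_0"]  # decoded as a PIL image (column name assumed)
digest = hashlib.sha256(np.asarray(img).tobytes()).hexdigest()
print(PIL.__version__, digest)  # differing digests across environments confirm the mismatch
```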
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6084/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6084/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6083
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6083/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6083/comments
https://api.github.com/repos/huggingface/datasets/issues/6083/events
https://github.com/huggingface/datasets/pull/6083
1,824,832,348
PR_kwDODunzps5WkgAI
6,083
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6083). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006049 / 0.011353 (-0.005304) | 0.003698 / 0.011008 (-0.007310) | 0.080614 / 0.038508 (0.042106) | 0.060955 / 0.023109 (0.037846) | 0.337119 / 0.275898 (0.061221) | 0.369544 / 0.323480 (0.046064) | 0.004681 / 0.007986 (-0.003305) | 0.002892 / 0.004328 (-0.001436) | 0.062907 / 0.004250 (0.058657) | 0.049235 / 0.037052 (0.012183) | 0.338842 / 0.258489 (0.080353) | 0.371172 / 0.293841 (0.077331) | 0.027016 / 0.128546 (-0.101530) | 0.007940 / 0.075646 (-0.067706) | 0.260902 / 0.419271 (-0.158369) | 0.044566 / 0.043533 (0.001034) | 0.342354 / 0.255139 (0.087215) | 0.359829 / 0.283200 (0.076629) | 0.020801 / 0.141683 (-0.120881) | 1.444111 / 1.452155 (-0.008044) | 1.515595 / 1.492716 (0.022879) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183446 / 0.018006 (0.165439) | 0.437071 / 0.000490 (0.436581) | 0.003124 / 0.000200 (0.002924) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023760 / 0.037411 (-0.013651) | 0.072812 / 0.014526 (0.058286) | 0.082790 / 0.176557 (-0.093766) | 0.146330 / 0.737135 (-0.590805) | 0.084469 / 0.296338 (-0.211870) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.395215 / 0.215209 (0.180006) | 3.953023 / 2.077655 (1.875369) | 1.914268 / 1.504120 (0.410148) | 1.710195 / 1.541195 (0.169001) | 1.782594 / 1.468490 (0.314104) | 0.503651 / 4.584777 (-4.081126) | 3.039656 / 3.745712 (-0.706056) | 4.364691 / 5.269862 (-0.905171) | 2.597762 / 4.565676 (-1.967915) | 0.057384 / 0.424275 (-0.366891) | 0.006419 / 0.007607 (-0.001188) | 0.467214 / 0.226044 (0.241169) | 4.661425 / 2.268929 (2.392497) | 2.341957 / 55.444624 (-53.102667) | 1.977598 / 6.876477 (-4.898878) | 2.178005 / 2.142072 (0.035933) | 0.588492 / 4.805227 (-4.216735) | 0.124972 / 6.500664 (-6.375692) | 0.060902 / 0.075469 (-0.014567) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243092 / 1.841788 (-0.598695) | 18.369971 / 8.074308 (10.295663) | 13.939700 / 10.191392 (3.748308) | 0.149275 / 0.680424 (-0.531149) | 0.016873 / 0.534201 (-0.517328) | 0.334245 / 0.579283 (-0.245038) | 0.353832 / 0.434364 (-0.080532) | 0.382720 / 0.540337 (-0.157617) | 0.534634 / 1.386936 (-0.852302) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005933 / 0.011353 (-0.005420) | 0.003695 / 0.011008 (-0.007313) | 0.063457 / 0.038508 (0.024949) | 0.062347 / 0.023109 (0.039238) | 0.412370 / 0.275898 (0.136472) | 0.450399 / 0.323480 (0.126920) | 0.004627 / 0.007986 (-0.003358) | 0.002822 / 0.004328 (-0.001507) | 0.063819 / 0.004250 (0.059569) | 0.049154 / 0.037052 (0.012101) | 0.428196 / 0.258489 (0.169707) | 0.464109 / 0.293841 (0.170268) | 0.026967 / 0.128546 (-0.101579) | 0.007876 / 0.075646 (-0.067770) | 0.068479 / 0.419271 (-0.350793) | 0.041080 / 0.043533 (-0.002453) | 0.399817 / 0.255139 (0.144678) | 0.426900 / 0.283200 (0.143701) | 0.019931 / 0.141683 (-0.121752) | 1.461642 / 1.452155 (0.009487) | 1.529314 / 1.492716 (0.036598) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230256 / 0.018006 (0.212249) | 0.423442 / 0.000490 (0.422952) | 0.002492 / 0.000200 
(0.002292) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025798 / 0.037411 (-0.011613) | 0.077361 / 0.014526 (0.062836) | 0.088454 / 0.176557 (-0.088102) | 0.142137 / 0.737135 (-0.594998) | 0.088213 / 0.296338 (-0.208125) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417656 / 0.215209 (0.202447) | 4.157095 / 2.077655 (2.079440) | 2.132863 / 1.504120 (0.628743) | 1.967220 / 1.541195 (0.426025) | 2.020505 / 1.468490 (0.552015) | 0.496835 / 4.584777 (-4.087942) | 2.989251 / 3.745712 (-0.756462) | 2.849315 / 5.269862 (-2.420546) | 1.848941 / 4.565676 (-2.716736) | 0.057307 / 0.424275 (-0.366968) | 0.006825 / 0.007607 (-0.000782) | 0.489103 / 0.226044 (0.263059) | 4.904776 / 2.268929 (2.635847) | 2.593914 / 55.444624 (-52.850710) | 2.253384 / 6.876477 (-4.623093) | 2.426384 / 2.142072 (0.284312) | 0.592467 / 4.805227 (-4.212760) | 0.126122 / 6.500664 (-6.374542) | 0.063160 / 0.075469 (-0.012309) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.313020 / 1.841788 (-0.528768) | 18.343984 / 8.074308 (10.269676) | 13.763060 / 10.191392 (3.571668) | 0.146312 / 0.680424 (-0.534111) | 0.016980 / 0.534201 (-0.517221) | 0.339572 / 0.579283 (-0.239711) | 0.351310 / 0.434364 (-0.083054) | 0.397616 / 0.540337 (-0.142721) | 0.536879 / 1.386936 (-0.850057) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#73bed12ecda17d1573fd3bf73ed5db24d3622f86 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated 
after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009979 / 0.011353 (-0.001374) | 0.005024 / 0.011008 (-0.005984) | 0.096566 / 0.038508 (0.058058) | 0.081181 / 0.023109 (0.058072) | 0.398415 / 0.275898 (0.122517) | 0.513971 / 0.323480 (0.190491) | 0.006716 / 0.007986 (-0.001269) | 0.004350 / 0.004328 (0.000022) | 0.071418 / 0.004250 (0.067168) | 0.065002 / 0.037052 (0.027949) | 0.424791 / 0.258489 (0.166302) | 0.442369 / 0.293841 (0.148528) | 0.054540 / 0.128546 (-0.074007) | 0.014067 / 0.075646 (-0.061580) | 0.368930 / 0.419271 (-0.050341) | 0.082468 / 0.043533 (0.038935) | 0.419875 / 0.255139 (0.164736) | 0.508308 / 0.283200 (0.225108) | 0.050411 / 0.141683 (-0.091272) | 1.582271 / 1.452155 (0.130116) | 1.842033 / 1.492716 (0.349317) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290427 / 0.018006 (0.272420) | 0.594736 / 0.000490 (0.594246) | 0.007058 / 0.000200 (0.006858) | 0.000149 / 0.000054 (0.000095) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027085 / 0.037411 (-0.010326) | 0.087626 / 0.014526 (0.073101) | 0.094299 / 0.176557 (-0.082257) | 0.160169 / 0.737135 (-0.576966) | 0.101474 / 0.296338 (-0.194864) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.545845 / 0.215209 (0.330636) | 5.674389 / 2.077655 (3.596734) | 2.489065 / 1.504120 (0.984945) | 2.166674 / 1.541195 (0.625479) | 2.166925 / 1.468490 (0.698434) | 0.791244 / 4.584777 (-3.793533) | 4.944878 / 3.745712 (1.199165) | 4.121628 / 5.269862 (-1.148234) | 2.701262 / 4.565676 (-1.864415) | 0.087609 / 0.424275 (-0.336666) | 0.006945 / 0.007607 (-0.000662) | 0.668478 / 0.226044 (0.442434) | 6.552813 / 2.268929 (4.283885) | 3.164698 / 55.444624 (-52.279927) | 2.447333 / 6.876477 (-4.429144) | 2.608271 / 2.142072 (0.466198) | 0.954202 / 4.805227 (-3.851025) | 0.187730 / 6.500664 (-6.312934) | 0.063229 / 0.075469 (-0.012240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.461042 / 1.841788 (-0.380746) | 21.601409 / 8.074308 (13.527101) | 18.553604 / 10.191392 (8.362212) | 0.234571 / 0.680424 (-0.445853) | 0.027119 / 0.534201 (-0.507082) | 0.423448 / 0.579283 (-0.155835) | 
0.556397 / 0.434364 (0.122033) | 0.493958 / 0.540337 (-0.046379) | 0.711345 / 1.386936 (-0.675591) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008637 / 0.011353 (-0.002716) | 0.014450 / 0.011008 (0.003442) | 0.084135 / 0.038508 (0.045627) | 0.080513 / 0.023109 (0.057403) | 0.557941 / 0.275898 (0.282042) | 0.563199 / 0.323480 (0.239719) | 0.006475 / 0.007986 (-0.001510) | 0.004407 / 0.004328 (0.000078) | 0.088537 / 0.004250 (0.084287) | 0.060871 / 0.037052 (0.023819) | 0.593077 / 0.258489 (0.334588) | 0.615572 / 0.293841 (0.321732) | 0.050157 / 0.128546 (-0.078389) | 0.014313 / 0.075646 (-0.061333) | 0.091784 / 0.419271 (-0.327487) | 0.065649 / 0.043533 (0.022116) | 0.532569 / 0.255139 (0.277430) | 0.580775 / 0.283200 (0.297575) | 0.036434 / 0.141683 (-0.105249) | 2.080051 / 1.452155 (0.627896) | 1.907430 / 1.492716 (0.414713) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.297763 / 0.018006 (0.279757) | 0.670408 / 0.000490 (0.669918) | 0.000467 / 0.000200 (0.000267) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030297 / 0.037411 (-0.007114) | 0.100310 / 0.014526 (0.085784) | 0.113158 / 0.176557 (-0.063398) | 0.149599 / 0.737135 (-0.587536) | 0.102620 / 0.296338 (-0.193718) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616588 / 0.215209 (0.401379) | 6.572262 / 2.077655 (4.494608) | 2.830748 / 1.504120 
(1.326628) | 2.478441 / 1.541195 (0.937246) | 2.573017 / 1.468490 (1.104527) | 0.844154 / 4.584777 (-3.740623) | 5.161625 / 3.745712 (1.415913) | 4.541114 / 5.269862 (-0.728748) | 2.907804 / 4.565676 (-1.657872) | 0.097044 / 0.424275 (-0.327231) | 0.008692 / 0.007607 (0.001085) | 0.806640 / 0.226044 (0.580595) | 7.620521 / 2.268929 (5.351593) | 3.587100 / 55.444624 (-51.857524) | 2.901319 / 6.876477 (-3.975157) | 3.091288 / 2.142072 (0.949215) | 1.056109 / 4.805227 (-3.749118) | 0.209860 / 6.500664 (-6.290804) | 0.079575 / 0.075469 (0.004106) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.966194 / 1.841788 (0.124407) | 28.040515 / 8.074308 (19.966207) | 25.848647 / 10.191392 (15.657255) | 0.255472 / 0.680424 (-0.424951) | 0.036154 / 0.534201 (-0.498046) | 0.515168 / 0.579283 (-0.064115) | 0.696092 / 0.434364 (0.261728) | 0.602712 / 0.540337 (0.062374) | 0.781091 / 1.386936 (-0.605845) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6f641aca7fbb1f21da48c087a5c10e76f4c6be35 \"CML watermark\")\n" ]
2023-07-27T17:10:41
2023-07-27T17:22:05
2023-07-27T17:11:01
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6083", "html_url": "https://github.com/huggingface/datasets/pull/6083", "diff_url": "https://github.com/huggingface/datasets/pull/6083.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6083.patch", "merged_at": "2023-07-27T17:11:01" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6083/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6082
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6082/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6082/comments
https://api.github.com/repos/huggingface/datasets/issues/6082/events
https://github.com/huggingface/datasets/pull/6082
1,824,819,672
PR_kwDODunzps5WkdIn
6,082
Release: 2.14.1
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6082). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007215 / 0.011353 (-0.004138) | 0.004101 / 0.011008 (-0.006907) | 0.085884 / 0.038508 (0.047376) | 0.085375 / 0.023109 (0.062266) | 0.351610 / 0.275898 (0.075712) | 0.399284 / 0.323480 (0.075804) | 0.005598 / 0.007986 (-0.002388) | 0.003405 / 0.004328 (-0.000923) | 0.064906 / 0.004250 (0.060656) | 0.059000 / 0.037052 (0.021948) | 0.354589 / 0.258489 (0.096100) | 0.406070 / 0.293841 (0.112229) | 0.031627 / 0.128546 (-0.096919) | 0.008597 / 0.075646 (-0.067049) | 0.291050 / 0.419271 (-0.128221) | 0.054120 / 0.043533 (0.010587) | 0.366242 / 0.255139 (0.111103) | 0.375975 / 0.283200 (0.092776) | 0.025608 / 0.141683 (-0.116074) | 1.473514 / 1.452155 (0.021359) | 1.543226 / 1.492716 (0.050510) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198068 / 0.018006 (0.180062) | 0.450583 / 0.000490 (0.450093) | 0.005368 / 0.000200 (0.005168) | 0.000102 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028323 / 0.037411 (-0.009089) | 0.089058 / 0.014526 (0.074533) | 0.097718 / 0.176557 (-0.078839) | 0.154546 / 0.737135 (-0.582590) | 0.098224 / 0.296338 (-0.198115) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.386292 / 0.215209 (0.171083) | 3.846222 / 2.077655 (1.768567) | 1.858695 / 1.504120 (0.354575) | 1.685885 / 1.541195 (0.144690) | 1.790727 / 1.468490 (0.322237) | 0.486771 / 4.584777 (-4.098006) | 3.658363 / 3.745712 (-0.087349) | 5.345236 / 5.269862 (0.075374) | 3.215942 / 4.565676 (-1.349734) | 0.057580 / 0.424275 (-0.366695) | 0.007382 / 0.007607 (-0.000225) | 0.464174 / 0.226044 (0.238129) | 4.640848 / 2.268929 (2.371920) | 2.383152 / 55.444624 (-53.061472) | 2.013288 / 6.876477 (-4.863188) | 2.244142 / 2.142072 (0.102069) | 0.585408 / 4.805227 (-4.219819) | 0.134698 / 6.500664 (-6.365966) | 0.060641 / 0.075469 (-0.014828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.258414 / 1.841788 (-0.583374) | 19.825848 / 8.074308 (11.751540) | 14.644025 / 10.191392 (4.452633) | 0.169198 / 0.680424 (-0.511226) | 0.018180 / 0.534201 (-0.516021) | 0.395100 / 0.579283 (-0.184183) | 0.411543 / 0.434364 (-0.022821) | 0.463364 / 0.540337 (-0.076973) | 0.628613 / 1.386936 (-0.758323) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006860 / 0.011353 (-0.004493) | 0.003981 / 0.011008 (-0.007027) | 0.065589 / 0.038508 (0.027081) | 0.082460 / 0.023109 (0.059350) | 0.362980 / 0.275898 (0.087082) | 0.394837 / 0.323480 (0.071357) | 0.005298 / 0.007986 (-0.002688) | 0.003372 / 0.004328 (-0.000957) | 0.064918 / 0.004250 (0.060667) | 0.058033 / 0.037052 (0.020981) | 0.367259 / 0.258489 (0.108770) | 0.403122 / 0.293841 (0.109281) | 0.031566 / 0.128546 (-0.096980) | 0.008583 / 0.075646 (-0.067063) | 0.071287 / 0.419271 (-0.347984) | 0.049586 / 0.043533 (0.006053) | 0.359252 / 0.255139 (0.104113) | 0.378519 / 0.283200 (0.095319) | 0.023412 / 0.141683 (-0.118271) | 1.494522 / 1.452155 (0.042367) | 1.559176 / 1.492716 (0.066460) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228396 / 0.018006 (0.210390) | 0.441865 / 0.000490 (0.441375) | 0.000395 / 0.000200 
(0.000195) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031169 / 0.037411 (-0.006242) | 0.093427 / 0.014526 (0.078901) | 0.100673 / 0.176557 (-0.075883) | 0.152817 / 0.737135 (-0.584319) | 0.102226 / 0.296338 (-0.194112) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437032 / 0.215209 (0.221823) | 4.376078 / 2.077655 (2.298423) | 2.346928 / 1.504120 (0.842808) | 2.168573 / 1.541195 (0.627378) | 2.261024 / 1.468490 (0.792534) | 0.497080 / 4.584777 (-4.087697) | 3.594402 / 3.745712 (-0.151310) | 5.090361 / 5.269862 (-0.179501) | 3.034750 / 4.565676 (-1.530927) | 0.058538 / 0.424275 (-0.365737) | 0.007892 / 0.007607 (0.000285) | 0.517643 / 0.226044 (0.291598) | 5.173174 / 2.268929 (2.904246) | 2.825917 / 55.444624 (-52.618708) | 2.542593 / 6.876477 (-4.333884) | 2.716290 / 2.142072 (0.574218) | 0.598253 / 4.805227 (-4.206974) | 0.135610 / 6.500664 (-6.365054) | 0.062113 / 0.075469 (-0.013356) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.389554 / 1.841788 (-0.452233) | 20.412868 / 8.074308 (12.338560) | 14.539988 / 10.191392 (4.348596) | 0.162046 / 0.680424 (-0.518378) | 0.018508 / 0.534201 (-0.515693) | 0.398840 / 0.579283 (-0.180443) | 0.400902 / 0.434364 (-0.033462) | 0.463647 / 0.540337 (-0.076691) | 0.612921 / 1.386936 (-0.774015) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#45bef1810d9341ba4cb27547d748fddb97843792 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated 
after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005943 / 0.011353 (-0.005410) | 0.003582 / 0.011008 (-0.007426) | 0.080030 / 0.038508 (0.041522) | 0.057458 / 0.023109 (0.034349) | 0.390783 / 0.275898 (0.114885) | 0.430926 / 0.323480 (0.107446) | 0.003207 / 0.007986 (-0.004778) | 0.003592 / 0.004328 (-0.000737) | 0.062468 / 0.004250 (0.058217) | 0.046739 / 0.037052 (0.009687) | 0.394343 / 0.258489 (0.135854) | 0.435912 / 0.293841 (0.142071) | 0.026812 / 0.128546 (-0.101734) | 0.007954 / 0.075646 (-0.067692) | 0.261415 / 0.419271 (-0.157857) | 0.044665 / 0.043533 (0.001132) | 0.403454 / 0.255139 (0.148315) | 0.418946 / 0.283200 (0.135747) | 0.022247 / 0.141683 (-0.119436) | 1.456387 / 1.452155 (0.004232) | 1.508234 / 1.492716 (0.015518) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.182487 / 0.018006 (0.164480) | 0.416343 / 0.000490 (0.415854) | 0.001404 / 0.000200 (0.001204) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023643 / 0.037411 (-0.013768) | 0.071798 / 0.014526 (0.057272) | 0.083623 / 0.176557 (-0.092933) | 0.146023 / 0.737135 (-0.591112) | 0.083094 / 0.296338 (-0.213245) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417484 / 0.215209 (0.202275) | 4.157393 / 2.077655 (2.079738) | 1.950438 / 1.504120 (0.446318) | 1.766639 / 1.541195 (0.225444) | 1.807382 / 1.468490 (0.338892) | 0.496061 / 4.584777 (-4.088716) | 2.975001 / 3.745712 (-0.770711) | 3.340608 / 5.269862 (-1.929254) | 2.236293 / 4.565676 (-2.329384) | 0.056946 / 0.424275 (-0.367329) | 0.006506 / 0.007607 (-0.001101) | 0.480377 / 0.226044 (0.254332) | 4.788525 / 2.268929 (2.519597) | 2.430139 / 55.444624 (-53.014485) | 2.154145 / 6.876477 (-4.722332) | 2.321623 / 2.142072 (0.179551) | 0.584040 / 4.805227 (-4.221188) | 0.124508 / 6.500664 (-6.376156) | 0.060828 / 0.075469 (-0.014641) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.201641 / 1.841788 (-0.640146) | 18.066232 / 8.074308 (9.991924) | 14.022304 / 10.191392 (3.830912) | 0.146573 / 0.680424 (-0.533850) | 0.016892 / 0.534201 (-0.517308) | 0.333259 / 0.579283 (-0.246024) | 
0.357795 / 0.434364 (-0.076568) | 0.391265 / 0.540337 (-0.149072) | 0.551378 / 1.386936 (-0.835558) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005706 / 0.011353 (-0.005647) | 0.003448 / 0.011008 (-0.007560) | 0.063146 / 0.038508 (0.024638) | 0.056292 / 0.023109 (0.033183) | 0.355533 / 0.275898 (0.079635) | 0.394996 / 0.323480 (0.071517) | 0.004270 / 0.007986 (-0.003716) | 0.002790 / 0.004328 (-0.001538) | 0.063033 / 0.004250 (0.058783) | 0.044684 / 0.037052 (0.007631) | 0.370621 / 0.258489 (0.112132) | 0.401074 / 0.293841 (0.107233) | 0.026737 / 0.128546 (-0.101809) | 0.007872 / 0.075646 (-0.067774) | 0.068815 / 0.419271 (-0.350457) | 0.040976 / 0.043533 (-0.002557) | 0.370733 / 0.255139 (0.115594) | 0.387418 / 0.283200 (0.104218) | 0.018854 / 0.141683 (-0.122829) | 1.479834 / 1.452155 (0.027680) | 1.536388 / 1.492716 (0.043672) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222125 / 0.018006 (0.204119) | 0.408007 / 0.000490 (0.407517) | 0.000367 / 0.000200 (0.000167) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025100 / 0.037411 (-0.012311) | 0.076617 / 0.014526 (0.062091) | 0.088311 / 0.176557 (-0.088246) | 0.143785 / 0.737135 (-0.593350) | 0.088349 / 0.296338 (-0.207989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419246 / 0.215209 (0.204037) | 4.172413 / 2.077655 (2.094759) | 2.199355 / 1.504120 
(0.695235) | 2.025158 / 1.541195 (0.483963) | 2.074491 / 1.468490 (0.606001) | 0.495893 / 4.584777 (-4.088884) | 2.998858 / 3.745712 (-0.746854) | 2.770531 / 5.269862 (-2.499331) | 1.817497 / 4.565676 (-2.748179) | 0.057317 / 0.424275 (-0.366958) | 0.006723 / 0.007607 (-0.000884) | 0.491062 / 0.226044 (0.265017) | 4.906155 / 2.268929 (2.637226) | 2.654916 / 55.444624 (-52.789708) | 2.299873 / 6.876477 (-4.576604) | 2.451438 / 2.142072 (0.309366) | 0.585048 / 4.805227 (-4.220179) | 0.124778 / 6.500664 (-6.375886) | 0.062067 / 0.075469 (-0.013402) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.298239 / 1.841788 (-0.543549) | 18.090238 / 8.074308 (10.015930) | 13.822568 / 10.191392 (3.631176) | 0.130560 / 0.680424 (-0.549864) | 0.016662 / 0.534201 (-0.517539) | 0.333337 / 0.579283 (-0.245946) | 0.348493 / 0.434364 (-0.085871) | 0.386049 / 0.540337 (-0.154289) | 0.511156 / 1.386936 (-0.875780) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#029956a347b0306cd27f693e12cf9a82acf4ef80 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006014 / 0.011353 (-0.005339) | 0.003623 / 0.011008 (-0.007385) | 0.080500 / 0.038508 (0.041992) | 0.057713 / 0.023109 (0.034603) | 0.325976 / 0.275898 (0.050078) | 0.359986 / 0.323480 (0.036506) | 0.004709 / 0.007986 (-0.003277) | 0.002933 / 0.004328 (-0.001395) | 0.063457 / 0.004250 (0.059207) | 0.047514 / 0.037052 (0.010462) | 0.331629 / 0.258489 (0.073140) | 0.382048 / 0.293841 (0.088207) | 0.026949 / 0.128546 (-0.101597) | 0.008043 / 0.075646 (-0.067604) | 0.262152 / 0.419271 (-0.157119) | 0.045271 / 0.043533 (0.001738) | 0.333355 / 0.255139 (0.078216) | 0.347996 / 0.283200 (0.064796) | 0.020814 / 0.141683 (-0.120868) | 1.460723 / 1.452155 (0.008568) | 1.488845 / 1.492716 (-0.003872) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193735 / 0.018006 
(0.175728) | 0.431433 / 0.000490 (0.430943) | 0.002494 / 0.000200 (0.002294) | 0.000066 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023762 / 0.037411 (-0.013650) | 0.072680 / 0.014526 (0.058154) | 0.081687 / 0.176557 (-0.094869) | 0.143224 / 0.737135 (-0.593911) | 0.083083 / 0.296338 (-0.213255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397393 / 0.215209 (0.182184) | 3.954643 / 2.077655 (1.876989) | 1.950038 / 1.504120 (0.445919) | 1.760551 / 1.541195 (0.219357) | 1.871165 / 1.468490 (0.402675) | 0.508645 / 4.584777 (-4.076132) | 3.114379 / 3.745712 (-0.631333) | 3.474554 / 5.269862 (-1.795307) | 2.090126 / 4.565676 (-2.475551) | 0.058008 / 0.424275 (-0.366267) | 0.006465 / 0.007607 (-0.001142) | 0.475009 / 0.226044 (0.248965) | 4.767981 / 2.268929 (2.499052) | 2.372050 / 55.444624 (-53.072574) | 2.038094 / 6.876477 (-4.838383) | 2.072819 / 2.142072 (-0.069253) | 0.591913 / 4.805227 (-4.213314) | 0.125002 / 6.500664 (-6.375662) | 0.060055 / 0.075469 (-0.015414) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.234171 / 1.841788 (-0.607617) | 18.121476 / 8.074308 (10.047168) | 13.727313 / 10.191392 (3.535921) | 0.136021 / 0.680424 (-0.544402) | 0.016505 / 0.534201 (-0.517696) | 0.331400 / 0.579283 (-0.247883) | 0.346019 / 0.434364 (-0.088345) | 0.378985 / 0.540337 (-0.161353) | 0.522606 / 1.386936 (-0.864330) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | 
write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006035 / 0.011353 (-0.005318) | 0.003584 / 0.011008 (-0.007425) | 0.061953 / 0.038508 (0.023445) | 0.059416 / 0.023109 (0.036307) | 0.359380 / 0.275898 (0.083482) | 0.396842 / 0.323480 (0.073363) | 0.004716 / 0.007986 (-0.003269) | 0.002825 / 0.004328 (-0.001504) | 0.061697 / 0.004250 (0.057447) | 0.049009 / 0.037052 (0.011956) | 0.363099 / 0.258489 (0.104610) | 0.403672 / 0.293841 (0.109831) | 0.027722 / 0.128546 (-0.100824) | 0.007966 / 0.075646 (-0.067680) | 0.067455 / 0.419271 (-0.351816) | 0.042530 / 0.043533 (-0.001003) | 0.361257 / 0.255139 (0.106118) | 0.388957 / 0.283200 (0.105758) | 0.021845 / 0.141683 (-0.119838) | 1.431989 / 1.452155 (-0.020166) | 1.503131 / 1.492716 (0.010415) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241493 / 0.018006 (0.223487) | 0.429319 / 0.000490 (0.428829) | 0.002604 / 0.000200 (0.002404) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026227 / 0.037411 (-0.011184) | 0.077177 / 0.014526 (0.062651) | 0.085840 / 0.176557 (-0.090717) | 0.142280 / 0.737135 (-0.594855) | 0.088465 / 0.296338 (-0.207873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434912 / 0.215209 (0.219703) | 4.339664 / 2.077655 (2.262009) | 2.242495 / 1.504120 (0.738375) | 2.091353 / 1.541195 (0.550159) | 2.161425 / 1.468490 (0.692935) | 0.501647 / 4.584777 (-4.083130) | 3.075326 / 3.745712 (-0.670386) | 4.091557 / 5.269862 (-1.178304) | 2.776425 / 4.565676 (-1.789251) | 0.057338 / 0.424275 (-0.366937) | 0.006767 / 0.007607 (-0.000840) | 0.506882 / 0.226044 (0.280837) | 5.059074 / 2.268929 (2.790146) | 2.706665 / 55.444624 (-52.737959) | 2.370253 / 6.876477 (-4.506224) | 2.505421 / 2.142072 (0.363348) | 0.590289 / 4.805227 (-4.214938) | 0.125990 / 6.500664 (-6.374674) | 0.062778 / 0.075469 (-0.012691) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.361287 / 1.841788 (-0.480501) | 18.500726 / 8.074308 (10.426418) | 13.844459 / 10.191392 (3.653067) | 0.144416 / 0.680424 (-0.536008) | 0.016987 / 0.534201 (-0.517214) | 0.336237 / 0.579283 (-0.243046) | 0.357116 / 0.434364 (-0.077248) | 0.402062 / 0.540337 (-0.138275) | 0.543066 / 1.386936 
(-0.843870) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#029956a347b0306cd27f693e12cf9a82acf4ef80 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007559 / 0.011353 (-0.003794) | 0.004379 / 0.011008 (-0.006629) | 0.089702 / 0.038508 (0.051194) | 0.065104 / 0.023109 (0.041995) | 0.362016 / 0.275898 (0.086118) | 0.376768 / 0.323480 (0.053288) | 0.006538 / 0.007986 (-0.001447) | 0.004167 / 0.004328 (-0.000161) | 0.074138 / 0.004250 (0.069888) | 0.052753 / 0.037052 (0.015701) | 0.366367 / 0.258489 (0.107878) | 0.389121 / 0.293841 (0.095280) | 0.042820 / 0.128546 (-0.085727) | 0.012560 / 0.075646 (-0.063086) | 0.359235 / 0.419271 (-0.060037) | 0.074250 / 0.043533 (0.030718) | 0.384051 / 0.255139 (0.128912) | 0.385450 / 0.283200 (0.102250) | 0.046270 / 0.141683 (-0.095413) | 1.593275 / 1.452155 (0.141120) | 1.704207 / 1.492716 (0.211490) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249390 / 0.018006 (0.231384) | 0.614347 / 0.000490 (0.613857) | 0.012641 / 0.000200 (0.012441) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029099 / 0.037411 (-0.008312) | 0.090966 / 0.014526 (0.076440) | 0.102273 / 0.176557 (-0.074284) | 0.167564 / 0.737135 (-0.569571) | 0.106118 / 0.296338 (-0.190220) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.536122 / 
0.215209 (0.320913) | 5.448464 / 2.077655 (3.370809) | 2.461977 / 1.504120 (0.957857) | 2.081506 / 1.541195 (0.540311) | 2.091509 / 1.468490 (0.623019) | 0.810307 / 4.584777 (-3.774470) | 5.161304 / 3.745712 (1.415592) | 4.525070 / 5.269862 (-0.744792) | 2.886313 / 4.565676 (-1.679363) | 0.093992 / 0.424275 (-0.330283) | 0.008516 / 0.007607 (0.000909) | 0.691978 / 0.226044 (0.465934) | 6.834665 / 2.268929 (4.565737) | 3.284355 / 55.444624 (-52.160270) | 2.496803 / 6.876477 (-4.379674) | 2.814387 / 2.142072 (0.672315) | 0.985300 / 4.805227 (-3.819928) | 0.210343 / 6.500664 (-6.290321) | 0.075459 / 0.075469 (-0.000010) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.436073 / 1.841788 (-0.405714) | 22.722401 / 8.074308 (14.648093) | 19.988521 / 10.191392 (9.797129) | 0.229757 / 0.680424 (-0.450667) | 0.029672 / 0.534201 (-0.504529) | 0.479914 / 0.579283 (-0.099369) | 0.605106 / 0.434364 (0.170743) | 0.511668 / 0.540337 (-0.028670) | 0.800281 / 1.386936 (-0.586655) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008665 / 0.011353 (-0.002688) | 0.006009 / 0.011008 (-0.004999) | 0.073377 / 0.038508 (0.034869) | 0.077188 / 0.023109 (0.054079) | 0.451422 / 0.275898 (0.175524) | 0.484640 / 0.323480 (0.161160) | 0.006266 / 0.007986 (-0.001719) | 0.004129 / 0.004328 (-0.000200) | 0.063102 / 0.004250 (0.058851) | 0.064653 / 0.037052 (0.027601) | 0.439521 / 0.258489 (0.181032) | 0.458964 / 0.293841 (0.165123) | 0.046018 / 0.128546 (-0.082528) | 0.014109 / 0.075646 (-0.061537) | 0.095727 / 0.419271 (-0.323544) | 0.070133 / 0.043533 (0.026600) | 0.440143 / 0.255139 (0.185004) | 0.502468 / 0.283200 (0.219269) | 0.034582 / 0.141683 (-0.107101) | 1.656282 / 1.452155 (0.204127) | 1.784641 / 1.492716 (0.291925) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.303111 / 0.018006 (0.285105) | 0.599194 / 0.000490 (0.598705) | 0.000411 / 0.000200 (0.000211) | 0.000073 / 
0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033061 / 0.037411 (-0.004350) | 0.096073 / 0.014526 (0.081548) | 0.095347 / 0.176557 (-0.081209) | 0.161004 / 0.737135 (-0.576131) | 0.111544 / 0.296338 (-0.184794) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.615695 / 0.215209 (0.400486) | 5.794243 / 2.077655 (3.716588) | 2.594720 / 1.504120 (1.090600) | 2.566255 / 1.541195 (1.025060) | 2.573653 / 1.468490 (1.105163) | 0.873653 / 4.584777 (-3.711124) | 5.353323 / 3.745712 (1.607611) | 4.604974 / 5.269862 (-0.664887) | 2.901282 / 4.565676 (-1.664394) | 0.099614 / 0.424275 (-0.324661) | 0.010368 / 0.007607 (0.002761) | 0.775490 / 0.226044 (0.549446) | 7.245449 / 2.268929 (4.976520) | 3.740165 / 55.444624 (-51.704459) | 2.986132 / 6.876477 (-3.890345) | 3.092510 / 2.142072 (0.950438) | 1.022461 / 4.805227 (-3.782766) | 0.212137 / 6.500664 (-6.288527) | 0.084534 / 0.075469 (0.009065) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.687983 / 1.841788 (-0.153805) | 23.491808 / 8.074308 (15.417500) | 20.722165 / 10.191392 (10.530773) | 0.231011 / 0.680424 (-0.449413) | 0.028309 / 0.534201 (-0.505892) | 0.436911 / 0.579283 (-0.142372) | 0.583126 / 0.434364 (0.148762) | 0.559712 / 0.540337 (0.019374) | 0.820645 / 1.386936 (-0.566291) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#029956a347b0306cd27f693e12cf9a82acf4ef80 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence 
| read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006538 / 0.011353 (-0.004815) | 0.003952 / 0.011008 (-0.007056) | 0.084183 / 0.038508 (0.045675) | 0.070616 / 0.023109 (0.047507) | 0.320491 / 0.275898 (0.044593) | 0.352021 / 0.323480 (0.028541) | 0.005330 / 0.007986 (-0.002656) | 0.003400 / 0.004328 (-0.000928) | 0.066392 / 0.004250 (0.062141) | 0.052529 / 0.037052 (0.015477) | 0.329581 / 0.258489 (0.071092) | 0.374437 / 0.293841 (0.080596) | 0.031379 / 0.128546 (-0.097167) | 0.008576 / 0.075646 (-0.067070) | 0.288621 / 0.419271 (-0.130650) | 0.052748 / 0.043533 (0.009215) | 0.319911 / 0.255139 (0.064772) | 0.358169 / 0.283200 (0.074970) | 0.023128 / 0.141683 (-0.118555) | 1.479578 / 1.452155 (0.027424) | 1.566351 / 1.492716 (0.073635) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217616 / 0.018006 (0.199610) | 0.471546 / 0.000490 (0.471056) | 0.003880 / 0.000200 (0.003680) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027716 / 0.037411 (-0.009696) | 0.081718 / 0.014526 (0.067192) | 0.095457 / 0.176557 (-0.081100) | 0.150746 / 0.737135 (-0.586389) | 0.096061 / 0.296338 (-0.200277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406811 / 0.215209 (0.191602) | 4.062757 / 2.077655 (1.985103) | 2.060658 / 1.504120 (0.556538) | 1.870944 / 1.541195 (0.329749) | 1.908984 / 1.468490 (0.440493) | 0.489053 / 4.584777 (-4.095724) | 3.571038 / 3.745712 (-0.174674) | 3.255351 / 5.269862 (-2.014511) | 2.007078 / 4.565676 (-2.558599) | 0.057078 / 0.424275 (-0.367197) | 0.007240 / 0.007607 (-0.000367) | 0.485641 / 0.226044 (0.259596) | 4.841657 / 2.268929 (2.572729) | 2.569676 / 55.444624 (-52.874949) | 2.151119 / 6.876477 (-4.725357) | 2.330337 / 2.142072 (0.188265) | 0.581721 / 4.805227 (-4.223506) | 0.132591 / 6.500664 (-6.368073) | 0.060491 / 0.075469 (-0.014978) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.237699 / 1.841788 (-0.604089) | 19.460306 / 8.074308 (11.385998) | 14.123006 / 10.191392 (3.931614) | 0.155669 / 0.680424 (-0.524754) | 0.018385 / 0.534201 (-0.515816) | 0.393330 / 0.579283 (-0.185953) | 0.408890 / 0.434364 (-0.025474) | 
0.457348 / 0.540337 (-0.082989) | 0.640293 / 1.386936 (-0.746643) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006582 / 0.011353 (-0.004771) | 0.003950 / 0.011008 (-0.007059) | 0.064636 / 0.038508 (0.026128) | 0.077651 / 0.023109 (0.054541) | 0.365505 / 0.275898 (0.089607) | 0.393370 / 0.323480 (0.069890) | 0.005466 / 0.007986 (-0.002520) | 0.003314 / 0.004328 (-0.001014) | 0.064960 / 0.004250 (0.060710) | 0.057355 / 0.037052 (0.020302) | 0.377773 / 0.258489 (0.119284) | 0.408394 / 0.293841 (0.114553) | 0.031698 / 0.128546 (-0.096848) | 0.008575 / 0.075646 (-0.067071) | 0.070390 / 0.419271 (-0.348881) | 0.050035 / 0.043533 (0.006502) | 0.360461 / 0.255139 (0.105323) | 0.384862 / 0.283200 (0.101662) | 0.025380 / 0.141683 (-0.116303) | 1.484429 / 1.452155 (0.032275) | 1.542944 / 1.492716 (0.050227) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190193 / 0.018006 (0.172187) | 0.468996 / 0.000490 (0.468506) | 0.003012 / 0.000200 (0.002812) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031488 / 0.037411 (-0.005923) | 0.088673 / 0.014526 (0.074147) | 0.101886 / 0.176557 (-0.074670) | 0.156774 / 0.737135 (-0.580361) | 0.102818 / 0.296338 (-0.193520) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428019 / 0.215209 (0.212810) | 4.271369 / 2.077655 (2.193714) | 2.271530 / 1.504120 (0.767410) | 2.085172 / 1.541195 
(0.543977) | 2.143439 / 1.468490 (0.674949) | 0.493468 / 4.584777 (-4.091309) | 3.569030 / 3.745712 (-0.176683) | 4.777962 / 5.269862 (-0.491900) | 2.872115 / 4.565676 (-1.693562) | 0.058200 / 0.424275 (-0.366075) | 0.007657 / 0.007607 (0.000050) | 0.502874 / 0.226044 (0.276830) | 5.026721 / 2.268929 (2.757792) | 2.734301 / 55.444624 (-52.710324) | 2.396072 / 6.876477 (-4.480405) | 2.574322 / 2.142072 (0.432249) | 0.593855 / 4.805227 (-4.211373) | 0.135134 / 6.500664 (-6.365530) | 0.061491 / 0.075469 (-0.013978) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.320522 / 1.841788 (-0.521265) | 19.933221 / 8.074308 (11.858912) | 14.055921 / 10.191392 (3.864529) | 0.149620 / 0.680424 (-0.530804) | 0.018590 / 0.534201 (-0.515611) | 0.399550 / 0.579283 (-0.179733) | 0.410463 / 0.434364 (-0.023901) | 0.469872 / 0.540337 (-0.070465) | 0.616481 / 1.386936 (-0.770455) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#029956a347b0306cd27f693e12cf9a82acf4ef80 \"CML watermark\")\n" ]
2023-07-27T17:05:54
2023-07-31T06:32:16
2023-07-27T17:08:38
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6082", "html_url": "https://github.com/huggingface/datasets/pull/6082", "diff_url": "https://github.com/huggingface/datasets/pull/6082.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6082.patch", "merged_at": "2023-07-27T17:08:38" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6082/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6082/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6081
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6081/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6081/comments
https://api.github.com/repos/huggingface/datasets/issues/6081/events
https://github.com/huggingface/datasets/pull/6081
1,824,486,278
PR_kwDODunzps5WjU0k
6,081
Deprecate `Dataset.export`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006680 / 0.011353 (-0.004673) | 0.003987 / 0.011008 (-0.007021) | 0.084677 / 0.038508 (0.046169) | 0.076800 / 0.023109 (0.053691) | 0.358338 / 0.275898 (0.082440) | 0.386573 / 0.323480 (0.063094) | 0.005370 / 0.007986 (-0.002616) | 0.003323 / 0.004328 (-0.001005) | 0.064238 / 0.004250 (0.059988) | 0.057859 / 0.037052 (0.020806) | 0.355408 / 0.258489 (0.096919) | 0.388302 / 0.293841 (0.094461) | 0.030784 / 0.128546 (-0.097762) | 0.008381 / 0.075646 (-0.067266) | 0.287971 / 0.419271 (-0.131300) | 0.053078 / 0.043533 (0.009545) | 0.352719 / 0.255139 (0.097580) | 0.370319 / 0.283200 (0.087119) | 0.023064 / 0.141683 (-0.118619) | 1.480661 / 1.452155 (0.028507) | 1.555711 / 1.492716 (0.062995) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211289 / 0.018006 (0.193283) | 0.466957 / 0.000490 (0.466467) | 0.003760 / 0.000200 (0.003561) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028552 / 0.037411 (-0.008859) | 0.084469 / 0.014526 (0.069943) | 0.096027 / 0.176557 (-0.080529) | 0.152170 / 0.737135 (-0.584965) | 0.096513 / 0.296338 (-0.199825) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382940 / 0.215209 (0.167731) | 3.841735 / 2.077655 (1.764080) | 
1.850575 / 1.504120 (0.346455) | 1.676554 / 1.541195 (0.135360) | 1.765241 / 1.468490 (0.296751) | 0.482131 / 4.584777 (-4.102646) | 3.512739 / 3.745712 (-0.232973) | 3.977042 / 5.269862 (-1.292820) | 2.387568 / 4.565676 (-2.178109) | 0.056657 / 0.424275 (-0.367618) | 0.007283 / 0.007607 (-0.000324) | 0.468193 / 0.226044 (0.242149) | 4.704077 / 2.268929 (2.435149) | 2.373467 / 55.444624 (-53.071157) | 2.002470 / 6.876477 (-4.874007) | 2.228280 / 2.142072 (0.086208) | 0.576908 / 4.805227 (-4.228320) | 0.132000 / 6.500664 (-6.368664) | 0.060544 / 0.075469 (-0.014926) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256168 / 1.841788 (-0.585619) | 19.965458 / 8.074308 (11.891150) | 14.521435 / 10.191392 (4.330043) | 0.159156 / 0.680424 (-0.521268) | 0.018170 / 0.534201 (-0.516031) | 0.393019 / 0.579283 (-0.186264) | 0.415002 / 0.434364 (-0.019362) | 0.471810 / 0.540337 (-0.068528) | 0.658907 / 1.386936 (-0.728029) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006836 / 0.011353 (-0.004517) | 0.004067 / 0.011008 (-0.006942) | 0.066242 / 0.038508 (0.027734) | 0.078601 / 0.023109 (0.055491) | 0.369371 / 0.275898 (0.093473) | 0.402026 / 0.323480 (0.078546) | 0.006097 / 0.007986 (-0.001889) | 0.003337 / 0.004328 (-0.000991) | 0.065854 / 0.004250 (0.061603) | 0.057665 / 0.037052 (0.020612) | 0.379709 / 0.258489 (0.121219) | 0.406868 / 0.293841 (0.113027) | 0.031946 / 0.128546 (-0.096600) | 0.008691 / 0.075646 (-0.066955) | 0.071430 / 0.419271 (-0.347841) | 0.049518 / 0.043533 (0.005986) | 0.370439 / 0.255139 (0.115300) | 0.389235 / 0.283200 (0.106036) | 0.023730 / 0.141683 (-0.117953) | 1.509035 / 1.452155 (0.056880) | 1.548890 / 1.492716 (0.056173) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229264 / 0.018006 (0.211258) | 0.445801 / 0.000490 (0.445312) | 0.000363 / 0.000200 (0.000163) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032377 / 0.037411 (-0.005034) | 0.091082 / 0.014526 (0.076556) | 0.104816 / 0.176557 (-0.071740) | 0.161040 / 0.737135 (-0.576095) | 0.105165 / 0.296338 (-0.191173) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411012 / 0.215209 (0.195803) | 4.097256 / 2.077655 (2.019602) | 2.088686 / 1.504120 (0.584566) | 1.934429 / 1.541195 (0.393234) | 2.027387 / 1.468490 (0.558896) | 0.476262 / 4.584777 (-4.108515) | 3.518416 / 3.745712 (-0.227296) | 3.260919 / 5.269862 (-2.008943) | 2.041441 / 4.565676 (-2.524235) | 0.056302 / 0.424275 (-0.367973) | 0.007750 / 0.007607 (0.000143) | 0.489966 / 0.226044 (0.263922) | 4.915844 / 2.268929 (2.646916) | 2.617001 / 55.444624 (-52.827623) | 2.333557 / 6.876477 (-4.542920) | 2.484530 / 2.142072 (0.342458) | 0.572009 / 4.805227 (-4.233219) | 0.142557 / 6.500664 (-6.358107) | 0.066711 / 0.075469 (-0.008758) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.359929 / 1.841788 (-0.481859) | 20.332252 / 8.074308 (12.257943) | 14.585842 / 10.191392 (4.394450) | 0.170498 / 0.680424 (-0.509926) | 0.018450 / 0.534201 (-0.515751) | 0.395449 / 0.579283 (-0.183834) | 0.409666 / 0.434364 (-0.024698) | 0.467937 / 0.540337 (-0.072401) | 0.616078 / 1.386936 (-0.770858) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a888bc94dc6bce7815e3061a28e718097f4b8b9e \"CML watermark\")\n" ]
2023-07-27T14:22:18
2023-07-28T11:09:54
2023-07-28T11:01:04
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6081", "html_url": "https://github.com/huggingface/datasets/pull/6081", "diff_url": "https://github.com/huggingface/datasets/pull/6081.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6081.patch", "merged_at": "2023-07-28T11:01:04" }
Deprecate `Dataset.export`, which generates a TFRecord file from a dataset, as this method is undocumented and its usage seems low. Users should use [TFRecordWriter](https://www.tensorflow.org/api_docs/python/tf/io/TFRecordWriter#write) directly, or follow the official [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) tutorial (on which this method is based), to write TFRecord files instead (a minimal replacement sketch follows this record).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6081/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6081/timeline
null
null
true
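For anyone migrating off the deprecated `Dataset.export`, here is a minimal sketch of the replacement path the PR body points to, built directly on `tf.io.TFRecordWriter`. The toy columns (`text`, `label`) and the output filename `data.tfrecord` are assumptions for illustration only, not taken from the PR; adapt the feature packing to your own schema.

```python
import tensorflow as tf
from datasets import Dataset

# Toy dataset standing in for any `datasets.Dataset`; the column names
# "text" and "label" are illustrative, not taken from the PR.
ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})

def to_tf_example(row: dict) -> tf.train.Example:
    """Pack one dataset row into a tf.train.Example protobuf."""
    features = {
        "text": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[row["text"].encode("utf-8")])
        ),
        "label": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[row["label"]])
        ),
    }
    return tf.train.Example(features=tf.train.Features(feature=features))

# Replacement for the deprecated export call: serialize each row yourself
# and write it with tf.io.TFRecordWriter, as the linked tutorial does.
with tf.io.TFRecordWriter("data.tfrecord") as writer:
    for row in ds:
        writer.write(to_tf_example(row).SerializeToString())
```

The written file can then be read back with `tf.data.TFRecordDataset("data.tfrecord")` and parsed with `tf.io.parse_single_example`, matching the feature spec used above.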
https://api.github.com/repos/huggingface/datasets/issues/6080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6080/comments
https://api.github.com/repos/huggingface/datasets/issues/6080/events
https://github.com/huggingface/datasets/pull/6080
1,822,667,554
PR_kwDODunzps5WdL4K
6,080
Remove README link to deprecated Colab notebook
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006458 / 0.011353 (-0.004894) | 0.003895 / 0.011008 (-0.007114) | 0.084280 / 0.038508 (0.045772) | 0.071304 / 0.023109 (0.048195) | 0.313910 / 0.275898 (0.038012) | 0.344070 / 0.323480 (0.020590) | 0.005413 / 0.007986 (-0.002573) | 0.003308 / 0.004328 (-0.001021) | 0.064570 / 0.004250 (0.060320) | 0.056824 / 0.037052 (0.019771) | 0.321102 / 0.258489 (0.062613) | 0.355834 / 0.293841 (0.061993) | 0.031252 / 0.128546 (-0.097294) | 0.008427 / 0.075646 (-0.067219) | 0.287348 / 0.419271 (-0.131924) | 0.053261 / 0.043533 (0.009728) | 0.324892 / 0.255139 (0.069753) | 0.335847 / 0.283200 (0.052647) | 0.023453 / 0.141683 (-0.118230) | 1.485456 / 1.452155 (0.033301) | 1.531329 / 1.492716 (0.038612) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201924 / 0.018006 (0.183918) | 0.447188 / 0.000490 (0.446698) | 0.005543 / 0.000200 (0.005343) | 0.000086 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027586 / 0.037411 (-0.009825) | 0.082412 / 0.014526 (0.067886) | 0.094851 / 0.176557 (-0.081706) | 0.151331 / 0.737135 (-0.585804) | 0.094475 / 0.296338 (-0.201863) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399004 / 0.215209 (0.183795) | 3.974652 / 2.077655 (1.896997) | 
1.991909 / 1.504120 (0.487789) | 1.811684 / 1.541195 (0.270489) | 1.869774 / 1.468490 (0.401283) | 0.487745 / 4.584777 (-4.097032) | 3.558945 / 3.745712 (-0.186768) | 5.530468 / 5.269862 (0.260606) | 3.293147 / 4.565676 (-1.272529) | 0.057531 / 0.424275 (-0.366744) | 0.007212 / 0.007607 (-0.000395) | 0.470325 / 0.226044 (0.244281) | 4.701652 / 2.268929 (2.432723) | 2.453020 / 55.444624 (-52.991605) | 2.110152 / 6.876477 (-4.766325) | 2.314669 / 2.142072 (0.172597) | 0.615039 / 4.805227 (-4.190189) | 0.133229 / 6.500664 (-6.367435) | 0.060821 / 0.075469 (-0.014648) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296708 / 1.841788 (-0.545079) | 18.717251 / 8.074308 (10.642943) | 14.325305 / 10.191392 (4.133913) | 0.147680 / 0.680424 (-0.532744) | 0.018312 / 0.534201 (-0.515889) | 0.392766 / 0.579283 (-0.186517) | 0.403319 / 0.434364 (-0.031045) | 0.453696 / 0.540337 (-0.086641) | 0.622564 / 1.386936 (-0.764372) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006483 / 0.011353 (-0.004870) | 0.004018 / 0.011008 (-0.006991) | 0.064436 / 0.038508 (0.025928) | 0.072365 / 0.023109 (0.049256) | 0.387532 / 0.275898 (0.111634) | 0.418175 / 0.323480 (0.094695) | 0.005453 / 0.007986 (-0.002533) | 0.003368 / 0.004328 (-0.000961) | 0.064896 / 0.004250 (0.060645) | 0.057018 / 0.037052 (0.019966) | 0.406596 / 0.258489 (0.148107) | 0.431194 / 0.293841 (0.137353) | 0.031788 / 0.128546 (-0.096759) | 0.008532 / 0.075646 (-0.067114) | 0.070605 / 0.419271 (-0.348666) | 0.053317 / 0.043533 (0.009785) | 0.391930 / 0.255139 (0.136791) | 0.406071 / 0.283200 (0.122872) | 0.028652 / 0.141683 (-0.113030) | 1.487677 / 1.452155 (0.035522) | 1.546071 / 1.492716 (0.053355) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220063 / 0.018006 (0.202056) | 0.441111 / 0.000490 (0.440621) | 0.006066 / 0.000200 (0.005867) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035179 / 0.037411 (-0.002232) | 0.096745 / 0.014526 (0.082219) | 0.108171 / 0.176557 (-0.068386) | 0.164590 / 0.737135 (-0.572545) | 0.109425 / 0.296338 (-0.186913) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408101 / 0.215209 (0.192892) | 4.062961 / 2.077655 (1.985306) | 2.101849 / 1.504120 (0.597730) | 1.935919 / 1.541195 (0.394724) | 1.993749 / 1.468490 (0.525259) | 0.487788 / 4.584777 (-4.096989) | 3.533972 / 3.745712 (-0.211740) | 3.218448 / 5.269862 (-2.051414) | 2.002322 / 4.565676 (-2.563355) | 0.057371 / 0.424275 (-0.366904) | 0.007704 / 0.007607 (0.000097) | 0.491695 / 0.226044 (0.265650) | 4.905009 / 2.268929 (2.636080) | 2.597879 / 55.444624 (-52.846745) | 2.252086 / 6.876477 (-4.624391) | 2.434439 / 2.142072 (0.292367) | 0.583071 / 4.805227 (-4.222156) | 0.133765 / 6.500664 (-6.366899) | 0.061276 / 0.075469 (-0.014193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.403111 / 1.841788 (-0.438676) | 19.218886 / 8.074308 (11.144578) | 13.981775 / 10.191392 (3.790383) | 0.167784 / 0.680424 (-0.512640) | 0.018401 / 0.534201 (-0.515800) | 0.392038 / 0.579283 (-0.187245) | 0.414776 / 0.434364 (-0.019587) | 0.476221 / 0.540337 (-0.064117) | 0.632724 / 1.386936 (-0.754212) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#420dbd92c42840d6c91ecf5d3560c6799ee0cca1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007595 / 0.011353 (-0.003758) | 0.004540 / 0.011008 (-0.006468) | 0.099350 / 0.038508 (0.060842) | 0.087062 / 0.023109 (0.063953) | 0.415980 / 0.275898 (0.140082) | 0.466390 / 0.323480 (0.142910) | 0.005958 / 0.007986 (-0.002027) | 0.003671 / 0.004328 (-0.000657) | 0.075714 / 0.004250 (0.071463) | 0.066062 / 0.037052 (0.029010) | 0.426527 / 0.258489 (0.168038) | 0.473282 / 0.293841 (0.179441) | 0.035669 / 0.128546 (-0.092878) | 0.009729 / 0.075646 (-0.065918) | 0.344035 / 0.419271 (-0.075237) | 0.061153 / 0.043533 (0.017620) | 0.428607 / 0.255139 (0.173468) | 0.445951 / 0.283200 (0.162752) | 0.026373 / 0.141683 (-0.115310) | 1.788725 / 1.452155 (0.336570) | 1.871055 / 1.492716 (0.378339) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230606 / 0.018006 (0.212600) | 0.489835 / 0.000490 (0.489345) | 0.005669 / 0.000200 (0.005469) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032197 / 0.037411 (-0.005214) | 0.099571 / 0.014526 (0.085045) | 0.112686 / 0.176557 (-0.063871) | 0.179478 / 0.737135 (-0.557658) | 0.112670 / 0.296338 (-0.183668) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449606 / 0.215209 (0.234397) | 4.503356 / 2.077655 (2.425701) | 2.190480 / 1.504120 (0.686361) | 1.986054 / 1.541195 (0.444860) | 2.071594 / 1.468490 (0.603104) | 0.566301 / 4.584777 (-4.018475) | 4.088460 / 3.745712 (0.342748) | 4.840100 / 5.269862 (-0.429761) | 2.857697 / 4.565676 (-1.707980) | 0.066718 / 0.424275 (-0.357557) | 0.008642 / 0.007607 (0.001034) | 0.539785 / 0.226044 (0.313740) | 5.383252 / 2.268929 (3.114323) | 2.878177 / 55.444624 (-52.566447) | 2.374577 / 6.876477 (-4.501899) | 2.590500 / 2.142072 (0.448428) | 0.675196 / 4.805227 (-4.130031) | 0.153544 / 6.500664 (-6.347120) | 0.070958 / 0.075469 (-0.004511) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.490403 / 1.841788 (-0.351385) | 22.085740 / 8.074308 (14.011432) | 16.588093 / 10.191392 (6.396701) | 0.188598 / 0.680424 (-0.491826) | 0.021567 / 0.534201 (-0.512634) | 0.472594 / 0.579283 (-0.106689) | 0.472903 / 0.434364 (0.038539) | 0.545305 / 0.540337 
(0.004968) | 0.736399 / 1.386936 (-0.650537) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007635 / 0.011353 (-0.003718) | 0.004731 / 0.011008 (-0.006277) | 0.076482 / 0.038508 (0.037974) | 0.083666 / 0.023109 (0.060557) | 0.469596 / 0.275898 (0.193698) | 0.493068 / 0.323480 (0.169588) | 0.006014 / 0.007986 (-0.001971) | 0.003902 / 0.004328 (-0.000426) | 0.077142 / 0.004250 (0.072891) | 0.064355 / 0.037052 (0.027303) | 0.468859 / 0.258489 (0.210370) | 0.504002 / 0.293841 (0.210161) | 0.037606 / 0.128546 (-0.090940) | 0.010141 / 0.075646 (-0.065505) | 0.083790 / 0.419271 (-0.335482) | 0.060923 / 0.043533 (0.017390) | 0.464752 / 0.255139 (0.209613) | 0.500464 / 0.283200 (0.217264) | 0.031183 / 0.141683 (-0.110499) | 1.779294 / 1.452155 (0.327139) | 1.870848 / 1.492716 (0.378131) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246567 / 0.018006 (0.228560) | 0.477182 / 0.000490 (0.476693) | 0.000426 / 0.000200 (0.000226) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035377 / 0.037411 (-0.002034) | 0.106042 / 0.014526 (0.091516) | 0.119237 / 0.176557 (-0.057320) | 0.182145 / 0.737135 (-0.554991) | 0.119537 / 0.296338 (-0.176801) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.491352 / 0.215209 (0.276143) | 4.824220 / 2.077655 (2.746565) | 2.652039 / 1.504120 (1.147919) | 2.535310 / 1.541195 (0.994116) | 2.620009 / 
1.468490 (1.151519) | 0.567865 / 4.584777 (-4.016912) | 4.158795 / 3.745712 (0.413082) | 6.042582 / 5.269862 (0.772721) | 3.957193 / 4.565676 (-0.608484) | 0.066647 / 0.424275 (-0.357628) | 0.008893 / 0.007607 (0.001285) | 0.570137 / 0.226044 (0.344093) | 5.687126 / 2.268929 (3.418198) | 3.137605 / 55.444624 (-52.307019) | 2.655979 / 6.876477 (-4.220498) | 2.893338 / 2.142072 (0.751265) | 0.698388 / 4.805227 (-4.106840) | 0.154897 / 6.500664 (-6.345767) | 0.071208 / 0.075469 (-0.004261) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.619346 / 1.841788 (-0.222441) | 22.782510 / 8.074308 (14.708202) | 16.317395 / 10.191392 (6.126003) | 0.197630 / 0.680424 (-0.482794) | 0.021795 / 0.534201 (-0.512406) | 0.466982 / 0.579283 (-0.112302) | 0.468609 / 0.434364 (0.034245) | 0.574380 / 0.540337 (0.034043) | 0.759827 / 1.386936 (-0.627109) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8c1c5d8268ae59a0dcaea47da825e87c3f9528b4 \"CML watermark\")\n" ]
2023-07-26T15:27:49
2023-07-26T16:24:43
2023-07-26T16:14:34
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6080", "html_url": "https://github.com/huggingface/datasets/pull/6080", "diff_url": "https://github.com/huggingface/datasets/pull/6080.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6080.patch", "merged_at": "2023-07-26T16:14:34" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6080/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6079
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6079/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6079/comments
https://api.github.com/repos/huggingface/datasets/issues/6079/events
https://github.com/huggingface/datasets/issues/6079
1,822,597,471
I_kwDODunzps5soqFf
6,079
Iterating over DataLoader based on HF datasets is stuck forever
{ "login": "arindamsarkar93", "id": 5454868, "node_id": "MDQ6VXNlcjU0NTQ4Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/5454868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arindamsarkar93", "html_url": "https://github.com/arindamsarkar93", "followers_url": "https://api.github.com/users/arindamsarkar93/followers", "following_url": "https://api.github.com/users/arindamsarkar93/following{/other_user}", "gists_url": "https://api.github.com/users/arindamsarkar93/gists{/gist_id}", "starred_url": "https://api.github.com/users/arindamsarkar93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arindamsarkar93/subscriptions", "organizations_url": "https://api.github.com/users/arindamsarkar93/orgs", "repos_url": "https://api.github.com/users/arindamsarkar93/repos", "events_url": "https://api.github.com/users/arindamsarkar93/events{/privacy}", "received_events_url": "https://api.github.com/users/arindamsarkar93/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "When the process starts to hang, can you interrupt it with CTRL + C and paste the error stack trace here? ", "Thanks @mariosasko for your prompt response, here's the stack trace:\r\n\r\n```\r\nKeyboardInterrupt Traceback (most recent call last)\r\nCell In[12], line 4\r\n 2 t = time.time()\r\n 3 iter_ = 0\r\n----> 4 for batch in train_dataloader:\r\n 5 #batch_proc = streaming_obj.collect_streaming_data_batch(batch)\r\n 6 iter_ += 1\r\n 8 if iter_ == 1:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:634, in _BaseDataLoaderIter.__next__(self)\r\n 631 if self._sampler_iter is None:\r\n 632 # TODO(https://github.com/pytorch/pytorch/issues/76750)\r\n 633 self._reset() # type: ignore[call-arg]\r\n--> 634 data = self._next_data()\r\n 635 self._num_yielded += 1\r\n 636 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n 637 self._IterableDataset_len_called is not None and \\\r\n 638 self._num_yielded > self._IterableDataset_len_called:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:678, in _SingleProcessDataLoaderIter._next_data(self)\r\n 676 def _next_data(self):\r\n 677 index = self._next_index() # may raise StopIteration\r\n--> 678 data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n 679 if self._pin_memory:\r\n 680 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:32, in _IterableDatasetFetcher.fetch(self, possibly_batched_index)\r\n 30 for _ in possibly_batched_index:\r\n 31 try:\r\n---> 32 data.append(next(self.dataset_iter))\r\n 33 except StopIteration:\r\n 34 self.ended = True\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/iterable_dataset.py:1353, in IterableDataset.__iter__(self)\r\n 1350 yield formatter.format_row(pa_table)\r\n 1351 return\r\n-> 1353 for key, example in ex_iterable:\r\n 1354 if self.features:\r\n 1355 # `IterableDataset` automatically fills missing columns with None.\r\n 1356 # This is done with `_apply_feature_types_on_example`.\r\n 1357 example = _apply_feature_types_on_example(\r\n 1358 example, self.features, token_per_repo_id=self._token_per_repo_id\r\n 1359 )\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/iterable_dataset.py:956, in BufferShuffledExamplesIterable.__iter__(self)\r\n 954 # this is the shuffle buffer that we keep in memory\r\n 955 mem_buffer = []\r\n--> 956 for x in self.ex_iterable:\r\n 957 if len(mem_buffer) == buffer_size: # if the buffer is full, pick and example from it\r\n 958 i = next(indices_iterator)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/iterable_dataset.py:296, in ShuffledDataSourcesArrowExamplesIterable.__iter__(self)\r\n 294 for key, pa_table in self.generate_tables_fn(**kwargs_with_shuffled_shards):\r\n 295 for pa_subtable in pa_table.to_reader(max_chunksize=config.ARROW_READER_BATCH_SIZE_IN_DATASET_ITER):\r\n--> 296 formatted_batch = formatter.format_batch(pa_subtable)\r\n 297 for example in _batch_to_examples(formatted_batch):\r\n 298 yield key, example\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/formatting.py:448, in PythonFormatter.format_batch(self, pa_table)\r\n 446 if self.lazy:\r\n 447 return LazyBatch(pa_table, self)\r\n--> 448 batch = self.python_arrow_extractor().extract_batch(pa_table)\r\n 449 batch = 
self.python_features_decoder.decode_batch(batch)\r\n 450 return batch\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/formatting.py:150, in PythonArrowExtractor.extract_batch(self, pa_table)\r\n 149 def extract_batch(self, pa_table: pa.Table) -> dict:\r\n--> 150 return pa_table.to_pydict()\r\n\r\nKeyboardInterrupt: \r\n```\r\n", "Update: If i let it run, it eventually fails with:\r\n\r\n```\r\nRuntimeError Traceback (most recent call last)\r\nCell In[16], line 4\r\n 2 t = time.time()\r\n 3 iter_ = 0\r\n----> 4 for batch in train_dataloader:\r\n 5 #batch_proc = streaming_obj.collect_streaming_data_batch(batch)\r\n 6 iter_ += 1\r\n 8 if iter_ == 1:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:634, in _BaseDataLoaderIter.__next__(self)\r\n 631 if self._sampler_iter is None:\r\n 632 # TODO(https://github.com/pytorch/pytorch/issues/76750)\r\n 633 self._reset() # type: ignore[call-arg]\r\n--> 634 data = self._next_data()\r\n 635 self._num_yielded += 1\r\n 636 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n 637 self._IterableDataset_len_called is not None and \\\r\n 638 self._num_yielded > self._IterableDataset_len_called:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/dataloader.py:678, in _SingleProcessDataLoaderIter._next_data(self)\r\n 676 def _next_data(self):\r\n 677 index = self._next_index() # may raise StopIteration\r\n--> 678 data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n 679 if self._pin_memory:\r\n 680 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:32, in _IterableDatasetFetcher.fetch(self, possibly_batched_index)\r\n 30 for _ in possibly_batched_index:\r\n 31 try:\r\n---> 32 data.append(next(self.dataset_iter))\r\n 33 except StopIteration:\r\n 34 self.ended = True\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/iterable_dataset.py:1360, in IterableDataset.__iter__(self)\r\n 1354 if self.features:\r\n 1355 # `IterableDataset` automatically fills missing columns with None.\r\n 1356 # This is done with `_apply_feature_types_on_example`.\r\n 1357 example = _apply_feature_types_on_example(\r\n 1358 example, self.features, token_per_repo_id=self._token_per_repo_id\r\n 1359 )\r\n-> 1360 yield format_dict(example) if format_dict else example\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:85, in TorchFormatter.recursive_tensorize(self, data_struct)\r\n 84 def recursive_tensorize(self, data_struct: dict):\r\n---> 85 return map_nested(self._recursive_tensorize, data_struct, map_list=False)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/utils/py_utils.py:463, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)\r\n 461 num_proc = 1\r\n 462 if num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:\r\n--> 463 mapped = [\r\n 464 _single_map_nested((function, obj, types, None, True, None))\r\n 465 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 466 ]\r\n 467 else:\r\n 468 mapped = parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, _single_map_nested)\r\n\r\nFile 
~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/utils/py_utils.py:464, in <listcomp>(.0)\r\n 461 num_proc = 1\r\n 462 if num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:\r\n 463 mapped = [\r\n--> 464 _single_map_nested((function, obj, types, None, True, None))\r\n 465 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 466 ]\r\n 467 else:\r\n 468 mapped = parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, _single_map_nested)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/utils/py_utils.py:366, in _single_map_nested(args)\r\n 364 # Singleton first to spare some computation\r\n 365 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 366 return function(data_struct)\r\n 368 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n 369 if rank is not None and logging.get_verbosity() < logging.WARNING:\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:82, in TorchFormatter._recursive_tensorize(self, data_struct)\r\n 80 elif isinstance(data_struct, (list, tuple)):\r\n 81 return self._consolidate([self.recursive_tensorize(substruct) for substruct in data_struct])\r\n---> 82 return self._tensorize(data_struct)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:68, in TorchFormatter._tensorize(self, value)\r\n 66 if isinstance(value, PIL.Image.Image):\r\n 67 value = np.asarray(value)\r\n---> 68 return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})\r\n\r\nRuntimeError: Could not infer dtype of decimal.Decimal\r\n```", "PyTorch tensors cannot store `Decimal` objects. Casting the column with decimals to `float` should fix the issue.", "I already have cast in collate_fn, in which I perform .astype(float) for each numerical field.\r\nOn the same instance, I installed a conda env with python 3.6, and this works well.\r\n\r\nSample:\r\n\r\n```\r\ndef streaming_data_collate_fn(batch):\r\n df = pd.DataFrame.from_dict(batch)\r\n feat_vals = torch.FloatTensor(np.nan_to_num(np.array(df[feats].astype(float))))\r\n\r\n```", "`collate_fn` is applied after the `torch` formatting step, so I think the only option when working with an `IterableDataset` is to remove the `with_format` call and perform the conversion from Python values to PyTorch tensors in `collate_fn`. The standard `Dataset` supports `with_format(\"numpy\")`, which should make this conversion faster.", "Thanks! \r\nPython 3.10 conda-env: After replacing with_format(\"torch\") with with_format(\"numpy\"), the error went away. However, it was still taking over 2 minutes to load a very small batch of 64 samples with num_workers set to 32. Once I removed with_format call altogether, it is finishing in 11 seconds.\r\n\r\nPython 3.6 based conda-env: When I switch the kernel , neither of the above work, and with_format(\"torch\") is the only thing that works, and executes in 1.6 seconds.\r\n\r\nI feel something else is also amiss here.", "Can you share the `datasets` and `torch` versions installed in these conda envs?\r\n\r\n> Once I removed with_format call altogether, it is finishing in 11 seconds.\r\n\r\nHmm, that's surprising. 
What are your dataset's `.features`?", "Python 3.6: \r\ndatasets.__version__ 2.4.0\r\ntorch.__version__ 1.10.1+cu102\r\n\r\nPython 3.10:\r\ndatasets.__version__ 2.14.0\r\ntorch.__version__ 2.0.0\r\n\r\nAnonymized features are of the form (subset shown here):\r\n{\r\n'string_feature_i': Value(dtype='string', id=None),\r\n'numerical_feature_i': Value(dtype='decimal128(38, 0)', id=None),\r\n'numerical_feature_series_i': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None),\r\n}\r\n\r\n\r\nThere is no output from .features in python 3.6 kernel BTW.", "One more thing, in python 3.10 based kernel, interestingly increasing num_workers seem to be increasing the runtime of iterating I was trying out. In python 3.10 kernel execution, I do not even see multiple CPU cores spiking unlike in 3.6.\r\n\r\n512 batch size on 32 workers executes in 2.4 seconds on python 3.6 kernel, while it takes ~118 seconds on 3.10!", "**Update**: It seems the latency part is more of a multiprocessing issue with torch and some host specific issue, and I had to scourge through relevant pytorch issues, when I stumbled across these threads:\r\n1. https://github.com/pytorch/pytorch/issues/102494\r\n2. https://github.com/pytorch/pytorch/issues/102269\r\n3. https://github.com/pytorch/pytorch/issues/99625\r\n\r\nOut of the suggested solutions, the one that worked in my case was:\r\n```\r\nos.environ['KMP_AFFINITY'] = \"disabled\"\r\n```\r\nIt is working for now, though I have no clue why, just I hope it does not get stuck when I do actual model training, will update by tomorrow.\r\n\r\n\r\n", "I'm facing a similar situation in the local VS Code. \r\n\r\nDatasets version 2.14.4\r\nTorch 2.0.1+cu118\r\n\r\nSame code runs without issues in Colab\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"Supermaxman/esa-hubble\", streaming=True)\r\nsample = next(iter(dataset[\"train\"]))\r\n```\r\n\r\nis stuck for minutes. 
If I interrupt, I get\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nKeyboardInterrupt Traceback (most recent call last)\r\nCell In[5], line 5\r\n 1 from datasets import load_dataset\r\n 3 dataset = load_dataset(\"Supermaxman/esa-hubble\", streaming=True)\r\n----> 5 sample = next(iter(dataset[\"train\"]))\r\n 6 print(sample[\"text\"])\r\n 7 sample[\"image\"]\r\n\r\nFile [~/miniconda3/envs/book/lib/python3.10/site-packages/datasets/iterable_dataset.py:1353](https://file+.vscode-resource.vscode-cdn.net/home/osanseviero/Desktop/workspace/genai/nbs/~/miniconda3/envs/book/lib/python3.10/site-packages/datasets/iterable_dataset.py:1353), in IterableDataset.__iter__(self)\r\n 1350 yield formatter.format_row(pa_table)\r\n 1351 return\r\n-> 1353 for key, example in ex_iterable:\r\n 1354 if self.features:\r\n 1355 # `IterableDataset` automatically fills missing columns with None.\r\n 1356 # This is done with `_apply_feature_types_on_example`.\r\n 1357 example = _apply_feature_types_on_example(\r\n 1358 example, self.features, token_per_repo_id=self._token_per_repo_id\r\n 1359 )\r\n\r\nFile [~/miniconda3/envs/book/lib/python3.10/site-packages/datasets/iterable_dataset.py:255](https://file+.vscode-resource.vscode-cdn.net/home/osanseviero/Desktop/workspace/genai/nbs/~/miniconda3/envs/book/lib/python3.10/site-packages/datasets/iterable_dataset.py:255), in ArrowExamplesIterable.__iter__(self)\r\n 253 def __iter__(self):\r\n 254 formatter = PythonFormatter()\r\n--> 255 for key, pa_table in self.generate_tables_fn(**self.kwargs):\r\n 256 for pa_subtable in pa_table.to_reader(max_chunksize=config.ARROW_READER_BATCH_SIZE_IN_DATASET_ITER):\r\n...\r\n-> 1130 return self._sslobj.read(len, buffer)\r\n 1131 else:\r\n 1132 return self._sslobj.read(len)\r\n```", "@osanseviero I assume the `self._sslobj.read(len, buffer)` line comes from the built-in `ssl` module, so this probably has something to do with your network. Please open a new issue with the full stack trace in case you haven't resolved this yet." ]
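The latency half of this thread ended up being a PyTorch/OpenMP thread-affinity problem on the host rather than a `datasets` issue. A minimal sketch of the workaround reported above, assuming the variable has to be set before `torch` is imported:

```python
import os

# Workaround reported in the linked pytorch issues (102494, 102269, 99625):
# disabling KMP thread affinity keeps DataLoader workers from being pinned
# to the wrong cores and stalling on this host.
os.environ["KMP_AFFINITY"] = "disabled"

import torch  # noqa: E402  -- imported only after the env var is set
from torch.utils.data import DataLoader  # noqa: E402
```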
2023-07-26T14:52:37
2023-09-19T21:52:22
2023-07-30T14:09:06
NONE
null
null
null
### Describe the bug I am using an Amazon SageMaker notebook (Amazon Linux 2) with a Python 3.10-based conda environment. I have a dataset in Parquet format locally. When I try to iterate over it, the loader is stuck forever. Note that the same code works seamlessly in a Python 3.6-based conda environment. What should my next steps be here? ### Steps to reproduce the bug ``` train_dataset = load_dataset( "parquet", data_files = {'train': tr_data_path + '*.parquet'}, split = 'train', collate_fn = streaming_data_collate_fn, streaming = True ).with_format('torch') train_dataloader = DataLoader(train_dataset, batch_size = 2, num_workers = 0) t = time.time() iter_ = 0 for batch in train_dataloader: iter_ += 1 if iter_ == 1000: break print (time.time() - t) ``` ### Expected behavior The snippet should work normally and load the next batch of data. ### Environment info datasets: '2.14.0' pyarrow: '12.0.0' torch: '2.0.0' Python: 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0] !uname -r 5.10.178-162.673.amzn2.x86_64
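The functional half of the report reduces to two facts established in the thread: `torch.tensor` cannot infer a dtype for `decimal.Decimal`, and the `with_format("torch")` step runs before `collate_fn`, so casting inside the collator comes too late. A minimal sketch of the suggested fix — drop `with_format` and build the tensors inside `collate_fn` instead — with a hypothetical column list:

```python
import numpy as np
import pandas as pd
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader

feats = ["feature_a", "feature_b"]  # hypothetical decimal128-typed columns

def streaming_data_collate_fn(batch):
    # batch is a list of example dicts; cast the Decimal columns to float
    # *before* building tensors, since torch cannot tensorize decimal.Decimal.
    df = pd.DataFrame(batch)
    return torch.from_numpy(np.nan_to_num(df[feats].astype(float).to_numpy()))

train_dataset = load_dataset(
    "parquet",
    data_files={"train": "data/*.parquet"},
    split="train",
    streaming=True,
)  # note: no .with_format("torch")

train_dataloader = DataLoader(
    train_dataset, batch_size=64, collate_fn=streaming_data_collate_fn
)
```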
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6079/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6079/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6078
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6078/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6078/comments
https://api.github.com/repos/huggingface/datasets/issues/6078/events
https://github.com/huggingface/datasets/issues/6078
1,822,501,472
I_kwDODunzps5soSpg
6,078
resume_download with streaming=True
{ "login": "NicolasMICAUX", "id": 72763959, "node_id": "MDQ6VXNlcjcyNzYzOTU5", "avatar_url": "https://avatars.githubusercontent.com/u/72763959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NicolasMICAUX", "html_url": "https://github.com/NicolasMICAUX", "followers_url": "https://api.github.com/users/NicolasMICAUX/followers", "following_url": "https://api.github.com/users/NicolasMICAUX/following{/other_user}", "gists_url": "https://api.github.com/users/NicolasMICAUX/gists{/gist_id}", "starred_url": "https://api.github.com/users/NicolasMICAUX/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NicolasMICAUX/subscriptions", "organizations_url": "https://api.github.com/users/NicolasMICAUX/orgs", "repos_url": "https://api.github.com/users/NicolasMICAUX/repos", "events_url": "https://api.github.com/users/NicolasMICAUX/events{/privacy}", "received_events_url": "https://api.github.com/users/NicolasMICAUX/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Currently, it's not possible to efficiently resume streaming after an error. Eventually, we plan to support this for Parquet (see https://github.com/huggingface/datasets/issues/5380). ", "Ok thank you for your answer", "I'm closing this as a duplicate of #5380" ]
2023-07-26T14:08:22
2023-07-28T11:05:03
2023-07-28T11:05:03
NONE
null
null
null
### Describe the bug I used: ``` dataset = load_dataset( "oscar-corpus/OSCAR-2201", token=True, language="fr", streaming=True, split="train" ) ``` Unfortunately, the server had a problem during the training process. I saved the step my training stopped at. But how can I resume download from step 1_000_000 without re-streaming the first 1 million docs of the dataset? `download_config=DownloadConfig(resume_download=True)` seems not to work with streaming=True. ### Steps to reproduce the bug ``` from datasets import load_dataset, DownloadConfig dataset = load_dataset( "oscar-corpus/OSCAR-2201", token=True, language="fr", streaming=True, # optional split="train", download_config=DownloadConfig(resume_download=True) ) # interrupt the run and try to relaunch it => this restarts from scratch ``` ### Expected behavior I would expect a parameter to start streaming from a given index in the dataset. ### Environment info - `datasets` version: 2.14.0 - Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.1 - Pandas version: 2.0.0
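There is no true checkpoint/resume for streaming at this `datasets` version (the thread defers that to #5380), but a coarse restart can be approximated with `IterableDataset.skip`, which fast-forwards past already-seen examples. Note this is not a seek: the skipped examples are still streamed and discarded, so it saves manual bookkeeping, not bandwidth. A sketch, assuming the interrupted step count was persisted somewhere:

```python
from datasets import load_dataset

resume_step = 1_000_000  # assumed: saved when the previous run died

dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,
    split="train",
)

# skip() re-streams and drops the first `resume_step` examples,
# then yields from there on.
dataset = dataset.skip(resume_step)
```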
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6078/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6078/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6077
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6077/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6077/comments
https://api.github.com/repos/huggingface/datasets/issues/6077/events
https://github.com/huggingface/datasets/issues/6077
1,822,486,810
I_kwDODunzps5soPEa
6,077
Mapping gets stuck at 99%
{ "login": "Laurent2916", "id": 21087104, "node_id": "MDQ6VXNlcjIxMDg3MTA0", "avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Laurent2916", "html_url": "https://github.com/Laurent2916", "followers_url": "https://api.github.com/users/Laurent2916/followers", "following_url": "https://api.github.com/users/Laurent2916/following{/other_user}", "gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}", "starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions", "organizations_url": "https://api.github.com/users/Laurent2916/orgs", "repos_url": "https://api.github.com/users/Laurent2916/repos", "events_url": "https://api.github.com/users/Laurent2916/events{/privacy}", "received_events_url": "https://api.github.com/users/Laurent2916/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The `MAX_MAP_BATCH_SIZE = 1_000_000_000` hack is bad as it loads the entire dataset into RAM when performing `.map`. Instead, it's best to use `.iter(batch_size)` to iterate over the data batches and compute `mean` for each column. (`stddev` can be computed in another pass).\r\n\r\nAlso, these arrays are big, so it makes sense to reduce `batch_size`/`writer_batch_size` to avoid RAM issues and slow IO.", "Hi @mariosasko !\r\n\r\nI agree, it's an ugly hack, but it was convenient since the resulting `mean_std` could be cached by the library. For my large dataset (which doesn't fit in RAM), I'm actually using something similar to what you suggested. I got rid of the first mapping in the above scripts and replaced it with an iterator, but the issue with the second mapping still persists.", "Have you tried to reduce `batch_size`/`writer_batch_size` in the 2nd `.map`? Also, can you interrupt the process when it gets stuck and share the error stack trace?", "I think `batch_size/writer_batch_size` is already at its lowest in the 2nd `.map` since `batched=False` implies `batch_size=1` and `len(ds) = 1000 = writer_batch_size`.\r\n\r\nHere is also a bunch of stack traces when I interrupted the process:\r\n\r\n<details>\r\n <summary>stack trace 1</summary>\r\n\r\n```python\r\n(pyg)[d623204@rosetta-bigviz01 stage-laurent-f]$ python src/random_scripts/uses_random_data.py \r\nFound cached dataset random_data (/local_scratch/lfainsin/.cache/huggingface/datasets/random_data/default/0.0.0/444e214e1d0e6298cfd3f2368323ec37073dc1439f618e19395b1f421c69b066)\r\nApplying mean/std: 97%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 967/1000 [00:01<00:00, 534.87 examples/s]Traceback (most recent call last): \r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 179, in __arrow_array__\r\n storage = to_pyarrow_listarray(data, pa_type)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 1466, in to_pyarrow_listarray\r\n return pa.array(data, pa_type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 320, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 123, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Could not convert tensor([[-1.0273, -0.8037, -0.6860],\r\n [-0.5034, -1.2685, -0.0558],\r\n [-1.0908, -1.1820, -0.3178],\r\n ...,\r\n [-0.8171, 0.1781, -0.5903],\r\n [ 0.4370, 1.9305, 0.5899],\r\n [-0.1426, 0.9053, -1.7559]]) with type Tensor: was not a sequence or recognized null for conversion to list type\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3449, in _map_single\r\n writer.write(example)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 490, in write\r\n self.write_examples_on_file()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 448, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File 
\"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 553, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow/array.pxi\", line 236, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 223, in __arrow_array__\r\n return pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 446, in cast_to_python_objects\r\n return _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 407, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 408, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 320, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 263, in _cast_to_python_objects\r\n def _cast_to_python_objects(obj: Any, only_1d_for_numpy: bool, optimize_list_casting: bool) -> Tuple[Any, bool]:\r\nKeyboardInterrupt\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 179, in __arrow_array__\r\n storage = to_pyarrow_listarray(data, pa_type)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 1466, in to_pyarrow_listarray\r\n return pa.array(data, pa_type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 320, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 123, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Could not convert tensor([[-1.0273, -0.8037, -0.6860],\r\n [-0.5034, -1.2685, -0.0558],\r\n [-1.0908, -1.1820, -0.3178],\r\n ...,\r\n [-0.8171, 0.1781, -0.5903],\r\n [ 0.4370, 1.9305, 0.5899],\r\n [-0.1426, 0.9053, -1.7559]]) with type Tensor: was not a sequence or recognized null for conversion to list type\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/gpfs_new/data/users/lfainsin/stage-laurent-f/src/random_scripts/uses_random_data.py\", line 62, in <module>\r\n ds_normalized = ds.map(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 580, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 545, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File 
\"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3087, in map\r\n for rank, done, content in Dataset._map_single(**dataset_kwargs):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3492, in _map_single\r\n writer.finalize()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 584, in finalize\r\n self.write_examples_on_file()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 448, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 553, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow/array.pxi\", line 236, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 223, in __arrow_array__\r\n return pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 446, in cast_to_python_objects\r\n return _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 407, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 408, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in <listcomp>\r\n [\r\nKeyboardInterrupt\r\n```\r\n\r\n</details>\r\n\r\n<details>\r\n <summary>stack trace 2</summary>\r\n\r\n```python\r\n(pyg)[d623204@rosetta-bigviz01 stage-laurent-f]$ python src/random_scripts/uses_random_data.py \r\nFound cached dataset random_data (/local_scratch/lfainsin/.cache/huggingface/datasets/random_data/default/0.0.0/444e214e1d0e6298cfd3f2368323ec37073dc1439f618e19395b1f421c69b066)\r\nApplying mean/std: 99%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 988/1000 [00:20<00:00, 526.19 examples/s]Applying mean/std: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▊| 999/1000 [00:21<00:00, 9.66 examples/s]Traceback (most recent call last): \r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 179, in __arrow_array__\r\n storage = to_pyarrow_listarray(data, pa_type)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 1466, in to_pyarrow_listarray\r\n return pa.array(data, pa_type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 320, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", 
line 123, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Could not convert tensor([[-1.0273, -0.8037, -0.6860],\r\n [-0.5034, -1.2685, -0.0558],\r\n [-1.0908, -1.1820, -0.3178],\r\n ...,\r\n [-0.8171, 0.1781, -0.5903],\r\n [ 0.4370, 1.9305, 0.5899],\r\n [-0.1426, 0.9053, -1.7559]]) with type Tensor: was not a sequence or recognized null for conversion to list type\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3449, in _map_single\r\n writer.write(example)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 490, in write\r\n self.write_examples_on_file()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 448, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 553, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow/array.pxi\", line 236, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 223, in __arrow_array__\r\n return pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 446, in cast_to_python_objects\r\n return _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 407, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 408, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 320, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 263, in _cast_to_python_objects\r\n def _cast_to_python_objects(obj: Any, only_1d_for_numpy: bool, optimize_list_casting: bool) -> Tuple[Any, bool]:\r\nKeyboardInterrupt\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 179, in __arrow_array__\r\n storage = to_pyarrow_listarray(data, pa_type)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 1466, in to_pyarrow_listarray\r\n return pa.array(data, pa_type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 320, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 123, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Could not convert tensor([[-1.0273, -0.8037, -0.6860],\r\n 
[-0.5034, -1.2685, -0.0558],\r\n [-1.0908, -1.1820, -0.3178],\r\n ...,\r\n [-0.8171, 0.1781, -0.5903],\r\n [ 0.4370, 1.9305, 0.5899],\r\n [-0.1426, 0.9053, -1.7559]]) with type Tensor: was not a sequence or recognized null for conversion to list type\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/gpfs_new/data/users/lfainsin/stage-laurent-f/src/random_scripts/uses_random_data.py\", line 62, in <module>\r\n ds_normalized = ds.map(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 580, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 545, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3087, in map\r\n for rank, done, content in Dataset._map_single(**dataset_kwargs):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3492, in _map_single\r\n writer.finalize()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 584, in finalize\r\n self.write_examples_on_file()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 448, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 553, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow/array.pxi\", line 236, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 223, in __arrow_array__\r\n return pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 446, in cast_to_python_objects\r\n return _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 407, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 408, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 320, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 291, in _cast_to_python_objects\r\n if config.JAX_AVAILABLE and \"jax\" in sys.modules:\r\nKeyboardInterrupt\r\n```\r\n\r\n</details>\r\n\r\n<details>\r\n <summary>stack trace 3</summary>\r\n\r\n```python\r\n(pyg)[d623204@rosetta-bigviz01 stage-laurent-f]$ python src/random_scripts/uses_random_data.py \r\nFound cached dataset random_data 
(/local_scratch/lfainsin/.cache/huggingface/datasets/random_data/default/0.0.0/444e214e1d0e6298cfd3f2368323ec37073dc1439f618e19395b1f421c69b066)\r\nApplying mean/std: 99%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▎ | 989/1000 [00:01<00:00, 504.80 examples/s]Traceback (most recent call last): \r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 179, in __arrow_array__\r\n storage = to_pyarrow_listarray(data, pa_type)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 1466, in to_pyarrow_listarray\r\n return pa.array(data, pa_type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 320, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 123, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Could not convert tensor([[-1.0273, -0.8037, -0.6860],\r\n [-0.5034, -1.2685, -0.0558],\r\n [-1.0908, -1.1820, -0.3178],\r\n ...,\r\n [-0.8171, 0.1781, -0.5903],\r\n [ 0.4370, 1.9305, 0.5899],\r\n [-0.1426, 0.9053, -1.7559]]) with type Tensor: was not a sequence or recognized null for conversion to list type\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3449, in _map_single\r\n writer.write(example)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 490, in write\r\n self.write_examples_on_file()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 448, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 553, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow/array.pxi\", line 236, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 223, in __arrow_array__\r\n return pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 446, in cast_to_python_objects\r\n return _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 407, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 408, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 320, in <listcomp>\r\n _cast_to_python_objects(\r\nKeyboardInterrupt\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call 
last):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 179, in __arrow_array__\r\n storage = to_pyarrow_listarray(data, pa_type)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 1466, in to_pyarrow_listarray\r\n return pa.array(data, pa_type.storage_dtype)\r\n File \"pyarrow/array.pxi\", line 320, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 123, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Could not convert tensor([[-1.0273, -0.8037, -0.6860],\r\n [-0.5034, -1.2685, -0.0558],\r\n [-1.0908, -1.1820, -0.3178],\r\n ...,\r\n [-0.8171, 0.1781, -0.5903],\r\n [ 0.4370, 1.9305, 0.5899],\r\n [-0.1426, 0.9053, -1.7559]]) with type Tensor: was not a sequence or recognized null for conversion to list type\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/gpfs_new/data/users/lfainsin/stage-laurent-f/src/random_scripts/uses_random_data.py\", line 62, in <module>\r\n ds_normalized = ds.map(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 580, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 545, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3087, in map\r\n for rank, done, content in Dataset._map_single(**dataset_kwargs):\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 3492, in _map_single\r\n writer.finalize()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 584, in finalize\r\n self.write_examples_on_file()\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 448, in write_examples_on_file\r\n self.write_batch(batch_examples=batch_examples)\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 553, in write_batch\r\n arrays.append(pa.array(typed_sequence))\r\n File \"pyarrow/array.pxi\", line 236, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/arrow_writer.py\", line 223, in __arrow_array__\r\n return pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 446, in cast_to_python_objects\r\n return _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 407, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 408, in <listcomp>\r\n _cast_to_python_objects(\r\n File 
\"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 319, in _cast_to_python_objects\r\n [\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 320, in <listcomp>\r\n _cast_to_python_objects(\r\n File \"/local_scratch/lfainsin/.conda/envs/pyg/lib/python3.10/site-packages/datasets/features/features.py\", line 298, in _cast_to_python_objects\r\n if obj.ndim == 0:\r\nKeyboardInterrupt\r\n```\r\n\r\n</details>\r\n" ]
2023-07-26T14:00:40
2023-07-28T09:21:07
null
CONTRIBUTOR
null
null
null
### Describe the bug Hi! I'm currently working with a large (~150GB) unnormalized dataset at work. The dataset is available on a read-only filesystem internally, and I use a [loading script](https://huggingface.co/docs/datasets/dataset_script) to retrieve it. I want to normalize the features of the dataset, meaning I need to compute the mean and standard deviation for each feature of the entire dataset. I cannot load the entire dataset into RAM as it is too big, so following [this discussion on the huggingface discourse](https://discuss.huggingface.co/t/copy-columns-in-a-dataset-and-compute-statistics-for-a-column/22157) I am using a [map operation](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/main_classes#datasets.Dataset.map) to first compute the metrics and a second map operation to apply them to the dataset. The problem lies in the second mapping, as it gets stuck at ~99%. By checking what the process does (using `htop` and `strace`) it seems to be doing a lot of I/O operations, and I'm not sure why. Obviously, I could always normalize the dataset externally and then load it using a loading script. However, since the internal dataset is updated fairly frequently, using the library to perform normalization automatically would make it much easier for me. ### Steps to reproduce the bug I'm able to reproduce the problem using the following scripts: ```python # random_data.py import datasets import torch _VERSION = "1.0.0" class RandomDataset(datasets.GeneratorBasedBuilder): def _info(self): return datasets.DatasetInfo( version=_VERSION, supervised_keys=None, features=datasets.Features( { "positions": datasets.Array2D( shape=(30000, 3), dtype="float32", ), "normals": datasets.Array2D( shape=(30000, 3), dtype="float32", ), "features": datasets.Array2D( shape=(30000, 6), dtype="float32", ), "scalars": datasets.Sequence( feature=datasets.Value("float32"), length=20, ), }, ), ) def _split_generators(self, dl_manager): return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, # type: ignore gen_kwargs={"nb_samples": 1000}, ), datasets.SplitGenerator( name=datasets.Split.TEST, # type: ignore gen_kwargs={"nb_samples": 100}, ), ] def _generate_examples(self, nb_samples: int): for idx in range(nb_samples): yield idx, { "positions": torch.randn(30000, 3), "normals": torch.randn(30000, 3), "features": torch.randn(30000, 6), "scalars": torch.randn(20), } ``` ```python # main.py import datasets import torch def apply_mean_std( dataset: datasets.Dataset, means: dict[str, torch.Tensor], stds: dict[str, torch.Tensor], ) -> dict[str, torch.Tensor]: """Normalize the dataset using the mean and standard deviation of each feature. Args: dataset (`Dataset`): A huggingface dataset. mean (`dict[str, Tensor]`): A dictionary containing the mean of each feature. std (`dict[str, Tensor]`): A dictionary containing the standard deviation of each feature. Returns: dict: A dictionary containing the normalized dataset. 
""" result = {} for key in means.keys(): # extract data from dataset data: torch.Tensor = dataset[key] # type: ignore # extract mean and std from dict mean = means[key] # type: ignore std = stds[key] # type: ignore # normalize data normalized_data = (data - mean) / std result[key] = normalized_data return result # get dataset ds = datasets.load_dataset( path="random_data.py", split="train", ).with_format("torch") # compute mean (along last axis) means = {key: torch.zeros(ds[key][0].shape[-1]) for key in ds.column_names} means_sq = {key: torch.zeros(ds[key][0].shape[-1]) for key in ds.column_names} for batch in ds.iter(batch_size=8): for key in ds.column_names: data = batch[key] batch_size = data.shape[0] data = data.reshape(-1, data.shape[-1]) means[key] += data.mean(dim=0) / len(ds) * batch_size means_sq[key] += (data**2).mean(dim=0) / len(ds) * batch_size # compute std (along last axis) stds = {key: torch.sqrt(means_sq[key] - means[key] ** 2) for key in ds.column_names} # normalize each feature of the dataset ds_normalized = ds.map( desc="Applying mean/std", # type: ignore function=apply_mean_std, batched=False, fn_kwargs={ "means": means, "stds": stds, }, ) ``` ### Expected behavior Using the previous scripts, the `ds_normalized` mapping completes in ~5 minutes, but any subsequent use of `ds_normalized` is really really slow, for example reapplying `apply_mean_std` to `ds_normalized` takes forever. This is very strange, I'm sure I must be missing something, but I would still expect this to be faster. ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-3.10.0-1160.66.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.12 - Huggingface_hub version: 0.15.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6077/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6077/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6076
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6076/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6076/comments
https://api.github.com/repos/huggingface/datasets/issues/6076/events
https://github.com/huggingface/datasets/pull/6076
1,822,345,597
PR_kwDODunzps5WcGVR
6,076
No gzip encoding from github
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008191 / 0.011353 (-0.003162) | 0.004669 / 0.011008 (-0.006339) | 0.101315 / 0.038508 (0.062807) | 0.090235 / 0.023109 (0.067126) | 0.381265 / 0.275898 (0.105367) | 0.418266 / 0.323480 (0.094786) | 0.006292 / 0.007986 (-0.001693) | 0.003979 / 0.004328 (-0.000349) | 0.075946 / 0.004250 (0.071696) | 0.070678 / 0.037052 (0.033625) | 0.378006 / 0.258489 (0.119517) | 0.425825 / 0.293841 (0.131984) | 0.036325 / 0.128546 (-0.092221) | 0.009814 / 0.075646 (-0.065832) | 0.345687 / 0.419271 (-0.073584) | 0.063846 / 0.043533 (0.020313) | 0.386003 / 0.255139 (0.130864) | 0.400875 / 0.283200 (0.117675) | 0.027806 / 0.141683 (-0.113877) | 1.814810 / 1.452155 (0.362655) | 1.879897 / 1.492716 (0.387180) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218684 / 0.018006 (0.200677) | 0.501715 / 0.000490 (0.501225) | 0.004808 / 0.000200 (0.004608) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035494 / 0.037411 (-0.001917) | 0.100949 / 0.014526 (0.086423) | 0.114639 / 0.176557 (-0.061917) | 0.188908 / 0.737135 (-0.548227) | 0.115794 / 0.296338 (-0.180545) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462537 / 0.215209 (0.247328) | 4.612469 / 2.077655 (2.534814) | 
2.298065 / 1.504120 (0.793945) | 2.088738 / 1.541195 (0.547543) | 2.188072 / 1.468490 (0.719582) | 0.565412 / 4.584777 (-4.019364) | 4.180394 / 3.745712 (0.434681) | 3.848696 / 5.269862 (-1.421165) | 2.391381 / 4.565676 (-2.174296) | 0.067647 / 0.424275 (-0.356628) | 0.008847 / 0.007607 (0.001240) | 0.553288 / 0.226044 (0.327243) | 5.517962 / 2.268929 (3.249033) | 2.866622 / 55.444624 (-52.578002) | 2.439025 / 6.876477 (-4.437452) | 2.740156 / 2.142072 (0.598084) | 0.694796 / 4.805227 (-4.110431) | 0.159022 / 6.500664 (-6.341642) | 0.074471 / 0.075469 (-0.000998) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.534979 / 1.841788 (-0.306808) | 23.297273 / 8.074308 (15.222965) | 16.859178 / 10.191392 (6.667786) | 0.207594 / 0.680424 (-0.472830) | 0.021990 / 0.534201 (-0.512211) | 0.472059 / 0.579283 (-0.107224) | 0.497632 / 0.434364 (0.063268) | 0.565672 / 0.540337 (0.025335) | 0.772485 / 1.386936 (-0.614451) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007777 / 0.011353 (-0.003576) | 0.004679 / 0.011008 (-0.006329) | 0.077317 / 0.038508 (0.038809) | 0.087433 / 0.023109 (0.064324) | 0.437389 / 0.275898 (0.161491) | 0.479562 / 0.323480 (0.156082) | 0.006137 / 0.007986 (-0.001849) | 0.003938 / 0.004328 (-0.000390) | 0.074769 / 0.004250 (0.070518) | 0.066605 / 0.037052 (0.029553) | 0.454865 / 0.258489 (0.196376) | 0.485103 / 0.293841 (0.191262) | 0.036540 / 0.128546 (-0.092006) | 0.009983 / 0.075646 (-0.065664) | 0.083566 / 0.419271 (-0.335706) | 0.059527 / 0.043533 (0.015994) | 0.449154 / 0.255139 (0.194015) | 0.462542 / 0.283200 (0.179342) | 0.027581 / 0.141683 (-0.114102) | 1.776720 / 1.452155 (0.324565) | 1.847920 / 1.492716 (0.355204) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246792 / 0.018006 (0.228786) | 0.494513 / 0.000490 (0.494024) | 0.004376 / 0.000200 (0.004176) | 0.000115 / 0.000054 (0.000061) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037837 / 0.037411 (0.000426) | 0.112752 / 0.014526 (0.098226) | 0.121742 / 0.176557 (-0.054815) | 0.189365 / 0.737135 (-0.547770) | 0.124366 / 0.296338 (-0.171973) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492890 / 0.215209 (0.277681) | 4.920270 / 2.077655 (2.842615) | 2.565350 / 1.504120 (1.061230) | 2.378679 / 1.541195 (0.837484) | 2.483794 / 1.468490 (1.015304) | 0.579623 / 4.584777 (-4.005154) | 4.195924 / 3.745712 (0.450212) | 3.903382 / 5.269862 (-1.366479) | 2.466884 / 4.565676 (-2.098793) | 0.064145 / 0.424275 (-0.360130) | 0.008695 / 0.007607 (0.001088) | 0.579300 / 0.226044 (0.353256) | 5.809064 / 2.268929 (3.540136) | 3.145393 / 55.444624 (-52.299232) | 2.832760 / 6.876477 (-4.043717) | 3.020460 / 2.142072 (0.878388) | 0.700235 / 4.805227 (-4.104992) | 0.161262 / 6.500664 (-6.339402) | 0.076484 / 0.075469 (0.001015) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.606504 / 1.841788 (-0.235284) | 23.747863 / 8.074308 (15.673555) | 17.281712 / 10.191392 (7.090320) | 0.203874 / 0.680424 (-0.476550) | 0.021839 / 0.534201 (-0.512362) | 0.472365 / 0.579283 (-0.106918) | 0.475150 / 0.434364 (0.040786) | 0.571713 / 0.540337 (0.031376) | 0.759210 / 1.386936 (-0.627726) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c3a7fc003b1d181d8e8ece24d5ebd442ec5d6519 \"CML watermark\")\n", "> Some questions: won't this have an impact on downloading time, once we do not longer compress the payload? What is the advantage of this approach over the one with block_size: 0?\r\n\r\nSurely, but this prevents random access which is needed at multiple places in the code (eg to check the compression type).\r\nGithub isn't a good place for big files anyway so we should be fine" ]
2023-07-26T12:46:07
2023-07-27T16:15:11
2023-07-27T16:14:40
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6076", "html_url": "https://github.com/huggingface/datasets/pull/6076", "diff_url": "https://github.com/huggingface/datasets/pull/6076.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6076.patch", "merged_at": "2023-07-27T16:14:40" }
Don't accept gzip encoding from GitHub, otherwise some files are not streamable and seekable. Fixes https://huggingface.co/datasets/code_x_glue_cc_code_to_code_trans/discussions/2#64c0e0c1a04a514ba6303e84 and makes sure https://github.com/huggingface/datasets/issues/2918 works as well.
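For illustration only (this is not the code from the PR): the underlying idea — refusing compressed transfer so that the response bytes match the stored file and byte-range seeking stays meaningful — can be sketched with plain `requests` via an `Accept-Encoding: identity` header:

```python
import requests

# any GitHub-hosted file URL works here; this one is just an example
url = "https://raw.githubusercontent.com/huggingface/datasets/main/README.md"

# Without this header the server may gzip the payload, in which case the
# byte stream is the compressed body and offsets no longer map to the file.
response = requests.get(url, headers={"Accept-Encoding": "identity"}, stream=True)
response.raise_for_status()
print(response.headers.get("Content-Encoding"))  # expected: None (no compression)
print(len(next(response.iter_content(chunk_size=1024))))
```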
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6076/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6075/comments
https://api.github.com/repos/huggingface/datasets/issues/6075/events
https://github.com/huggingface/datasets/issues/6075
1,822,341,398
I_kwDODunzps5snrkW
6,075
Error loading music files using `load_dataset`
{ "login": "susnato", "id": 56069179, "node_id": "MDQ6VXNlcjU2MDY5MTc5", "avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susnato", "html_url": "https://github.com/susnato", "followers_url": "https://api.github.com/users/susnato/followers", "following_url": "https://api.github.com/users/susnato/following{/other_user}", "gists_url": "https://api.github.com/users/susnato/gists{/gist_id}", "starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susnato/subscriptions", "organizations_url": "https://api.github.com/users/susnato/orgs", "repos_url": "https://api.github.com/users/susnato/repos", "events_url": "https://api.github.com/users/susnato/events{/privacy}", "received_events_url": "https://api.github.com/users/susnato/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This code behaves as expected on my local machine or in Colab. Which version of `soundfile` do you have installed? MP3 requires `soundfile>=0.12.1`.", "I upgraded the `soundfile` and it's working now! \r\nThanks @mariosasko for the help!" ]
2023-07-26T12:44:05
2023-07-26T13:08:08
2023-07-26T13:08:08
NONE
null
null
null
### Describe the bug

I tried to load a music file using `datasets.load_dataset()` from the repository - https://huggingface.co/datasets/susnato/pop2piano_real_music_test

I got the following error -

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
    return self._getitem(key)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2788, in _getitem
    formatted_output = format_table(
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 629, in format_table
    return formatter(pa_table, query_type=query_type)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 398, in __call__
    return self.format_column(pa_table)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 442, in format_column
    column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 218, in decode_column
    return self.features.decode_column(column, column_name) if self.features else column
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in decode_column
    [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in <listcomp>
    [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1325, in decode_nested_example
    return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/audio.py", line 184, in decode_example
    array, sampling_rate = sf.read(f)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 372, in read
    with SoundFile(file, 'r', samplerate, channels,
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 740, in __init__
    self._file = self._open(file, mode_int, closefd)
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1264, in _open
    _error_check(_snd.sf_error(file_ptr),
  File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1455, in _error_check
    raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
RuntimeError: Error opening <_io.BufferedReader name='/home/susnato/.cache/huggingface/datasets/downloads/d2b09cb974b967b13f91553297c40c0f02f3c0d4c8356350743598ff48d6f29e'>: Format not recognised.
```

### Steps to reproduce the bug

Code to reproduce the error -

```python
from datasets import load_dataset

ds = load_dataset("susnato/pop2piano_real_music_test", split="test")
print(ds[0])
```

### Expected behavior

I should be able to read the music file without any error.

### Environment info

- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
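For reference, a quick way to check whether the installed `soundfile` can decode MP3 at all (per the resolution in the comments, MP3 support requires `soundfile>=0.12.1`); the file name below is hypothetical:

```python
import soundfile as sf

print(sf.__version__)                   # MP3 decoding needs soundfile >= 0.12.1
print("MP3" in sf.available_formats())  # True if the bundled libsndfile reads MP3

# hypothetical local file; on an older soundfile this raises the same
# "Format not recognised" RuntimeError as in the traceback above
data, sampling_rate = sf.read("some_song.mp3")
print(data.shape, sampling_rate)
```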
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6075/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6074
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6074/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6074/comments
https://api.github.com/repos/huggingface/datasets/issues/6074/events
https://github.com/huggingface/datasets/pull/6074
1,822,299,128
PR_kwDODunzps5Wb8O_
6,074
Misc doc improvements
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006616 / 0.011353 (-0.004737) | 0.003915 / 0.011008 (-0.007093) | 0.083271 / 0.038508 (0.044763) | 0.072595 / 0.023109 (0.049485) | 0.307224 / 0.275898 (0.031326) | 0.337244 / 0.323480 (0.013764) | 0.005296 / 0.007986 (-0.002690) | 0.003325 / 0.004328 (-0.001003) | 0.064589 / 0.004250 (0.060339) | 0.056369 / 0.037052 (0.019316) | 0.310829 / 0.258489 (0.052340) | 0.345563 / 0.293841 (0.051722) | 0.030551 / 0.128546 (-0.097995) | 0.008519 / 0.075646 (-0.067127) | 0.286368 / 0.419271 (-0.132903) | 0.052498 / 0.043533 (0.008966) | 0.308735 / 0.255139 (0.053596) | 0.329234 / 0.283200 (0.046034) | 0.022588 / 0.141683 (-0.119095) | 1.453135 / 1.452155 (0.000981) | 1.525956 / 1.492716 (0.033239) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199417 / 0.018006 (0.181410) | 0.454621 / 0.000490 (0.454131) | 0.004928 / 0.000200 (0.004728) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028436 / 0.037411 (-0.008975) | 0.083722 / 0.014526 (0.069196) | 0.095162 / 0.176557 (-0.081395) | 0.153434 / 0.737135 (-0.583702) | 0.099480 / 0.296338 (-0.196859) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384647 / 0.215209 (0.169438) | 3.838406 / 2.077655 (1.760751) | 1.891267 / 1.504120 (0.387148) | 1.751432 / 1.541195 (0.210238) | 1.737443 / 1.468490 
(0.268953) | 0.487758 / 4.584777 (-4.097019) | 3.635925 / 3.745712 (-0.109787) | 5.208718 / 5.269862 (-0.061144) | 3.029374 / 4.565676 (-1.536302) | 0.057613 / 0.424275 (-0.366662) | 0.007177 / 0.007607 (-0.000430) | 0.455596 / 0.226044 (0.229552) | 4.559969 / 2.268929 (2.291040) | 2.325321 / 55.444624 (-53.119303) | 2.034924 / 6.876477 (-4.841552) | 2.163869 / 2.142072 (0.021796) | 0.583477 / 4.805227 (-4.221750) | 0.132870 / 6.500664 (-6.367795) | 0.059618 / 0.075469 (-0.015851) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263751 / 1.841788 (-0.578037) | 19.740004 / 8.074308 (11.665696) | 14.410980 / 10.191392 (4.219588) | 0.170367 / 0.680424 (-0.510057) | 0.018225 / 0.534201 (-0.515976) | 0.390101 / 0.579283 (-0.189182) | 0.404298 / 0.434364 (-0.030066) | 0.455295 / 0.540337 (-0.085043) | 0.621179 / 1.386936 (-0.765757) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006580 / 0.011353 (-0.004773) | 0.004078 / 0.011008 (-0.006930) | 0.065842 / 0.038508 (0.027334) | 0.074494 / 0.023109 (0.051385) | 0.403644 / 0.275898 (0.127746) | 0.430204 / 0.323480 (0.106724) | 0.005343 / 0.007986 (-0.002643) | 0.003366 / 0.004328 (-0.000963) | 0.064858 / 0.004250 (0.060607) | 0.056252 / 0.037052 (0.019200) | 0.412556 / 0.258489 (0.154067) | 0.434099 / 0.293841 (0.140258) | 0.031518 / 0.128546 (-0.097028) | 0.008543 / 0.075646 (-0.067104) | 0.071658 / 0.419271 (-0.347613) | 0.049962 / 0.043533 (0.006430) | 0.398511 / 0.255139 (0.143372) | 0.415908 / 0.283200 (0.132708) | 0.025011 / 0.141683 (-0.116672) | 1.492350 / 1.452155 (0.040195) | 1.552996 / 1.492716 (0.060280) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204971 / 0.018006 (0.186964) | 0.439965 / 0.000490 (0.439475) | 0.002071 / 0.000200 (0.001872) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031673 / 0.037411 (-0.005738) | 0.087529 / 0.014526 (0.073004) | 0.099882 / 0.176557 (-0.076675) | 0.156994 / 0.737135 (-0.580141) | 0.101421 / 0.296338 (-0.194918) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407480 / 0.215209 (0.192271) | 4.069123 / 2.077655 (1.991468) | 2.081288 / 1.504120 (0.577169) | 1.920367 / 1.541195 (0.379172) | 1.981053 / 1.468490 (0.512563) | 0.481995 / 4.584777 (-4.102782) | 3.546486 / 3.745712 (-0.199226) | 5.133150 / 5.269862 (-0.136712) | 3.056444 / 4.565676 (-1.509232) | 0.056650 / 0.424275 (-0.367625) | 0.007746 / 0.007607 (0.000139) | 0.490891 / 0.226044 (0.264847) | 4.902160 / 2.268929 (2.633232) | 2.564726 / 55.444624 (-52.879899) | 2.234988 / 6.876477 (-4.641489) | 2.387656 / 2.142072 (0.245583) | 0.576315 / 4.805227 (-4.228912) | 0.132065 / 6.500664 (-6.368599) | 0.060728 / 0.075469 (-0.014741) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.370568 / 1.841788 (-0.471220) | 19.883159 / 8.074308 (11.808851) | 14.442066 / 10.191392 (4.250674) | 0.150119 / 0.680424 (-0.530305) | 0.018359 / 0.534201 (-0.515842) | 0.394128 / 0.579283 (-0.185155) | 0.411697 / 0.434364 (-0.022667) | 0.460580 / 0.540337 (-0.079757) | 0.608490 / 1.386936 (-0.778446) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#035d0cf842b82b14059999baa78e8d158dfbed12 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "merging now if you don't mind - this way I can make a patch release" ]
2023-07-26T12:20:54
2023-07-27T16:16:28
2023-07-27T16:16:02
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6074", "html_url": "https://github.com/huggingface/datasets/pull/6074", "diff_url": "https://github.com/huggingface/datasets/pull/6074.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6074.patch", "merged_at": "2023-07-27T16:16:02" }
Removes the warning that a dataset loading script is required to define multiple configurations, since the README YAML can be used instead (for simple cases). Also deletes the section about using `BatchSampler` in `torch<=1.12.1` to speed up loading, as `torch 1.12.1` is over a year old (and `torch 2.0` has been out for a while).
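For context, a minimal sketch of the `BatchSampler` pattern the deleted section described (relevant on `torch<=1.12.1`, where a `DataLoader` would otherwise fetch rows one at a time; the toy dataset below is illustrative):

```python
import datasets
from torch.utils.data import BatchSampler, DataLoader, RandomSampler

# toy stand-in; any map-style dataset supporting batched indexing ds[[i, j, ...]] works
ds = datasets.Dataset.from_dict({"x": list(range(1000))}).with_format("torch")

sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False)
# batch_size=None disables automatic batching: each "index" the loader passes
# to the dataset is a whole list of indices, so a batch is fetched in one call
loader = DataLoader(ds, sampler=sampler, batch_size=None)
for batch in loader:
    print(batch["x"].shape)  # torch.Size([32])
    break
```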
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6074/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6073
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6073/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6073/comments
https://api.github.com/repos/huggingface/datasets/issues/6073/events
https://github.com/huggingface/datasets/issues/6073
1,822,167,804
I_kwDODunzps5snBL8
6,073
version 2.3.2: load_dataset() data_files can't include .xxxx in path
{ "login": "BUAAChuanWang", "id": 45893496, "node_id": "MDQ6VXNlcjQ1ODkzNDk2", "avatar_url": "https://avatars.githubusercontent.com/u/45893496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BUAAChuanWang", "html_url": "https://github.com/BUAAChuanWang", "followers_url": "https://api.github.com/users/BUAAChuanWang/followers", "following_url": "https://api.github.com/users/BUAAChuanWang/following{/other_user}", "gists_url": "https://api.github.com/users/BUAAChuanWang/gists{/gist_id}", "starred_url": "https://api.github.com/users/BUAAChuanWang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BUAAChuanWang/subscriptions", "organizations_url": "https://api.github.com/users/BUAAChuanWang/orgs", "repos_url": "https://api.github.com/users/BUAAChuanWang/repos", "events_url": "https://api.github.com/users/BUAAChuanWang/events{/privacy}", "received_events_url": "https://api.github.com/users/BUAAChuanWang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Version 2.3.2 is over one year old, so please use the latest release (2.14.0) to get the expected behavior. Version 2.3.2 does not contain some fixes we made to fix resolving hidden files/directories (starting with a dot)." ]
2023-07-26T11:09:31
2023-08-29T15:53:59
2023-08-29T15:53:59
NONE
null
null
null
### Describe the bug

First, I cd into the workdir. Then, I just use `load_dataset("json", data_files={"train": "/a/b/c/.d/train/train.json", "test": "/a/b/c/.d/train/test.json"})`, which doesn't work and raises `FileNotFoundError: Unable to find '/a/b/c/.d/train/train.jsonl' at /a/b/c/.d/`.

When I debug, it works fine in version 2.1.2, so there may be a bug in the path joining.

Here is the whole bug report:

```
/x/datasets/load.py:1656 in load_dataset

  1653 │   ignore_verifications = ignore_verifications or save_infos
  1654 │
  1655 │   # Create a dataset builder
❱ 1656 │   builder_instance = load_dataset_builder(
  1657 │       path=path,
  1658 │       name=name,
  1659 │       data_dir=data_dir,

/x/datasets/load.py:1439 in load_dataset_builder

  1436 │   if use_auth_token is not None:
  1437 │       download_config = download_config.copy() if download_config e
  1438 │       download_config.use_auth_token = use_auth_token
❱ 1439 │   dataset_module = dataset_module_factory(
  1440 │       path,
  1441 │       revision=revision,
  1442 │       download_config=download_config,

/x/datasets/load.py:1097 in dataset_module_factory

  1094 │
  1095 │   # Try packaged
  1096 │   if path in _PACKAGED_DATASETS_MODULES:
❱ 1097 │       return PackagedDatasetModuleFactory(
  1098 │           path,
  1099 │           data_dir=data_dir,
  1100 │           data_files=data_files,

/x/datasets/load.py:743 in get_module

   740 │       if self.data_dir is not None
   741 │       else get_patterns_locally(str(Path().resolve()))
   742 │   )
❱  743 │   data_files = DataFilesDict.from_local_or_remote(
   744 │       patterns,
   745 │       use_auth_token=self.download_config.use_auth_token,
   746 │       base_path=str(Path(self.data_dir).resolve()) if self.data

/x/datasets/data_files.py:590 in from_local_or_remote

   587 │   out = cls()
   588 │   for key, patterns_for_key in patterns.items():
   589 │       out[key] = (
❱  590 │           DataFilesList.from_local_or_remote(
   591 │               patterns_for_key,
   592 │               base_path=base_path,
   593 │               allowed_extensions=allowed_extensions,

/x/datasets/data_files.py:558 in from_local_or_remote

   555 │       use_auth_token: Optional[Union[bool, str]] = None,
   556 │   ) -> "DataFilesList":
   557 │       base_path = base_path if base_path is not None else str(Path()
❱  558 │       data_files = resolve_patterns_locally_or_by_urls(base_path, pa
   559 │       origin_metadata = _get_origin_metadata_locally_or_by_urls(data
   560 │       return cls(data_files, origin_metadata)
   561 │

/x/datasets/data_files.py:195 in resolve_patterns_locally_or_by_urls

   192 │   if is_remote_url(pattern):
   193 │       data_files.append(Url(pattern))
   194 │   else:
❱  195 │       for path in _resolve_single_pattern_locally(base_path, pat
   196 │           data_files.append(path)
   197 │
   198 │   if not data_files:

/x/datasets/data_files.py:145 in _resolve_single_pattern_locally

   142 │   error_msg = f"Unable to find '{pattern}' at {Path(base_path).r
   143 │   if allowed_extensions is not None:
   144 │       error_msg += f" with any supported extension {list(allowed
❱  145 │   raise FileNotFoundError(error_msg)
   146 │   return sorted(out)
   147
```

### Steps to reproduce the bug

1. Version=2.3.2
2. In a shell, cd into the workdir (cd /a/b/c/.d/)
3. load_dataset("json", data_files={"train": "/a/b/c/.d/train/train.json", "test": "/a/b/c/.d/train/test.json"})

### Expected behavior

Fix it please~

### Environment info

2.3.2
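For reference, a minimal sketch of the same call on a recent `datasets` release (hidden-directory resolution was fixed after 2.3.2, as noted in the comments); the paths are the hypothetical ones from the report:

```python
import datasets

print(datasets.__version__)  # should be a recent release, e.g. >= 2.14.0

ds = datasets.load_dataset(
    "json",
    data_files={
        "train": "/a/b/c/.d/train/train.json",
        "test": "/a/b/c/.d/train/test.json",
    },
)
```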
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6073/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6073/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6072
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6072/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6072/comments
https://api.github.com/repos/huggingface/datasets/issues/6072/events
https://github.com/huggingface/datasets/pull/6072
1,822,123,560
PR_kwDODunzps5WbWFN
6,072
Fix fsspec storage_options from load_dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007617 / 0.011353 (-0.003736) | 0.004580 / 0.011008 (-0.006428) | 0.100913 / 0.038508 (0.062405) | 0.087703 / 0.023109 (0.064594) | 0.424159 / 0.275898 (0.148261) | 0.467195 / 0.323480 (0.143715) | 0.006890 / 0.007986 (-0.001096) | 0.003765 / 0.004328 (-0.000564) | 0.077513 / 0.004250 (0.073262) | 0.064889 / 0.037052 (0.027837) | 0.422349 / 0.258489 (0.163860) | 0.477391 / 0.293841 (0.183550) | 0.036025 / 0.128546 (-0.092522) | 0.009939 / 0.075646 (-0.065707) | 0.342409 / 0.419271 (-0.076862) | 0.061568 / 0.043533 (0.018035) | 0.431070 / 0.255139 (0.175931) | 0.462008 / 0.283200 (0.178809) | 0.027480 / 0.141683 (-0.114203) | 1.802271 / 1.452155 (0.350116) | 1.861336 / 1.492716 (0.368620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255806 / 0.018006 (0.237800) | 0.507969 / 0.000490 (0.507479) | 0.010060 / 0.000200 (0.009860) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032286 / 0.037411 (-0.005125) | 0.104468 / 0.014526 (0.089942) | 0.112707 / 0.176557 (-0.063850) | 0.181285 / 0.737135 (-0.555850) | 0.113180 / 0.296338 (-0.183158) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449265 / 0.215209 (0.234056) | 4.465941 / 2.077655 (2.388287) | 
2.177889 / 1.504120 (0.673769) | 1.969864 / 1.541195 (0.428669) | 2.077502 / 1.468490 (0.609011) | 0.561607 / 4.584777 (-4.023170) | 4.281873 / 3.745712 (0.536161) | 4.975352 / 5.269862 (-0.294510) | 2.907121 / 4.565676 (-1.658555) | 0.070205 / 0.424275 (-0.354070) | 0.009164 / 0.007607 (0.001557) | 0.581921 / 0.226044 (0.355876) | 5.538667 / 2.268929 (3.269739) | 2.798853 / 55.444624 (-52.645771) | 2.314015 / 6.876477 (-4.562462) | 2.584836 / 2.142072 (0.442763) | 0.672333 / 4.805227 (-4.132894) | 0.153828 / 6.500664 (-6.346836) | 0.069757 / 0.075469 (-0.005712) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.559670 / 1.841788 (-0.282118) | 23.994639 / 8.074308 (15.920331) | 16.856160 / 10.191392 (6.664768) | 0.195555 / 0.680424 (-0.484869) | 0.021586 / 0.534201 (-0.512615) | 0.469295 / 0.579283 (-0.109989) | 0.481582 / 0.434364 (0.047218) | 0.588667 / 0.540337 (0.048329) | 0.734347 / 1.386936 (-0.652589) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009614 / 0.011353 (-0.001739) | 0.004616 / 0.011008 (-0.006392) | 0.077223 / 0.038508 (0.038715) | 0.103074 / 0.023109 (0.079965) | 0.447834 / 0.275898 (0.171936) | 0.524696 / 0.323480 (0.201216) | 0.007120 / 0.007986 (-0.000866) | 0.003890 / 0.004328 (-0.000438) | 0.076406 / 0.004250 (0.072156) | 0.073488 / 0.037052 (0.036436) | 0.466221 / 0.258489 (0.207732) | 0.532206 / 0.293841 (0.238365) | 0.037596 / 0.128546 (-0.090950) | 0.010029 / 0.075646 (-0.065617) | 0.084313 / 0.419271 (-0.334959) | 0.060088 / 0.043533 (0.016555) | 0.437792 / 0.255139 (0.182653) | 0.512850 / 0.283200 (0.229650) | 0.032424 / 0.141683 (-0.109259) | 1.762130 / 1.452155 (0.309975) | 1.946097 / 1.492716 (0.453381) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250774 / 0.018006 (0.232768) | 0.506869 / 0.000490 (0.506379) | 0.008232 / 0.000200 (0.008032) | 0.000164 / 0.000054 (0.000110) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037779 / 0.037411 (0.000368) | 0.111933 / 0.014526 (0.097407) | 0.122385 / 0.176557 (-0.054172) | 0.190372 / 0.737135 (-0.546763) | 0.122472 / 0.296338 (-0.173866) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488502 / 0.215209 (0.273293) | 4.878114 / 2.077655 (2.800459) | 2.504144 / 1.504120 (1.000024) | 2.321077 / 1.541195 (0.779883) | 2.416797 / 1.468490 (0.948307) | 0.583582 / 4.584777 (-4.001195) | 4.277896 / 3.745712 (0.532184) | 3.874780 / 5.269862 (-1.395082) | 2.540099 / 4.565676 (-2.025577) | 0.068734 / 0.424275 (-0.355541) | 0.009158 / 0.007607 (0.001550) | 0.578401 / 0.226044 (0.352357) | 5.763354 / 2.268929 (3.494426) | 3.167771 / 55.444624 (-52.276853) | 2.675220 / 6.876477 (-4.201257) | 2.920927 / 2.142072 (0.778855) | 0.673948 / 4.805227 (-4.131280) | 0.157908 / 6.500664 (-6.342756) | 0.071672 / 0.075469 (-0.003797) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.635120 / 1.841788 (-0.206668) | 24.853480 / 8.074308 (16.779172) | 17.162978 / 10.191392 (6.971586) | 0.209577 / 0.680424 (-0.470847) | 0.030110 / 0.534201 (-0.504091) | 0.546970 / 0.579283 (-0.032313) | 0.581912 / 0.434364 (0.147548) | 0.571460 / 0.540337 (0.031123) | 0.823411 / 1.386936 (-0.563525) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#83b792dddd074ccd007c407f942f6870aac7ee84 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006674 / 0.011353 (-0.004679) | 0.004198 / 0.011008 (-0.006810) | 0.084859 / 0.038508 (0.046351) | 0.076065 / 0.023109 (0.052955) | 0.316065 / 0.275898 (0.040167) | 0.352097 / 0.323480 (0.028617) | 0.005610 / 0.007986 (-0.002376) | 0.003600 / 0.004328 (-0.000729) | 0.064921 / 0.004250 (0.060671) | 0.054493 / 0.037052 (0.017441) | 0.318125 / 0.258489 (0.059636) | 0.370183 / 0.293841 (0.076342) | 0.031141 / 0.128546 (-0.097405) | 0.008755 / 0.075646 (-0.066891) | 0.288241 / 0.419271 (-0.131030) | 0.052379 / 0.043533 (0.008846) | 0.328147 / 0.255139 (0.073008) | 0.347548 / 0.283200 (0.064348) | 0.024393 / 0.141683 (-0.117290) | 1.480646 / 1.452155 (0.028492) | 1.575867 / 1.492716 (0.083151) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268978 / 0.018006 (0.250971) | 0.586470 / 0.000490 (0.585980) | 0.003190 / 0.000200 (0.002990) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030595 / 0.037411 (-0.006816) | 0.083037 / 0.014526 (0.068511) | 0.103706 / 0.176557 (-0.072850) | 0.164104 / 0.737135 (-0.573031) | 0.104536 / 0.296338 (-0.191802) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382274 / 0.215209 (0.167065) | 3.811878 / 2.077655 (1.734223) | 1.840098 / 1.504120 (0.335978) | 1.670949 / 1.541195 (0.129754) | 1.763755 / 1.468490 (0.295264) | 0.479526 / 4.584777 (-4.105251) | 3.544443 / 3.745712 (-0.201269) | 3.263004 / 5.269862 (-2.006858) | 2.092801 / 4.565676 (-2.472875) | 0.057167 / 0.424275 (-0.367108) | 0.007450 / 0.007607 (-0.000157) | 0.463731 / 0.226044 (0.237686) | 4.624630 / 2.268929 (2.355701) | 2.327078 / 55.444624 (-53.117546) | 1.977734 / 6.876477 (-4.898743) | 2.237152 / 2.142072 (0.095079) | 0.573210 / 4.805227 (-4.232018) | 0.132095 / 6.500664 (-6.368569) | 0.060283 / 0.075469 (-0.015186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243404 / 1.841788 (-0.598384) | 20.306778 / 8.074308 (12.232470) | 14.561660 / 10.191392 (4.370268) | 0.170826 / 0.680424 (-0.509598) | 0.018574 / 0.534201 (-0.515627) | 0.392367 / 0.579283 (-0.186916) | 0.402918 / 0.434364 (-0.031446) | 0.476629 / 0.540337 
(-0.063708) | 0.653709 / 1.386936 (-0.733227) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006562 / 0.011353 (-0.004791) | 0.004092 / 0.011008 (-0.006916) | 0.065951 / 0.038508 (0.027443) | 0.078090 / 0.023109 (0.054981) | 0.369679 / 0.275898 (0.093781) | 0.411442 / 0.323480 (0.087962) | 0.005646 / 0.007986 (-0.002339) | 0.003537 / 0.004328 (-0.000791) | 0.066024 / 0.004250 (0.061773) | 0.058947 / 0.037052 (0.021895) | 0.389219 / 0.258489 (0.130730) | 0.414200 / 0.293841 (0.120359) | 0.030372 / 0.128546 (-0.098174) | 0.008631 / 0.075646 (-0.067015) | 0.071692 / 0.419271 (-0.347580) | 0.048035 / 0.043533 (0.004502) | 0.376960 / 0.255139 (0.121821) | 0.389847 / 0.283200 (0.106648) | 0.023940 / 0.141683 (-0.117743) | 1.487633 / 1.452155 (0.035479) | 1.561680 / 1.492716 (0.068964) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.301467 / 0.018006 (0.283461) | 0.544159 / 0.000490 (0.543669) | 0.000408 / 0.000200 (0.000208) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030939 / 0.037411 (-0.006472) | 0.087432 / 0.014526 (0.072906) | 0.103263 / 0.176557 (-0.073293) | 0.154551 / 0.737135 (-0.582585) | 0.104631 / 0.296338 (-0.191707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422348 / 0.215209 (0.207139) | 4.206003 / 2.077655 (2.128348) | 2.212619 / 1.504120 (0.708499) | 2.049616 / 1.541195 (0.508421) | 2.139093 
/ 1.468490 (0.670603) | 0.489647 / 4.584777 (-4.095130) | 3.523291 / 3.745712 (-0.222422) | 3.277657 / 5.269862 (-1.992205) | 2.111353 / 4.565676 (-2.454324) | 0.057597 / 0.424275 (-0.366679) | 0.007675 / 0.007607 (0.000068) | 0.493068 / 0.226044 (0.267023) | 4.939493 / 2.268929 (2.670565) | 2.695995 / 55.444624 (-52.748630) | 2.374904 / 6.876477 (-4.501573) | 2.600110 / 2.142072 (0.458038) | 0.586306 / 4.805227 (-4.218921) | 0.134137 / 6.500664 (-6.366527) | 0.061897 / 0.075469 (-0.013572) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330628 / 1.841788 (-0.511160) | 20.557964 / 8.074308 (12.483656) | 14.251632 / 10.191392 (4.060240) | 0.148772 / 0.680424 (-0.531652) | 0.018383 / 0.534201 (-0.515817) | 0.392552 / 0.579283 (-0.186731) | 0.403959 / 0.434364 (-0.030405) | 0.462154 / 0.540337 (-0.078184) | 0.608832 / 1.386936 (-0.778104) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7a291b2b659a356199dff0ab004ad3845459034b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007659 / 0.011353 (-0.003694) | 0.004500 / 0.011008 (-0.006508) | 0.100379 / 0.038508 (0.061871) | 0.079731 / 0.023109 (0.056622) | 0.381788 / 0.275898 (0.105890) | 0.416524 / 0.323480 (0.093044) | 0.004446 / 0.007986 (-0.003539) | 0.003752 / 0.004328 (-0.000577) | 0.074956 / 0.004250 (0.070706) | 0.062885 / 0.037052 (0.025832) | 0.383849 / 0.258489 (0.125360) | 0.433906 / 0.293841 (0.140065) | 0.036079 / 0.128546 (-0.092468) | 0.009927 / 0.075646 (-0.065719) | 0.343879 / 0.419271 (-0.075393) | 0.061055 / 0.043533 (0.017523) | 0.376703 / 0.255139 (0.121564) | 0.428111 / 0.283200 (0.144911) | 0.028667 / 0.141683 (-0.113016) | 1.777755 / 1.452155 (0.325600) | 1.878283 / 1.492716 (0.385567) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220829 / 0.018006 (0.202823) | 0.506406 / 0.000490 (0.505916) | 0.005550 / 
0.000200 (0.005350) | 0.000123 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034928 / 0.037411 (-0.002483) | 0.103873 / 0.014526 (0.089347) | 0.114352 / 0.176557 (-0.062204) | 0.188218 / 0.737135 (-0.548918) | 0.117343 / 0.296338 (-0.178995) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459148 / 0.215209 (0.243939) | 4.582092 / 2.077655 (2.504437) | 2.275603 / 1.504120 (0.771483) | 2.058155 / 1.541195 (0.516960) | 2.163886 / 1.468490 (0.695396) | 0.573033 / 4.584777 (-4.011744) | 4.414891 / 3.745712 (0.669178) | 7.280433 / 5.269862 (2.010572) | 4.119414 / 4.565676 (-0.446262) | 0.067432 / 0.424275 (-0.356843) | 0.008687 / 0.007607 (0.001080) | 0.556029 / 0.226044 (0.329984) | 5.557192 / 2.268929 (3.288264) | 2.921596 / 55.444624 (-52.523028) | 2.520249 / 6.876477 (-4.356228) | 2.778965 / 2.142072 (0.636893) | 0.684765 / 4.805227 (-4.120462) | 0.159228 / 6.500664 (-6.341436) | 0.074015 / 0.075469 (-0.001454) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.534470 / 1.841788 (-0.307318) | 23.630693 / 8.074308 (15.556385) | 17.058142 / 10.191392 (6.866750) | 0.200909 / 0.680424 (-0.479515) | 0.021637 / 0.534201 (-0.512564) | 0.467417 / 0.579283 (-0.111866) | 0.460456 / 0.434364 (0.026092) | 0.541131 / 0.540337 (0.000793) | 0.728560 / 1.386936 (-0.658376) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007625 / 0.011353 (-0.003727) | 0.004495 / 0.011008 (-0.006513) | 0.076373 / 0.038508 (0.037865) | 0.085260 / 0.023109 (0.062151) | 0.475778 / 0.275898 (0.199880) | 0.504604 / 0.323480 (0.181124) | 0.006733 / 0.007986 (-0.001253) | 0.003751 / 0.004328 (-0.000578) | 0.074993 / 0.004250 (0.070743) | 0.064704 / 0.037052 (0.027652) | 0.490072 / 0.258489 (0.231583) | 0.507560 / 0.293841 (0.213719) | 0.036765 / 0.128546 (-0.091781) | 0.009955 / 0.075646 (-0.065692) | 0.082452 / 0.419271 (-0.336820) | 0.057131 / 0.043533 (0.013598) | 0.467664 / 0.255139 (0.212525) | 0.482143 / 0.283200 (0.198943) | 0.025396 / 0.141683 (-0.116287) | 1.807587 / 1.452155 (0.355433) | 1.853355 / 1.492716 (0.360639) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250543 / 0.018006 (0.232537) | 0.495685 / 0.000490 (0.495196) | 0.000415 / 0.000200 (0.000215) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035795 / 0.037411 (-0.001616) | 0.105954 / 0.014526 (0.091428) | 0.120158 / 0.176557 (-0.056399) | 0.181714 / 0.737135 (-0.555422) | 0.121242 / 0.296338 (-0.175097) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488241 / 0.215209 (0.273032) | 4.866916 / 2.077655 (2.789262) | 2.531530 / 1.504120 (1.027410) | 2.360642 / 1.541195 (0.819448) | 2.457320 / 1.468490 (0.988830) | 0.571224 / 4.584777 (-4.013553) | 4.339042 / 3.745712 (0.593330) | 3.672812 / 5.269862 (-1.597050) | 2.364535 / 4.565676 (-2.201142) | 0.067004 / 0.424275 (-0.357271) | 0.009019 / 0.007607 (0.001412) | 0.563751 / 0.226044 (0.337707) | 5.664917 / 2.268929 (3.395989) | 3.043316 / 55.444624 (-52.401308) | 2.682722 / 6.876477 (-4.193755) | 2.863482 / 2.142072 (0.721409) | 0.666171 / 4.805227 (-4.139056) | 0.151862 / 6.500664 (-6.348802) | 0.071199 / 0.075469 (-0.004271) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.601880 / 1.841788 (-0.239907) | 23.069073 / 8.074308 (14.994765) | 16.918377 / 10.191392 (6.726985) | 0.173614 / 0.680424 (-0.506810) | 0.021843 / 0.534201 (-0.512358) | 0.470531 / 0.579283 (-0.108753) | 0.471152 / 0.434364 (0.036788) | 0.550968 / 0.540337 (0.010631) | 0.718869 / 1.386936 (-0.668067) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f9e6eea46fc9503765c125395e30e26c1ae2e084 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007530 / 0.011353 (-0.003823) | 0.004151 / 0.011008 (-0.006858) | 0.098490 / 0.038508 (0.059982) | 0.086955 / 0.023109 (0.063846) | 0.362133 / 0.275898 (0.086235) | 0.391402 / 0.323480 (0.067922) | 0.006274 / 0.007986 (-0.001712) | 0.003711 / 0.004328 (-0.000618) | 0.073519 / 0.004250 (0.069269) | 0.066170 / 0.037052 (0.029118) | 0.379057 / 0.258489 (0.120568) | 0.398132 / 0.293841 (0.104291) | 0.033936 / 0.128546 (-0.094610) | 0.009977 / 0.075646 (-0.065670) | 0.323766 / 0.419271 (-0.095505) | 0.078615 / 0.043533 (0.035082) | 0.352403 / 0.255139 (0.097264) | 0.386607 / 0.283200 (0.103407) | 0.036579 / 0.141683 (-0.105103) | 1.691899 / 1.452155 (0.239745) | 1.819396 / 1.492716 (0.326680) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216888 / 0.018006 (0.198882) | 0.465781 / 0.000490 (0.465291) | 0.006197 / 0.000200 (0.005997) | 0.000086 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032870 / 0.037411 (-0.004542) | 0.096026 / 0.014526 (0.081500) | 0.111093 / 0.176557 (-0.065464) | 0.185982 / 0.737135 (-0.551154) | 0.106967 / 0.296338 (-0.189371) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441567 / 0.215209 
(0.226358) | 4.353813 / 2.077655 (2.276158) | 2.176034 / 1.504120 (0.671914) | 1.969631 / 1.541195 (0.428437) | 2.048821 / 1.468490 (0.580330) | 0.549144 / 4.584777 (-4.035633) | 4.016166 / 3.745712 (0.270453) | 3.764249 / 5.269862 (-1.505613) | 2.293995 / 4.565676 (-2.271681) | 0.065227 / 0.424275 (-0.359048) | 0.008303 / 0.007607 (0.000695) | 0.513783 / 0.226044 (0.287738) | 5.247617 / 2.268929 (2.978689) | 2.782114 / 55.444624 (-52.662510) | 2.342776 / 6.876477 (-4.533701) | 2.621569 / 2.142072 (0.479497) | 0.679336 / 4.805227 (-4.125891) | 0.152061 / 6.500664 (-6.348603) | 0.070294 / 0.075469 (-0.005175) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.471778 / 1.841788 (-0.370010) | 22.714904 / 8.074308 (14.640596) | 15.607991 / 10.191392 (5.416599) | 0.172592 / 0.680424 (-0.507832) | 0.021799 / 0.534201 (-0.512402) | 0.462740 / 0.579283 (-0.116543) | 0.490885 / 0.434364 (0.056521) | 0.552997 / 0.540337 (0.012660) | 0.763784 / 1.386936 (-0.623152) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007466 / 0.011353 (-0.003886) | 0.004322 / 0.011008 (-0.006686) | 0.074331 / 0.038508 (0.035823) | 0.085315 / 0.023109 (0.062206) | 0.409284 / 0.275898 (0.133386) | 0.464584 / 0.323480 (0.141104) | 0.005651 / 0.007986 (-0.002335) | 0.003577 / 0.004328 (-0.000751) | 0.070250 / 0.004250 (0.066000) | 0.059780 / 0.037052 (0.022727) | 0.419668 / 0.258489 (0.161179) | 0.462984 / 0.293841 (0.169143) | 0.034159 / 0.128546 (-0.094387) | 0.008999 / 0.075646 (-0.066647) | 0.076302 / 0.419271 (-0.342969) | 0.052274 / 0.043533 (0.008741) | 0.425938 / 0.255139 (0.170799) | 0.430399 / 0.283200 (0.147200) | 0.025017 / 0.141683 (-0.116666) | 1.680697 / 1.452155 (0.228542) | 1.774677 / 1.492716 (0.281960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291514 / 0.018006 (0.273508) | 0.461175 / 0.000490 (0.460685) | 0.023061 / 0.000200 (0.022861) | 0.000120 / 0.000054 
(0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033950 / 0.037411 (-0.003462) | 0.100032 / 0.014526 (0.085506) | 0.118308 / 0.176557 (-0.058249) | 0.183601 / 0.737135 (-0.553535) | 0.116936 / 0.296338 (-0.179402) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478779 / 0.215209 (0.263570) | 4.709505 / 2.077655 (2.631850) | 2.457442 / 1.504120 (0.953322) | 2.213737 / 1.541195 (0.672542) | 2.340642 / 1.468490 (0.872152) | 0.567187 / 4.584777 (-4.017590) | 3.923061 / 3.745712 (0.177349) | 3.752989 / 5.269862 (-1.516873) | 2.324028 / 4.565676 (-2.241649) | 0.064471 / 0.424275 (-0.359804) | 0.008845 / 0.007607 (0.001238) | 0.547447 / 0.226044 (0.321402) | 5.599435 / 2.268929 (3.330506) | 2.980547 / 55.444624 (-52.464077) | 2.754908 / 6.876477 (-4.121569) | 2.832978 / 2.142072 (0.690906) | 0.635059 / 4.805227 (-4.170168) | 0.153478 / 6.500664 (-6.347187) | 0.067146 / 0.075469 (-0.008323) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.555588 / 1.841788 (-0.286200) | 22.828906 / 8.074308 (14.754597) | 16.211008 / 10.191392 (6.019616) | 0.168009 / 0.680424 (-0.512415) | 0.021966 / 0.534201 (-0.512235) | 0.464872 / 0.579283 (-0.114411) | 0.460429 / 0.434364 (0.026065) | 0.530498 / 0.540337 (-0.009839) | 0.705020 / 1.386936 (-0.681916) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#deb9e703237c8310c5a6db04f54d54368e951edd \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005964 / 0.011353 (-0.005389) | 0.003644 / 0.011008 (-0.007364) | 0.079607 / 0.038508 (0.041099) | 0.058387 / 0.023109 (0.035278) | 0.312226 / 0.275898 (0.036328) | 0.349206 / 0.323480 (0.025726) | 0.004715 / 0.007986 (-0.003271) | 0.002869 / 0.004328 (-0.001460) | 0.061668 / 0.004250 (0.057417) | 0.045694 / 0.037052 (0.008642) | 0.313516 / 0.258489 (0.055027) | 0.357543 / 0.293841 (0.063702) | 0.027179 / 0.128546 (-0.101367) | 0.007961 / 0.075646 (-0.067686) | 0.262473 / 0.419271 (-0.156798) | 0.045588 / 0.043533 (0.002055) | 0.313102 / 0.255139 (0.057963) | 0.368686 / 0.283200 (0.085486) | 0.020556 / 0.141683 (-0.121127) | 1.447258 / 1.452155 (-0.004897) | 1.527319 / 1.492716 (0.034602) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199417 / 0.018006 (0.181411) | 0.422155 / 0.000490 (0.421665) | 0.004972 / 0.000200 (0.004772) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023539 / 0.037411 (-0.013872) | 0.073055 / 0.014526 (0.058529) | 0.083631 / 0.176557 (-0.092926) | 0.145923 / 0.737135 (-0.591212) | 0.083820 / 0.296338 (-0.212518) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396305 / 0.215209 (0.181096) | 3.967065 / 2.077655 (1.889410) | 2.101109 / 1.504120 (0.596989) | 1.958817 / 1.541195 (0.417622) | 2.037894 / 1.468490 (0.569404) | 0.496955 / 4.584777 (-4.087822) | 3.078948 / 3.745712 (-0.666764) | 3.363655 / 5.269862 (-1.906207) | 2.087659 / 4.565676 (-2.478018) | 0.057171 / 0.424275 (-0.367104) | 0.006410 / 0.007607 (-0.001197) | 0.470535 / 0.226044 (0.244491) | 4.715259 / 2.268929 (2.446330) | 2.355510 / 55.444624 (-53.089114) | 2.025270 / 6.876477 (-4.851207) | 2.210401 / 2.142072 (0.068329) | 0.580538 / 4.805227 (-4.224689) | 0.125068 / 6.500664 (-6.375596) | 0.059871 / 0.075469 (-0.015598) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245468 / 1.841788 (-0.596320) | 18.322042 / 8.074308 (10.247734) | 13.609726 / 10.191392 (3.418334) | 0.143623 / 0.680424 (-0.536801) | 0.017068 / 0.534201 (-0.517133) | 0.330758 / 0.579283 (-0.248525) | 0.339946 / 0.434364 (-0.094418) | 
0.377861 / 0.540337 (-0.162476) | 0.524593 / 1.386936 (-0.862343) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006049 / 0.011353 (-0.005304) | 0.003737 / 0.011008 (-0.007271) | 0.062816 / 0.038508 (0.024308) | 0.063768 / 0.023109 (0.040658) | 0.362001 / 0.275898 (0.086103) | 0.395251 / 0.323480 (0.071772) | 0.004823 / 0.007986 (-0.003163) | 0.002881 / 0.004328 (-0.001447) | 0.061987 / 0.004250 (0.057737) | 0.049950 / 0.037052 (0.012898) | 0.362442 / 0.258489 (0.103953) | 0.399321 / 0.293841 (0.105480) | 0.027616 / 0.128546 (-0.100930) | 0.007965 / 0.075646 (-0.067681) | 0.068584 / 0.419271 (-0.350687) | 0.044700 / 0.043533 (0.001168) | 0.361011 / 0.255139 (0.105872) | 0.386007 / 0.283200 (0.102807) | 0.024621 / 0.141683 (-0.117061) | 1.441497 / 1.452155 (-0.010657) | 1.533145 / 1.492716 (0.040429) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223446 / 0.018006 (0.205440) | 0.411147 / 0.000490 (0.410657) | 0.001821 / 0.000200 (0.001621) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025661 / 0.037411 (-0.011751) | 0.077838 / 0.014526 (0.063313) | 0.086148 / 0.176557 (-0.090408) | 0.140386 / 0.737135 (-0.596750) | 0.088793 / 0.296338 (-0.207546) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425209 / 0.215209 (0.210000) | 4.250723 / 2.077655 (2.173068) | 2.403437 / 1.504120 (0.899317) | 2.283584 / 1.541195 
(0.742390) | 2.326870 / 1.468490 (0.858380) | 0.504781 / 4.584777 (-4.079996) | 3.017042 / 3.745712 (-0.728670) | 4.643068 / 5.269862 (-0.626794) | 2.535710 / 4.565676 (-2.029967) | 0.058520 / 0.424275 (-0.365755) | 0.006766 / 0.007607 (-0.000841) | 0.500664 / 0.226044 (0.274620) | 5.017073 / 2.268929 (2.748145) | 2.668661 / 55.444624 (-52.775963) | 2.335486 / 6.876477 (-4.540991) | 2.486518 / 2.142072 (0.344445) | 0.598795 / 4.805227 (-4.206432) | 0.126395 / 6.500664 (-6.374269) | 0.063154 / 0.075469 (-0.012315) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.358059 / 1.841788 (-0.483728) | 18.615724 / 8.074308 (10.541416) | 13.670934 / 10.191392 (3.479542) | 0.134650 / 0.680424 (-0.545774) | 0.016941 / 0.534201 (-0.517260) | 0.335215 / 0.579283 (-0.244068) | 0.356118 / 0.434364 (-0.078246) | 0.393109 / 0.540337 (-0.147228) | 0.534165 / 1.386936 (-0.852771) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#da7d3b557665f34e84cd151ffe9d80b45a19fe33 \"CML watermark\")\n" ]
2023-07-26T10:44:23
2023-07-27T12:51:51
2023-07-27T12:42:57
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6072", "html_url": "https://github.com/huggingface/datasets/pull/6072", "diff_url": "https://github.com/huggingface/datasets/pull/6072.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6072.patch", "merged_at": "2023-07-27T12:42:57" }
Closes https://github.com/huggingface/datasets/issues/6071
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6072/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6072/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6071/comments
https://api.github.com/repos/huggingface/datasets/issues/6071/events
https://github.com/huggingface/datasets/issues/6071
1,821,990,749
I_kwDODunzps5smV9d
6,071
storage_options provided to load_dataset not fully piping through since datasets 2.14.0
{ "login": "exs-avianello", "id": 128361578, "node_id": "U_kgDOB6akag", "avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4", "gravatar_id": "", "url": "https://api.github.com/users/exs-avianello", "html_url": "https://github.com/exs-avianello", "followers_url": "https://api.github.com/users/exs-avianello/followers", "following_url": "https://api.github.com/users/exs-avianello/following{/other_user}", "gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}", "starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions", "organizations_url": "https://api.github.com/users/exs-avianello/orgs", "repos_url": "https://api.github.com/users/exs-avianello/repos", "events_url": "https://api.github.com/users/exs-avianello/events{/privacy}", "received_events_url": "https://api.github.com/users/exs-avianello/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting, I opened a PR to fix this\r\n\r\nWhat filesystem are you using ?", "Hi @lhoestq ! Thank you so much 🙌 \r\n\r\nIt's a bit of a custom setup, but in practice I am using a [pyarrow.fs.S3FileSystem](https://arrow.apache.org/docs/python/generated/pyarrow.fs.S3FileSystem.html) (wrapped in a `fsspec.implementations.arrow.ArrowFSWrapper` [to make it](https://arrow.apache.org/docs/python/filesystems.html#using-arrow-filesystems-with-fsspec) `fsspec` compatible). I also register it as an entrypoint with `fsspec` so that it's the one that gets automatically resolved when looking for filesystems for the `s3` protocol\r\n\r\nIn my case the `storage_option` that seemed not getting piped through was the filesystem's `endpoint_override` that I use in some tests to point at a mock S3 bucket" ]
2023-07-26T09:37:20
2023-07-27T12:42:58
2023-07-27T12:42:58
NONE
null
null
null
### Describe the bug Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate all the way through, leading to failures when loading data files that need those options to be set. I think this is because of the new `_prepare_path_and_storage_options()` (https://github.com/huggingface/datasets/pull/6028), which returns the right `storage_options` to use given a path and a `DownloadConfig`, but which might not take into account the extra `storage_options` explicitly provided e.g. through `load_dataset()`. ### Steps to reproduce the bug ```python import fsspec import pandas as pd import datasets # Generate mock parquet file data_files = "demo.parquet" pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}).to_parquet(data_files) _storage_options = {"x": 1, "y": 2} fs = fsspec.filesystem("file", **_storage_options) dataset = datasets.load_dataset( "parquet", data_files=data_files, storage_options=fs.storage_options ) ``` Looking at the `storage_options` resolved here: https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L331 they end up being `{}`, instead of propagating through the `storage_options` that were provided to `load_dataset` (`fs.storage_options`). As these then get used for the filesystem operation a few lines below https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L339 the call will fail if the user-provided `storage_options` were needed. --- A temporary workaround that seemed to work locally to bypass the problem was to bundle a duplicate of the `storage_options` into the `download_config`, so that they make their way all the way to `_prepare_path_and_storage_options()` and get extracted correctly: ```python dataset = datasets.load_dataset( "parquet", data_files=data_files, storage_options=fs.storage_options, download_config=datasets.DownloadConfig(storage_options={fs.protocol: fs.storage_options}), ) ``` ### Expected behavior `storage_options` provided to `load_dataset` take effect in all backend filesystem operations. ### Environment info datasets==2.14.0
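To make the suspected failure mode concrete, here is an illustrative sketch — not the actual `datasets` internals — of how a path-resolution helper could merge the per-protocol options derived from a `DownloadConfig` with the ones passed explicitly, so that the latter are not silently dropped. All names in this snippet are hypothetical:

```python
# Illustrative only; function and argument names are made up for this sketch.
def resolve_storage_options(protocol, download_config_options, user_options):
    """Merge per-protocol DownloadConfig options with explicitly passed ones."""
    merged = dict(download_config_options.get(protocol, {}))
    merged.update(user_options or {})  # explicitly passed options take precedence
    return merged

# Example: the endpoint override survives the merge instead of ending up as {}.
print(resolve_storage_options("s3", {"s3": {"anon": True}}, {"endpoint_override": "http://localhost:9000"}))
# -> {'anon': True, 'endpoint_override': 'http://localhost:9000'}
```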
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6071/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6070/comments
https://api.github.com/repos/huggingface/datasets/issues/6070/events
https://github.com/huggingface/datasets/pull/6070
1,820,836,330
PR_kwDODunzps5WXDLc
6,070
Fix Quickstart notebook link
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008473 / 0.011353 (-0.002880) | 0.004734 / 0.011008 (-0.006274) | 0.103895 / 0.038508 (0.065387) | 0.071838 / 0.023109 (0.048729) | 0.379949 / 0.275898 (0.104051) | 0.397375 / 0.323480 (0.073895) | 0.006695 / 0.007986 (-0.001290) | 0.004536 / 0.004328 (0.000207) | 0.076151 / 0.004250 (0.071901) | 0.058690 / 0.037052 (0.021638) | 0.379937 / 0.258489 (0.121448) | 0.411833 / 0.293841 (0.117992) | 0.046805 / 0.128546 (-0.081741) | 0.013689 / 0.075646 (-0.061958) | 0.327896 / 0.419271 (-0.091375) | 0.063873 / 0.043533 (0.020340) | 0.378451 / 0.255139 (0.123312) | 0.398725 / 0.283200 (0.115525) | 0.034961 / 0.141683 (-0.106722) | 1.604999 / 1.452155 (0.152845) | 1.748370 / 1.492716 (0.255654) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224634 / 0.018006 (0.206628) | 0.548468 / 0.000490 (0.547979) | 0.005049 / 0.000200 (0.004849) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028144 / 0.037411 (-0.009267) | 0.092184 / 0.014526 (0.077659) | 0.102987 / 0.176557 (-0.073570) | 0.176987 / 0.737135 (-0.560149) | 0.103093 / 0.296338 (-0.193246) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.578410 / 0.215209 (0.363201) | 5.664781 / 2.077655 (3.587126) | 2.487763 
/ 1.504120 (0.983643) | 2.254213 / 1.541195 (0.713018) | 2.239693 / 1.468490 (0.771202) | 0.810380 / 4.584777 (-3.774397) | 5.036540 / 3.745712 (1.290828) | 7.064695 / 5.269862 (1.794834) | 4.215101 / 4.565676 (-0.350575) | 0.089792 / 0.424275 (-0.334483) | 0.008487 / 0.007607 (0.000879) | 0.692292 / 0.226044 (0.466248) | 6.780226 / 2.268929 (4.511297) | 3.245510 / 55.444624 (-52.199114) | 2.575984 / 6.876477 (-4.300493) | 2.747546 / 2.142072 (0.605473) | 0.956604 / 4.805227 (-3.848623) | 0.198937 / 6.500664 (-6.301727) | 0.070849 / 0.075469 (-0.004620) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.536469 / 1.841788 (-0.305319) | 21.750583 / 8.074308 (13.676275) | 20.559532 / 10.191392 (10.368140) | 0.241244 / 0.680424 (-0.439180) | 0.030078 / 0.534201 (-0.504123) | 0.462204 / 0.579283 (-0.117079) | 0.600103 / 0.434364 (0.165739) | 0.535074 / 0.540337 (-0.005264) | 0.764427 / 1.386936 (-0.622509) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009712 / 0.011353 (-0.001641) | 0.005036 / 0.011008 (-0.005972) | 0.073683 / 0.038508 (0.035175) | 0.078684 / 0.023109 (0.055574) | 0.445096 / 0.275898 (0.169198) | 0.496233 / 0.323480 (0.172754) | 0.006231 / 0.007986 (-0.001755) | 0.004720 / 0.004328 (0.000392) | 0.076444 / 0.004250 (0.072194) | 0.060932 / 0.037052 (0.023880) | 0.505727 / 0.258489 (0.247238) | 0.498702 / 0.293841 (0.204861) | 0.047115 / 0.128546 (-0.081431) | 0.014028 / 0.075646 (-0.061618) | 0.099292 / 0.419271 (-0.319980) | 0.061571 / 0.043533 (0.018038) | 0.468435 / 0.255139 (0.213296) | 0.481747 / 0.283200 (0.198547) | 0.033962 / 0.141683 (-0.107721) | 1.665397 / 1.452155 (0.213242) | 1.830488 / 1.492716 (0.337772) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268217 / 0.018006 (0.250211) | 0.555123 / 0.000490 (0.554633) | 0.000451 / 0.000200 (0.000251) | 0.000156 / 0.000054 (0.000101) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034262 / 0.037411 (-0.003150) | 0.107807 / 0.014526 (0.093281) | 0.115631 / 0.176557 (-0.060926) | 0.175914 / 0.737135 (-0.561221) | 0.118775 / 0.296338 (-0.177564) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.583260 / 0.215209 (0.368051) | 5.934976 / 2.077655 (3.857321) | 2.752304 / 1.504120 (1.248184) | 2.382746 / 1.541195 (0.841551) | 2.389402 / 1.468490 (0.920912) | 0.794213 / 4.584777 (-3.790564) | 5.215269 / 3.745712 (1.469557) | 7.083595 / 5.269862 (1.813733) | 3.776136 / 4.565676 (-0.789540) | 0.091141 / 0.424275 (-0.333135) | 0.008803 / 0.007607 (0.001196) | 0.726510 / 0.226044 (0.500465) | 6.926860 / 2.268929 (4.657931) | 3.475612 / 55.444624 (-51.969012) | 2.730237 / 6.876477 (-4.146240) | 2.879145 / 2.142072 (0.737073) | 0.959956 / 4.805227 (-3.845271) | 0.189812 / 6.500664 (-6.310852) | 0.071624 / 0.075469 (-0.003845) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.748184 / 1.841788 (-0.093603) | 23.764520 / 8.074308 (15.690212) | 19.502461 / 10.191392 (9.311069) | 0.233987 / 0.680424 (-0.446437) | 0.028116 / 0.534201 (-0.506085) | 0.478838 / 0.579283 (-0.100445) | 0.560952 / 0.434364 (0.126588) | 0.529902 / 0.540337 (-0.010435) | 0.735095 / 1.386936 (-0.651841) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dda3e389212f44117a40b44bb0cdf358cfd9f71e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006735 / 0.011353 (-0.004618) | 0.004131 / 0.011008 (-0.006878) | 0.085619 / 0.038508 (0.047111) | 0.076973 / 0.023109 (0.053864) | 0.315175 / 0.275898 (0.039277) | 0.354703 / 0.323480 (0.031223) | 0.005409 / 0.007986 (-0.002577) | 0.003438 / 0.004328 (-0.000891) | 0.064773 / 0.004250 (0.060523) | 0.056117 / 0.037052 (0.019064) | 0.313825 / 0.258489 (0.055336) | 0.354654 / 0.293841 (0.060813) | 0.031384 / 0.128546 (-0.097163) | 0.008537 / 0.075646 (-0.067109) | 0.288528 / 0.419271 (-0.130744) | 0.053036 / 0.043533 (0.009504) | 0.312213 / 0.255139 (0.057074) | 0.335952 / 0.283200 (0.052752) | 0.023165 / 0.141683 (-0.118518) | 1.497559 / 1.452155 (0.045404) | 1.561949 / 1.492716 (0.069233) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212558 / 0.018006 (0.194552) | 0.456555 / 0.000490 (0.456065) | 0.000334 / 0.000200 (0.000134) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028571 / 0.037411 (-0.008840) | 0.085154 / 0.014526 (0.070628) | 0.095961 / 0.176557 (-0.080596) | 0.153041 / 0.737135 (-0.584094) | 0.099234 / 0.296338 (-0.197105) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.381796 / 0.215209 (0.166587) | 3.806948 / 2.077655 (1.729294) | 1.829597 / 1.504120 (0.325477) | 1.659065 / 1.541195 (0.117870) | 1.738524 / 1.468490 (0.270034) | 0.483379 / 4.584777 (-4.101398) | 3.540648 / 3.745712 (-0.205064) | 3.269188 / 5.269862 (-2.000673) | 2.042113 / 4.565676 (-2.523564) | 0.056905 / 0.424275 (-0.367370) | 0.007235 / 0.007607 (-0.000373) | 0.460581 / 0.226044 (0.234537) | 4.597451 / 2.268929 (2.328522) | 2.334284 / 55.444624 (-53.110340) | 1.960026 / 6.876477 (-4.916450) | 2.172118 / 2.142072 (0.030045) | 0.576758 / 4.805227 (-4.228470) | 0.131196 / 6.500664 (-6.369468) | 0.060053 / 0.075469 (-0.015417) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289466 / 1.841788 (-0.552322) | 19.713059 / 8.074308 (11.638750) | 14.292390 / 10.191392 (4.100998) | 0.146199 / 0.680424 (-0.534225) | 0.018123 / 0.534201 (-0.516078) | 0.392492 / 0.579283 (-0.186791) | 0.416544 / 0.434364 (-0.017820) | 0.457166 / 0.540337 
(-0.083171) | 0.645490 / 1.386936 (-0.741446) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006508 / 0.011353 (-0.004845) | 0.004010 / 0.011008 (-0.006998) | 0.065201 / 0.038508 (0.026693) | 0.076322 / 0.023109 (0.053213) | 0.364198 / 0.275898 (0.088300) | 0.398251 / 0.323480 (0.074771) | 0.005328 / 0.007986 (-0.002658) | 0.003298 / 0.004328 (-0.001031) | 0.064378 / 0.004250 (0.060128) | 0.056053 / 0.037052 (0.019000) | 0.365431 / 0.258489 (0.106942) | 0.402777 / 0.293841 (0.108936) | 0.031014 / 0.128546 (-0.097532) | 0.008507 / 0.075646 (-0.067140) | 0.071471 / 0.419271 (-0.347801) | 0.048300 / 0.043533 (0.004768) | 0.359700 / 0.255139 (0.104561) | 0.382244 / 0.283200 (0.099044) | 0.023783 / 0.141683 (-0.117900) | 1.517518 / 1.452155 (0.065363) | 1.569732 / 1.492716 (0.077015) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257447 / 0.018006 (0.239440) | 0.452598 / 0.000490 (0.452109) | 0.015187 / 0.000200 (0.014987) | 0.000164 / 0.000054 (0.000109) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030958 / 0.037411 (-0.006454) | 0.090066 / 0.014526 (0.075540) | 0.101120 / 0.176557 (-0.075437) | 0.154295 / 0.737135 (-0.582840) | 0.103582 / 0.296338 (-0.192756) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415945 / 0.215209 (0.200736) | 4.146464 / 2.077655 (2.068809) | 2.121414 / 1.504120 (0.617294) | 1.956885 / 1.541195 (0.415690) | 2.047955 
/ 1.468490 (0.579465) | 0.486334 / 4.584777 (-4.098443) | 3.506263 / 3.745712 (-0.239449) | 4.942274 / 5.269862 (-0.327587) | 2.907836 / 4.565676 (-1.657841) | 0.057344 / 0.424275 (-0.366931) | 0.007813 / 0.007607 (0.000206) | 0.497888 / 0.226044 (0.271844) | 4.978017 / 2.268929 (2.709089) | 2.600447 / 55.444624 (-52.844177) | 2.335050 / 6.876477 (-4.541427) | 2.480373 / 2.142072 (0.338301) | 0.597954 / 4.805227 (-4.207274) | 0.134794 / 6.500664 (-6.365870) | 0.062605 / 0.075469 (-0.012864) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.344390 / 1.841788 (-0.497398) | 20.020067 / 8.074308 (11.945759) | 14.344626 / 10.191392 (4.153234) | 0.172101 / 0.680424 (-0.508322) | 0.018549 / 0.534201 (-0.515652) | 0.393589 / 0.579283 (-0.185694) | 0.438401 / 0.434364 (0.004037) | 0.463800 / 0.540337 (-0.076537) | 0.618269 / 1.386936 (-0.768667) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b0177910b32712f28d147879395e511207e39958 \"CML watermark\")\n" ]
2023-07-25T17:48:37
2023-07-25T18:19:01
2023-07-25T18:10:16
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6070", "html_url": "https://github.com/huggingface/datasets/pull/6070", "diff_url": "https://github.com/huggingface/datasets/pull/6070.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6070.patch", "merged_at": "2023-07-25T18:10:16" }
Reported in https://github.com/huggingface/datasets/pull/5902#issuecomment-1649885621 (cc @alvarobartt)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6070/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6070/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6069
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6069/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6069/comments
https://api.github.com/repos/huggingface/datasets/issues/6069/events
https://github.com/huggingface/datasets/issues/6069
1,820,831,535
I_kwDODunzps5sh68v
6,069
KeyError: dataset has no key "image"
{ "login": "etetteh", "id": 28512232, "node_id": "MDQ6VXNlcjI4NTEyMjMy", "avatar_url": "https://avatars.githubusercontent.com/u/28512232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/etetteh", "html_url": "https://github.com/etetteh", "followers_url": "https://api.github.com/users/etetteh/followers", "following_url": "https://api.github.com/users/etetteh/following{/other_user}", "gists_url": "https://api.github.com/users/etetteh/gists{/gist_id}", "starred_url": "https://api.github.com/users/etetteh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/etetteh/subscriptions", "organizations_url": "https://api.github.com/users/etetteh/orgs", "repos_url": "https://api.github.com/users/etetteh/repos", "events_url": "https://api.github.com/users/etetteh/events{/privacy}", "received_events_url": "https://api.github.com/users/etetteh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "You can list the dataset's columns with `ds.column_names` before `.map` to check whether the dataset has an `image` column. If it doesn't, then this is a bug. Otherwise, please paste the line with the `.map` call.\r\n\r\n\r\n", "This is the piece of code I am running:\r\n```\r\ndata_transforms = utils.get_data_augmentation(args)\r\nimage_dataset = utils.load_image_dataset(args.dataset)\r\n\r\ndef resize(examples):\r\n examples[\"pixel_values\"] = [image.convert(\"RGB\").resize((300, 300)) for image in examples[\"image\"]]\r\n return examples\r\n\r\ndef preprocess_train(example_batch):\r\n print(f\"Example batch: \\n{example_batch}\")\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"train\"](image.convert(\"RGB\")) for image in example_batch[\"pixel_values\"]\r\n ]\r\n return example_batch\r\n\r\ndef preprocess_val(example_batch):\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"val\"](image.convert(\"RGB\")) for image in example_batch[\"pixel_values\"]\r\n ]\r\n return example_batch\r\n\r\nimage_dataset = image_dataset.map(resize, remove_columns=[\"image\"], batched=True)\r\n\r\nimage_dataset[\"train\"].set_transform(preprocess_train)\r\nimage_dataset[\"validation\"].set_transform(preprocess_val)\r\n```\r\n\r\nWhen I print ds.column_names I get the following\r\n`{'train': ['image', 'label'], 'validation': ['image', 'label'], 'test': ['image', 'label']}`\r\n\r\nThe `print(f\"Example batch: \\n{example_batch}\")` in the `preprocess_train` function outputs only labels without images:\r\n```\r\nExample batch: \r\n{'label': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]}\r\n```\r\n\r\nThe weird part of it all is that a sample code runs in a jupyter lab notebook without any bugs, but when I run my scripts from the terminal I get the bug. The same code.", "The `remove_columns=[\"image\"]` argument in the `.map` call removes the `image` column from the output, so drop this argument to preserve it.", "The problem is not with the removal of the image key. The bug is why only the labels are sent to be process, instead of all the featues or dictionary keys.\r\n\r\nP.S. I just dropped the removal argument as you've suggested, but that didn't solve the problem, because only the labels are being sent to be processed", "All the `image_dataset.column_names` after the `map` call should also be present in `preprocess_train `/`preprocess_val` unless (input) `columns` in `set_transform` are specified.\r\n\r\nIf that's not the case, we need a full reproducer (not snippets) with the environment info.", "I have resolved the error after including a collate function as indicated in the Quick Start session of the Datasets docs.:\r\n\r\nHere is what I did:\r\n```\r\ndata_transforms = utils.get_data_augmentation(args)\r\nimage_dataset = utils.load_image_dataset(args.dataset)\r\n\r\ndef preprocess_train(example_batch):\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"train\"](image.convert(\"RGB\")) for image in example_batch[\"image\"]\r\n ]\r\n return example_batch\r\n\r\ndef preprocess_val(example_batch):\r\n example_batch[\"pixel_values\"] = [\r\n data_transforms[\"val\"](image.convert(\"RGB\")) for image in example_batch[\"image\"]\r\n ]\r\n return example_batch\r\n\r\ndef collate_fn(examples):\r\n images = []\r\n labels = []\r\n for example in examples:\r\n images.append((example[\"pixel_values\"]))\r\n labels.append(example[\"label\"])\r\n\r\n pixel_values = torch.stack(images)\r\n labels = torch.tensor(labels)\r\n return {\"pixel_values\": pixel_values, \"label\": labels}\r\n\r\ntrain_dataset = image_dataset[\"train\"].with_transform(preprocess_train)\r\nval_dataset = image_dataset[\"validation\"].with_transform(preprocess_val)\r\n\r\nimage_datasets = {\r\n \"train\": train_dataset,\r\n \"val\": val_dataset\r\n}\r\n\r\nsamplers = {\r\n \"train\": data.RandomSampler(train_dataset),\r\n \"val\": data.SequentialSampler(val_dataset),\r\n}\r\n\r\ndataloaders = {\r\n x: data.DataLoader(\r\n image_datasets[x],\r\n collate_fn=collate_fn,\r\n batch_size=batch_size,\r\n sampler=samplers[x],\r\n num_workers=args.num_workers,\r\n worker_init_fn=utils.set_seed_for_worker,\r\n generator=g,\r\n pin_memory=True,\r\n )\r\n for x in [\"train\", \"val\"]\r\n}\r\n\r\ntrain_loader, val_loader = dataloaders[\"train\"], dataloaders[\"val\"]\r\n```\r\nEverything runs fine without any bug now. " ]
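To make the resolution in the thread above concrete: `Dataset.set_transform` applies the transform in place, while `Dataset.with_transform` returns a new transformed dataset, and the working version keeps the original `image` column instead of resizing it away in a prior `.map` call. Below is a minimal sketch of that pattern; the 300×300 resize, the placeholder `path-to-data` directory, and the use of torchvision in place of the reporter's `utils.get_data_augmentation` helper are all assumptions for illustration, not the reporter's exact code.

```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms

# Placeholder path: stands in for the reporter's local imagefolder dataset.
ds = load_dataset("imagefolder", data_dir="path-to-data")

# Assumed stand-in for the reporter's data_transforms["train"] pipeline.
train_tf = transforms.Compose([transforms.Resize((300, 300)), transforms.ToTensor()])

def preprocess_train(batch):
    # Derive pixel_values on the fly; the original "image" column stays intact.
    batch["pixel_values"] = [train_tf(img.convert("RGB")) for img in batch["image"]]
    return batch

# with_transform returns a new dataset; set_transform would mutate in place.
train_ds = ds["train"].with_transform(preprocess_train)

def collate_fn(examples):
    # Stack per-example tensors into the batched dict the training loop expects.
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "label": torch.tensor([ex["label"] for ex in examples]),
    }

loader = DataLoader(train_ds, batch_size=32, collate_fn=collate_fn)
```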
2023-07-25T17:45:50
2023-07-27T12:42:17
2023-07-27T12:42:17
NONE
null
null
null
### Describe the bug I've loaded a local image dataset with: `ds = load_dataset("imagefolder", data_dir=path-to-data)` And defined a transform to process the data, following the Datasets docs. However, I get a KeyError, indicating there's no "image" key in my dataset. When I printed out the example_batch sent to the transformation function, it shows only the labels are being sent to the function. For some reason, the images are not in the example batches. ### Steps to reproduce the bug I'm using the latest stable version of datasets ### Expected behavior I expect the example_batches to contain both images and labels ### Environment info I'm using the latest stable version of datasets
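As the first comment in the thread suggests, checking the column names right after loading is a quick way to confirm whether the `image` column is present before any `.map` call removes it. A minimal sketch, reusing the reporter's `path-to-data` placeholder:

```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="path-to-data")  # placeholder path

# Confirm every split still exposes the image column before mapping/transforming.
print(ds.column_names)
# e.g. {'train': ['image', 'label'], 'validation': ['image', 'label'], 'test': ['image', 'label']}
```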
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6069/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6068
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6068/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6068/comments
https://api.github.com/repos/huggingface/datasets/issues/6068/events
https://github.com/huggingface/datasets/pull/6068
1,820,106,952
PR_kwDODunzps5WUkZi
6,068
fix tqdm lock deletion
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006573 / 0.011353 (-0.004780) | 0.004014 / 0.011008 (-0.006994) | 0.084999 / 0.038508 (0.046491) | 0.074965 / 0.023109 (0.051855) | 0.313294 / 0.275898 (0.037396) | 0.349678 / 0.323480 (0.026198) | 0.005401 / 0.007986 (-0.002585) | 0.003401 / 0.004328 (-0.000927) | 0.065363 / 0.004250 (0.061112) | 0.057159 / 0.037052 (0.020107) | 0.313260 / 0.258489 (0.054771) | 0.354654 / 0.293841 (0.060813) | 0.030895 / 0.128546 (-0.097651) | 0.008605 / 0.075646 (-0.067042) | 0.289190 / 0.419271 (-0.130081) | 0.052474 / 0.043533 (0.008942) | 0.316193 / 0.255139 (0.061054) | 0.339966 / 0.283200 (0.056767) | 0.024112 / 0.141683 (-0.117571) | 1.515606 / 1.452155 (0.063452) | 1.571428 / 1.492716 (0.078711) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203284 / 0.018006 (0.185278) | 0.452720 / 0.000490 (0.452230) | 0.003891 / 0.000200 (0.003691) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028992 / 0.037411 (-0.008419) | 0.083170 / 0.014526 (0.068644) | 0.097739 / 0.176557 (-0.078817) | 0.153401 / 0.737135 (-0.583734) | 0.098628 / 0.296338 (-0.197711) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390190 / 0.215209 (0.174981) | 3.901272 / 2.077655 (1.823617) | 
1.887194 / 1.504120 (0.383074) | 1.723696 / 1.541195 (0.182501) | 1.800537 / 1.468490 (0.332047) | 0.481758 / 4.584777 (-4.103019) | 3.605098 / 3.745712 (-0.140614) | 3.304482 / 5.269862 (-1.965380) | 2.053515 / 4.565676 (-2.512161) | 0.056997 / 0.424275 (-0.367278) | 0.007347 / 0.007607 (-0.000260) | 0.461367 / 0.226044 (0.235323) | 4.606152 / 2.268929 (2.337223) | 2.470048 / 55.444624 (-52.974576) | 2.060019 / 6.876477 (-4.816458) | 2.320507 / 2.142072 (0.178435) | 0.575050 / 4.805227 (-4.230178) | 0.133030 / 6.500664 (-6.367634) | 0.061508 / 0.075469 (-0.013962) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.275430 / 1.841788 (-0.566357) | 19.725453 / 8.074308 (11.651145) | 14.396360 / 10.191392 (4.204968) | 0.157980 / 0.680424 (-0.522443) | 0.018516 / 0.534201 (-0.515685) | 0.394717 / 0.579283 (-0.184566) | 0.404948 / 0.434364 (-0.029415) | 0.474001 / 0.540337 (-0.066336) | 0.668463 / 1.386936 (-0.718474) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006697 / 0.011353 (-0.004656) | 0.004206 / 0.011008 (-0.006802) | 0.065458 / 0.038508 (0.026950) | 0.075845 / 0.023109 (0.052735) | 0.365051 / 0.275898 (0.089153) | 0.400919 / 0.323480 (0.077439) | 0.005347 / 0.007986 (-0.002638) | 0.003386 / 0.004328 (-0.000943) | 0.065398 / 0.004250 (0.061148) | 0.057320 / 0.037052 (0.020268) | 0.379161 / 0.258489 (0.120672) | 0.406892 / 0.293841 (0.113051) | 0.031986 / 0.128546 (-0.096560) | 0.008674 / 0.075646 (-0.066972) | 0.071723 / 0.419271 (-0.347549) | 0.049897 / 0.043533 (0.006364) | 0.372034 / 0.255139 (0.116895) | 0.394293 / 0.283200 (0.111094) | 0.023681 / 0.141683 (-0.118002) | 1.479793 / 1.452155 (0.027639) | 1.553105 / 1.492716 (0.060389) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233660 / 0.018006 (0.215654) | 0.454412 / 0.000490 (0.453923) | 0.004473 / 0.000200 (0.004273) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031115 / 0.037411 (-0.006296) | 0.090541 / 0.014526 (0.076015) | 0.104363 / 0.176557 (-0.072193) | 0.161022 / 0.737135 (-0.576114) | 0.105114 / 0.296338 (-0.191225) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427668 / 0.215209 (0.212459) | 4.263145 / 2.077655 (2.185490) | 2.247043 / 1.504120 (0.742923) | 2.082554 / 1.541195 (0.541360) | 2.170505 / 1.468490 (0.702015) | 0.491802 / 4.584777 (-4.092975) | 3.587295 / 3.745712 (-0.158417) | 3.344697 / 5.269862 (-1.925165) | 2.060529 / 4.565676 (-2.505148) | 0.057829 / 0.424275 (-0.366446) | 0.007780 / 0.007607 (0.000173) | 0.503374 / 0.226044 (0.277330) | 5.034742 / 2.268929 (2.765814) | 2.701957 / 55.444624 (-52.742667) | 2.479002 / 6.876477 (-4.397474) | 2.622055 / 2.142072 (0.479982) | 0.591363 / 4.805227 (-4.213864) | 0.133834 / 6.500664 (-6.366830) | 0.062276 / 0.075469 (-0.013193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.338788 / 1.841788 (-0.503000) | 20.333599 / 8.074308 (12.259291) | 14.783196 / 10.191392 (4.591804) | 0.168695 / 0.680424 (-0.511729) | 0.018478 / 0.534201 (-0.515723) | 0.397398 / 0.579283 (-0.181885) | 0.409900 / 0.434364 (-0.024464) | 0.475315 / 0.540337 (-0.065023) | 0.644267 / 1.386936 (-0.742669) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cb0b324e0bae4c93bb5509b2f0731bc346adb21b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007315 / 0.011353 (-0.004038) | 0.004294 / 0.011008 (-0.006714) | 0.100300 / 0.038508 (0.061792) | 0.077780 / 0.023109 (0.054670) | 0.353728 / 0.275898 (0.077830) | 0.400538 / 0.323480 (0.077058) | 0.005807 / 0.007986 (-0.002178) | 0.003649 / 0.004328 (-0.000680) | 0.077548 / 0.004250 (0.073297) | 0.058834 / 0.037052 (0.021781) | 0.352064 / 0.258489 (0.093574) | 0.399951 / 0.293841 (0.106110) | 0.036472 / 0.128546 (-0.092074) | 0.008653 / 0.075646 (-0.066994) | 0.323089 / 0.419271 (-0.096182) | 0.075127 / 0.043533 (0.031594) | 0.334412 / 0.255139 (0.079273) | 0.375718 / 0.283200 (0.092519) | 0.027915 / 0.141683 (-0.113768) | 1.698795 / 1.452155 (0.246640) | 1.781447 / 1.492716 (0.288730) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216111 / 0.018006 (0.198104) | 0.507706 / 0.000490 (0.507216) | 0.000851 / 0.000200 (0.000651) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030451 / 0.037411 (-0.006960) | 0.087488 / 0.014526 (0.072962) | 0.105094 / 0.176557 (-0.071462) | 0.168130 / 0.737135 (-0.569006) | 0.106791 / 0.296338 (-0.189547) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426291 / 0.215209 (0.211082) | 4.281046 / 2.077655 (2.203391) | 2.162268 / 1.504120 (0.658148) | 1.909503 / 1.541195 (0.368309) | 1.943165 / 1.468490 (0.474675) | 0.516667 / 4.584777 (-4.068110) | 4.113218 / 3.745712 (0.367506) | 5.931372 / 5.269862 (0.661510) | 3.563521 / 4.565676 (-1.002155) | 0.062415 / 0.424275 (-0.361860) | 0.007577 / 0.007607 (-0.000030) | 0.534588 / 0.226044 (0.308543) | 5.183490 / 2.268929 (2.914561) | 2.790662 / 55.444624 (-52.653962) | 2.258630 / 6.876477 (-4.617846) | 2.499930 / 2.142072 (0.357857) | 0.606154 / 4.805227 (-4.199073) | 0.136093 / 6.500664 (-6.364571) | 0.061151 / 0.075469 (-0.014318) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.398392 / 1.841788 (-0.443396) | 21.482150 / 8.074308 (13.407842) | 15.477336 / 10.191392 (5.285944) | 0.192878 / 0.680424 (-0.487546) | 0.021764 / 0.534201 (-0.512437) | 0.437149 / 0.579283 (-0.142134) | 0.439976 / 0.434364 (0.005612) | 0.514498 / 0.540337 
(-0.025840) | 0.762642 / 1.386936 (-0.624294) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007504 / 0.011353 (-0.003849) | 0.004526 / 0.011008 (-0.006482) | 0.071008 / 0.038508 (0.032500) | 0.078305 / 0.023109 (0.055195) | 0.436160 / 0.275898 (0.160262) | 0.439048 / 0.323480 (0.115568) | 0.006061 / 0.007986 (-0.001925) | 0.003681 / 0.004328 (-0.000648) | 0.069445 / 0.004250 (0.065195) | 0.059258 / 0.037052 (0.022206) | 0.437745 / 0.258489 (0.179256) | 0.464247 / 0.293841 (0.170406) | 0.033286 / 0.128546 (-0.095260) | 0.009846 / 0.075646 (-0.065800) | 0.076330 / 0.419271 (-0.342941) | 0.051919 / 0.043533 (0.008386) | 0.432817 / 0.255139 (0.177678) | 0.426295 / 0.283200 (0.143095) | 0.029818 / 0.141683 (-0.111865) | 1.747640 / 1.452155 (0.295485) | 1.726653 / 1.492716 (0.233937) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.251253 / 0.018006 (0.233247) | 0.483394 / 0.000490 (0.482904) | 0.003992 / 0.000200 (0.003793) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032180 / 0.037411 (-0.005231) | 0.095425 / 0.014526 (0.080900) | 0.105908 / 0.176557 (-0.070648) | 0.164732 / 0.737135 (-0.572403) | 0.115903 / 0.296338 (-0.180435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.469467 / 0.215209 (0.254258) | 4.633239 / 2.077655 (2.555584) | 2.517557 / 1.504120 (1.013437) | 2.352726 / 1.541195 (0.811531) | 2.314618 
/ 1.468490 (0.846128) | 0.548446 / 4.584777 (-4.036331) | 3.908797 / 3.745712 (0.163085) | 3.525941 / 5.269862 (-1.743921) | 2.178858 / 4.565676 (-2.386819) | 0.057614 / 0.424275 (-0.366661) | 0.008604 / 0.007607 (0.000997) | 0.554756 / 0.226044 (0.328711) | 5.325635 / 2.268929 (3.056706) | 3.014266 / 55.444624 (-52.430359) | 2.844165 / 6.876477 (-4.032312) | 2.903019 / 2.142072 (0.760947) | 0.617750 / 4.805227 (-4.187478) | 0.144259 / 6.500664 (-6.356405) | 0.065944 / 0.075469 (-0.009525) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.504625 / 1.841788 (-0.337163) | 22.400787 / 8.074308 (14.326479) | 15.223702 / 10.191392 (5.032310) | 0.213357 / 0.680424 (-0.467067) | 0.019310 / 0.534201 (-0.514891) | 0.456596 / 0.579283 (-0.122687) | 0.473811 / 0.434364 (0.039447) | 0.517800 / 0.540337 (-0.022537) | 0.792468 / 1.386936 (-0.594468) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#03750f4a4c664125c7de910be004710b181dd354 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007420 / 0.011353 (-0.003933) | 0.004502 / 0.011008 (-0.006506) | 0.097882 / 0.038508 (0.059374) | 0.079084 / 0.023109 (0.055975) | 0.361797 / 0.275898 (0.085899) | 0.416563 / 0.323480 (0.093083) | 0.006106 / 0.007986 (-0.001879) | 0.003803 / 0.004328 (-0.000526) | 0.074669 / 0.004250 (0.070418) | 0.062168 / 0.037052 (0.025116) | 0.378844 / 0.258489 (0.120355) | 0.426601 / 0.293841 (0.132760) | 0.035619 / 0.128546 (-0.092927) | 0.009686 / 0.075646 (-0.065960) | 0.336481 / 0.419271 (-0.082790) | 0.065553 / 0.043533 (0.022021) | 0.362501 / 0.255139 (0.107362) | 0.399752 / 0.283200 (0.116552) | 0.028685 / 0.141683 (-0.112998) | 1.683495 / 1.452155 (0.231340) | 1.786105 / 1.492716 (0.293388) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220792 / 0.018006 (0.202786) | 0.501936 / 0.000490 (0.501447) | 0.000389 / 
0.000200 (0.000189) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032180 / 0.037411 (-0.005232) | 0.093079 / 0.014526 (0.078553) | 0.107967 / 0.176557 (-0.068589) | 0.171747 / 0.737135 (-0.565389) | 0.107920 / 0.296338 (-0.188418) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444431 / 0.215209 (0.229222) | 4.454934 / 2.077655 (2.377279) | 2.140265 / 1.504120 (0.636145) | 1.960126 / 1.541195 (0.418931) | 2.049649 / 1.468490 (0.581158) | 0.557861 / 4.584777 (-4.026916) | 4.046240 / 3.745712 (0.300528) | 4.513748 / 5.269862 (-0.756114) | 2.593643 / 4.565676 (-1.972034) | 0.066795 / 0.424275 (-0.357480) | 0.008302 / 0.007607 (0.000694) | 0.535643 / 0.226044 (0.309599) | 5.299429 / 2.268929 (3.030500) | 2.656019 / 55.444624 (-52.788606) | 2.281214 / 6.876477 (-4.595263) | 2.302910 / 2.142072 (0.160837) | 0.661696 / 4.805227 (-4.143532) | 0.149787 / 6.500664 (-6.350877) | 0.069609 / 0.075469 (-0.005860) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.509842 / 1.841788 (-0.331946) | 21.717504 / 8.074308 (13.643196) | 15.825102 / 10.191392 (5.633710) | 0.168115 / 0.680424 (-0.512309) | 0.021637 / 0.534201 (-0.512564) | 0.454270 / 0.579283 (-0.125013) | 0.458531 / 0.434364 (0.024167) | 0.523052 / 0.540337 (-0.017285) | 0.711219 / 1.386936 (-0.675717) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007189 / 0.011353 (-0.004164) | 0.004437 / 0.011008 (-0.006571) | 0.075111 / 0.038508 (0.036603) | 0.079245 / 0.023109 (0.056136) | 0.423169 / 0.275898 (0.147270) | 0.455007 / 0.323480 (0.131527) | 0.006076 / 0.007986 (-0.001909) | 0.003819 / 0.004328 (-0.000509) | 0.074976 / 0.004250 (0.070726) | 0.062127 / 0.037052 (0.025075) | 0.456809 / 0.258489 (0.198320) | 0.474707 / 0.293841 (0.180867) | 0.036221 / 0.128546 (-0.092325) | 0.009428 / 0.075646 (-0.066218) | 0.082842 / 0.419271 (-0.336429) | 0.057086 / 0.043533 (0.013553) | 0.436121 / 0.255139 (0.180982) | 0.453934 / 0.283200 (0.170734) | 0.026045 / 0.141683 (-0.115638) | 1.789782 / 1.452155 (0.337627) | 1.820934 / 1.492716 (0.328218) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230790 / 0.018006 (0.212784) | 0.497987 / 0.000490 (0.497497) | 0.002775 / 0.000200 (0.002575) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034418 / 0.037411 (-0.002994) | 0.105567 / 0.014526 (0.091041) | 0.113134 / 0.176557 (-0.063423) | 0.173742 / 0.737135 (-0.563394) | 0.115936 / 0.296338 (-0.180403) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.502259 / 0.215209 (0.287050) | 4.969877 / 2.077655 (2.892222) | 2.684860 / 1.504120 (1.180740) | 2.484386 / 1.541195 (0.943192) | 2.543061 / 1.468490 (1.074571) | 0.545733 / 4.584777 (-4.039044) | 4.029660 / 3.745712 (0.283948) | 5.927883 / 5.269862 (0.658021) | 3.528372 / 4.565676 (-1.037305) | 0.065957 / 0.424275 (-0.358318) | 0.008933 / 0.007607 (0.001326) | 0.601630 / 0.226044 (0.375585) | 5.825872 / 2.268929 (3.556944) | 3.230721 / 55.444624 (-52.213904) | 2.891308 / 6.876477 (-3.985169) | 3.054994 / 2.142072 (0.912922) | 0.665480 / 4.805227 (-4.139747) | 0.154815 / 6.500664 (-6.345849) | 0.072997 / 0.075469 (-0.002472) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.549892 / 1.841788 (-0.291896) | 22.337484 / 8.074308 (14.263176) | 16.308286 / 10.191392 (6.116894) | 0.189594 / 0.680424 (-0.490830) | 0.021844 / 0.534201 (-0.512357) | 0.456958 / 0.579283 (-0.122325) | 0.459957 / 0.434364 (0.025593) | 0.529014 / 0.540337 (-0.011323) | 0.700359 / 1.386936 (-0.686577) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#32e4df86b5fb0bc164433ce615af641ec3ba437e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009050 / 0.011353 (-0.002303) | 0.004968 / 0.011008 (-0.006040) | 0.114315 / 0.038508 (0.075807) | 0.084475 / 0.023109 (0.061366) | 0.426325 / 0.275898 (0.150427) | 0.457870 / 0.323480 (0.134390) | 0.007076 / 0.007986 (-0.000910) | 0.004635 / 0.004328 (0.000307) | 0.082950 / 0.004250 (0.078700) | 0.065414 / 0.037052 (0.028361) | 0.441936 / 0.258489 (0.183447) | 0.476983 / 0.293841 (0.183142) | 0.048575 / 0.128546 (-0.079972) | 0.013929 / 0.075646 (-0.061717) | 0.377498 / 0.419271 (-0.041774) | 0.081503 / 0.043533 (0.037970) | 0.426706 / 0.255139 (0.171567) | 0.460374 / 0.283200 (0.177175) | 0.046052 / 0.141683 (-0.095631) | 1.894896 / 1.452155 (0.442741) | 1.998639 / 1.492716 (0.505923) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313267 / 0.018006 (0.295261) | 0.607501 / 0.000490 (0.607012) | 0.003369 / 0.000200 (0.003169) | 0.000102 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032266 / 0.037411 (-0.005145) | 0.120138 / 0.014526 (0.105613) | 0.115044 / 0.176557 (-0.061513) | 0.181374 / 0.737135 (-0.555761) | 0.114681 / 0.296338 (-0.181657) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.648039 / 0.215209 
(0.432830) | 6.005048 / 2.077655 (3.927394) | 2.674524 / 1.504120 (1.170404) | 2.284831 / 1.541195 (0.743637) | 2.360150 / 1.468490 (0.891660) | 0.888021 / 4.584777 (-3.696756) | 5.419840 / 3.745712 (1.674128) | 4.825816 / 5.269862 (-0.444046) | 3.140876 / 4.565676 (-1.424801) | 0.099511 / 0.424275 (-0.324764) | 0.009176 / 0.007607 (0.001569) | 0.735646 / 0.226044 (0.509602) | 7.224026 / 2.268929 (4.955097) | 3.551146 / 55.444624 (-51.893478) | 2.844374 / 6.876477 (-4.032103) | 3.145307 / 2.142072 (1.003235) | 1.077636 / 4.805227 (-3.727591) | 0.217754 / 6.500664 (-6.282910) | 0.081755 / 0.075469 (0.006286) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.670956 / 1.841788 (-0.170831) | 25.524961 / 8.074308 (17.450653) | 23.061596 / 10.191392 (12.870204) | 0.247524 / 0.680424 (-0.432899) | 0.031712 / 0.534201 (-0.502489) | 0.513049 / 0.579283 (-0.066234) | 0.614568 / 0.434364 (0.180204) | 0.574669 / 0.540337 (0.034331) | 0.816621 / 1.386936 (-0.570315) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009384 / 0.011353 (-0.001969) | 0.004959 / 0.011008 (-0.006049) | 0.084782 / 0.038508 (0.046274) | 0.098086 / 0.023109 (0.074977) | 0.544395 / 0.275898 (0.268497) | 0.585157 / 0.323480 (0.261677) | 0.006507 / 0.007986 (-0.001479) | 0.004151 / 0.004328 (-0.000178) | 0.088596 / 0.004250 (0.084345) | 0.069149 / 0.037052 (0.032097) | 0.533109 / 0.258489 (0.274620) | 0.604117 / 0.293841 (0.310276) | 0.047685 / 0.128546 (-0.080861) | 0.013651 / 0.075646 (-0.061996) | 0.096566 / 0.419271 (-0.322705) | 0.062022 / 0.043533 (0.018489) | 0.561897 / 0.255139 (0.306758) | 0.617636 / 0.283200 (0.334436) | 0.034636 / 0.141683 (-0.107047) | 1.854667 / 1.452155 (0.402512) | 1.908923 / 1.492716 (0.416207) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260633 / 0.018006 (0.242627) | 0.622268 / 0.000490 (0.621778) | 0.002116 / 0.000200 (0.001916) | 0.000101 / 0.000054 
(0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035161 / 0.037411 (-0.002250) | 0.103707 / 0.014526 (0.089181) | 0.115467 / 0.176557 (-0.061090) | 0.180077 / 0.737135 (-0.557059) | 0.118871 / 0.296338 (-0.177467) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.628481 / 0.215209 (0.413271) | 6.304929 / 2.077655 (4.227275) | 3.027775 / 1.504120 (1.523655) | 2.753880 / 1.541195 (1.212686) | 2.820442 / 1.468490 (1.351952) | 0.851103 / 4.584777 (-3.733674) | 5.427383 / 3.745712 (1.681670) | 7.434310 / 5.269862 (2.164449) | 4.418790 / 4.565676 (-0.146887) | 0.101733 / 0.424275 (-0.322542) | 0.009701 / 0.007607 (0.002094) | 0.763033 / 0.226044 (0.536989) | 7.497927 / 2.268929 (5.228998) | 3.735335 / 55.444624 (-51.709290) | 3.149200 / 6.876477 (-3.727277) | 3.306214 / 2.142072 (1.164141) | 1.085440 / 4.805227 (-3.719787) | 0.207562 / 6.500664 (-6.293102) | 0.078091 / 0.075469 (0.002622) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.820097 / 1.841788 (-0.021691) | 25.525539 / 8.074308 (17.451231) | 21.874219 / 10.191392 (11.682827) | 0.228391 / 0.680424 (-0.452033) | 0.029584 / 0.534201 (-0.504617) | 0.511546 / 0.579283 (-0.067737) | 0.602719 / 0.434364 (0.168355) | 0.581874 / 0.540337 (0.041537) | 0.802861 / 1.386936 (-0.584075) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6063ea2069c8b5641b983ba2c1d39b60afe7c00a \"CML watermark\")\n" ]
2023-07-25T11:17:25
2023-07-25T15:29:39
2023-07-25T15:17:50
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6068", "html_url": "https://github.com/huggingface/datasets/pull/6068", "diff_url": "https://github.com/huggingface/datasets/pull/6068.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6068.patch", "merged_at": "2023-07-25T15:17:50" }
related to https://github.com/huggingface/datasets/issues/6066
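For context on what a "tqdm lock" fix typically touches: tqdm keeps a shared, lazily created class-level lock (`tqdm.tqdm._lock`, an internal attribute) that coordinates progress-bar writes across threads and processes, and deleting it when it was never created, or was already deleted, raises `AttributeError`. A defensive sketch of that failure mode and guard follows; this is an illustration only, not this PR's actual diff.

```python
import tqdm

def delete_tqdm_lock() -> None:
    # _lock is a tqdm internal, created lazily by tqdm.get_lock(); a bare
    # `del tqdm.tqdm._lock` raises AttributeError if it does not exist yet.
    try:
        del tqdm.tqdm._lock
    except AttributeError:
        pass
```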
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6068/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6067/comments
https://api.github.com/repos/huggingface/datasets/issues/6067/events
https://github.com/huggingface/datasets/pull/6067
1,819,919,025
PR_kwDODunzps5WT7EQ
6,067
fix tqdm lock
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006578 / 0.011353 (-0.004775) | 0.003953 / 0.011008 (-0.007055) | 0.084417 / 0.038508 (0.045908) | 0.076729 / 0.023109 (0.053620) | 0.315369 / 0.275898 (0.039471) | 0.347012 / 0.323480 (0.023533) | 0.005299 / 0.007986 (-0.002686) | 0.003321 / 0.004328 (-0.001007) | 0.063954 / 0.004250 (0.059704) | 0.055810 / 0.037052 (0.018758) | 0.317651 / 0.258489 (0.059162) | 0.352603 / 0.293841 (0.058762) | 0.031355 / 0.128546 (-0.097192) | 0.008493 / 0.075646 (-0.067153) | 0.287295 / 0.419271 (-0.131977) | 0.052716 / 0.043533 (0.009183) | 0.316410 / 0.255139 (0.061271) | 0.328893 / 0.283200 (0.045693) | 0.024005 / 0.141683 (-0.117678) | 1.520333 / 1.452155 (0.068178) | 1.601268 / 1.492716 (0.108552) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205144 / 0.018006 (0.187138) | 0.459160 / 0.000490 (0.458670) | 0.000321 / 0.000200 (0.000121) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027503 / 0.037411 (-0.009908) | 0.081476 / 0.014526 (0.066950) | 0.096759 / 0.176557 (-0.079798) | 0.157888 / 0.737135 (-0.579247) | 0.094592 / 0.296338 (-0.201746) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384762 / 0.215209 (0.169553) | 3.843503 / 2.077655 (1.765849) | 
1.921685 / 1.504120 (0.417565) | 1.752441 / 1.541195 (0.211246) | 1.822105 / 1.468490 (0.353615) | 0.480243 / 4.584777 (-4.104534) | 3.577220 / 3.745712 (-0.168492) | 5.047560 / 5.269862 (-0.222302) | 2.988008 / 4.565676 (-1.577669) | 0.056430 / 0.424275 (-0.367845) | 0.007180 / 0.007607 (-0.000427) | 0.458113 / 0.226044 (0.232069) | 4.584096 / 2.268929 (2.315168) | 2.395307 / 55.444624 (-53.049317) | 2.080530 / 6.876477 (-4.795947) | 2.239000 / 2.142072 (0.096927) | 0.575822 / 4.805227 (-4.229405) | 0.133303 / 6.500664 (-6.367361) | 0.059449 / 0.075469 (-0.016020) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256496 / 1.841788 (-0.585291) | 19.651614 / 8.074308 (11.577306) | 14.232480 / 10.191392 (4.041088) | 0.146461 / 0.680424 (-0.533963) | 0.018632 / 0.534201 (-0.515569) | 0.399844 / 0.579283 (-0.179439) | 0.411225 / 0.434364 (-0.023139) | 0.458203 / 0.540337 (-0.082135) | 0.669916 / 1.386936 (-0.717020) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006463 / 0.011353 (-0.004890) | 0.003898 / 0.011008 (-0.007110) | 0.064037 / 0.038508 (0.025529) | 0.071982 / 0.023109 (0.048873) | 0.361936 / 0.275898 (0.086038) | 0.393165 / 0.323480 (0.069685) | 0.005207 / 0.007986 (-0.002779) | 0.003231 / 0.004328 (-0.001098) | 0.064318 / 0.004250 (0.060068) | 0.055776 / 0.037052 (0.018724) | 0.383087 / 0.258489 (0.124598) | 0.402428 / 0.293841 (0.108587) | 0.031587 / 0.128546 (-0.096959) | 0.008527 / 0.075646 (-0.067119) | 0.070495 / 0.419271 (-0.348777) | 0.048806 / 0.043533 (0.005273) | 0.369932 / 0.255139 (0.114793) | 0.385268 / 0.283200 (0.102068) | 0.023183 / 0.141683 (-0.118500) | 1.491175 / 1.452155 (0.039020) | 1.534191 / 1.492716 (0.041475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224526 / 0.018006 (0.206520) | 0.445460 / 0.000490 (0.444970) | 0.003612 / 0.000200 (0.003412) | 0.000089 / 0.000054 (0.000034) |\n\n### Benchmark: 
…(tail of two automated CML benchmark bot comments with PyArrow==8.0.0 and PyArrow==latest performance tables omitted)…\n" ]
2023-07-25T09:32:16
2023-07-25T10:02:43
2023-07-25T09:54:12
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6067", "html_url": "https://github.com/huggingface/datasets/pull/6067", "diff_url": "https://github.com/huggingface/datasets/pull/6067.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6067.patch", "merged_at": "2023-07-25T09:54:12" }
close https://github.com/huggingface/datasets/issues/6066
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6067/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6066
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6066/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6066/comments
https://api.github.com/repos/huggingface/datasets/issues/6066/events
https://github.com/huggingface/datasets/issues/6066
1,819,717,542
I_kwDODunzps5sdq-m
6,066
AttributeError: '_tqdm_cls' object has no attribute '_lock'
{ "login": "codingl2k1", "id": 138426806, "node_id": "U_kgDOCEA5tg", "avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codingl2k1", "html_url": "https://github.com/codingl2k1", "followers_url": "https://api.github.com/users/codingl2k1/followers", "following_url": "https://api.github.com/users/codingl2k1/following{/other_user}", "gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}", "starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions", "organizations_url": "https://api.github.com/users/codingl2k1/orgs", "repos_url": "https://api.github.com/users/codingl2k1/repos", "events_url": "https://api.github.com/users/codingl2k1/events{/privacy}", "received_events_url": "https://api.github.com/users/codingl2k1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! I opened https://github.com/huggingface/datasets/pull/6067 to add the missing `_lock`\r\n\r\nWe'll do a patch release soon, but feel free to install `datasets` from source in the meantime", "I have tested the latest main, it does not work.\r\n\r\nI add more logs to reproduce this issue, it looks like a multi threading bug:\r\n\r\n```python\r\n@contextmanager\r\ndef ensure_lock(tqdm_class, lock_name=\"\"):\r\n \"\"\"get (create if necessary) and then restore `tqdm_class`'s lock\"\"\"\r\n import os\r\n import threading\r\n print(os.getpid(), threading.get_ident(), \"ensure_lock\", tqdm_class, lock_name)\r\n old_lock = getattr(tqdm_class, '_lock', None) # don't create a new lock\r\n lock = old_lock or tqdm_class.get_lock() # maybe create a new lock\r\n lock = getattr(lock, lock_name, lock) # maybe subtype\r\n tqdm_class.set_lock(lock)\r\n print(os.getpid(), threading.get_ident(), \"set_lock\")\r\n yield lock\r\n if old_lock is None:\r\n print(os.getpid(), threading.get_ident(), \"del tqdm_class\")\r\n del tqdm_class._lock\r\n else:\r\n tqdm_class.set_lock(old_lock)\r\n```\r\noutput\r\n```\r\n64943 8424758784 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 8424758784 set_lock\r\n64943 8424758784 del tqdm_class\r\n64943 8424758784 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 8424758784 set_lock\r\n64943 8424758784 del tqdm_class\r\n64943 11638370304 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 11638370304 set_lock\r\n64943 11568967680 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 11568967680 set_lock\r\n64943 11638370304 del tqdm_class\r\n64943 11638370304 ensure_lock <datasets.utils.logging._tqdm_cls object at 0x2aa7fb250> \r\n64943 11638370304 set_lock\r\n64943 11638370304 del tqdm_class\r\n64943 11568967680 del tqdm_class\r\n```\r\n\r\nThread `11638370304` del the _lock from tqdm_class first, then thread `11568967680` del _lock failed.", "Maybe it is a bug of tqdm? I think simply use `try ... except AttributeError ...` wraps `del tqdm_class._lock` should work.", "Yes it looks like a bug on their end indeed, do you want to open a PR on tqdm ?\r\n\r\nLet me see if I can find a workaround in the meantime", "I opened https://github.com/huggingface/datasets/pull/6068 if you want to try it out", "> I opened #6068 if you want to try it out\r\n\r\nThis fix works! Thanks.", "Awesome ! closing this then :)\r\nWe'll do a patch release today or tomorrow" ]
2023-07-25T07:24:36
2023-07-26T10:56:25
2023-07-26T10:56:24
NONE
null
null
null
### Describe the bug ```python File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/load.py", line 1034, in get_module data_files = DataFilesDict.from_patterns( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 671, in from_patterns DataFilesList.from_patterns( File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 586, in from_patterns origin_metadata = _get_origin_metadata(data_files, download_config=download_config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 502, in _get_origin_metadata return thread_map( ^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 70, in thread_map return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 48, in _executor_map with ensure_lock(tqdm_class, lock_name=lock_name) as lk: File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/contextlib.py", line 144, in __exit__ next(self.gen) File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 25, in ensure_lock del tqdm_class._lock ^^^^^^^^^^^^^^^^ AttributeError: '_tqdm_cls' object has no attribute '_lock' ``` ### Steps to reproduce the bug Happens occasionally. ### Expected behavior I added a print in tqdm `ensure_lock()`, got an `ensure_lock <datasets.utils.logging._tqdm_cls object at 0x16dddead0> ` print. According to the code in https://github.com/tqdm/tqdm/blob/master/tqdm/contrib/concurrent.py#L24 ```python @contextmanager def ensure_lock(tqdm_class, lock_name=""): """get (create if necessary) and then restore `tqdm_class`'s lock""" print("ensure_lock", tqdm_class, lock_name) old_lock = getattr(tqdm_class, '_lock', None) # don't create a new lock lock = old_lock or tqdm_class.get_lock() # maybe create a new lock lock = getattr(lock, lock_name, lock) # maybe subtype tqdm_class.set_lock(lock) yield lock if old_lock is None: del tqdm_class._lock # <-- It tries to del the `_lock` attribute from tqdm_class. else: tqdm_class.set_lock(old_lock) ``` But Hugging Face's `datasets.utils.logging._tqdm_cls` does not have the attribute `_lock`: https://github.com/huggingface/datasets/blob/main/src/datasets/utils/logging.py#L205 ```python class _tqdm_cls: def __call__(self, *args, disable=False, **kwargs): if _tqdm_active and not disable: return tqdm_lib.tqdm(*args, **kwargs) else: return EmptyTqdm(*args, **kwargs) def set_lock(self, *args, **kwargs): self._lock = None if _tqdm_active: return tqdm_lib.tqdm.set_lock(*args, **kwargs) def get_lock(self): if _tqdm_active: return tqdm_lib.tqdm.get_lock() ``` ### Environment info Python 3.11.4 tqdm '4.65.0' datasets master
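The interleaving described in this report can be forced deterministically with a small two-thread harness. The sketch below is hypothetical (the `FakeTqdm` class and the barrier choreography are not from the issue); it only reproduces the read-set-delete sequence so that the second `del` raises the same `AttributeError`.

```python
# Hypothetical reproduction of the race: both threads observe no `_lock`,
# both set one, and the second thread's `del` then raises AttributeError.
import threading

class FakeTqdm:
    """Stand-in for a tqdm-like class; not the real _tqdm_cls."""
    def set_lock(self, lock=None):
        self._lock = lock

    def get_lock(self):
        return threading.RLock()

def worker(cls, read_barrier, delete_barrier):
    old_lock = getattr(cls, "_lock", None)   # both threads read None here...
    read_barrier.wait()                      # ...before either one sets a lock
    cls.set_lock(old_lock or cls.get_lock())
    delete_barrier.wait()                    # both threads reach cleanup together
    if old_lock is None:
        del cls._lock                        # the second `del` raises AttributeError

cls = FakeTqdm()
b1, b2 = threading.Barrier(2), threading.Barrier(2)
threads = [threading.Thread(target=worker, args=(cls, b1, b2)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # one thread prints the AttributeError traceback from the report
```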
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6066/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6066/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6065
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6065/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6065/comments
https://api.github.com/repos/huggingface/datasets/issues/6065/events
https://github.com/huggingface/datasets/pull/6065
1,819,334,932
PR_kwDODunzps5WR8jI
6,065
Add column type guessing from map return function
{ "login": "piercefreeman", "id": 1712066, "node_id": "MDQ6VXNlcjE3MTIwNjY=", "avatar_url": "https://avatars.githubusercontent.com/u/1712066?v=4", "gravatar_id": "", "url": "https://api.github.com/users/piercefreeman", "html_url": "https://github.com/piercefreeman", "followers_url": "https://api.github.com/users/piercefreeman/followers", "following_url": "https://api.github.com/users/piercefreeman/following{/other_user}", "gists_url": "https://api.github.com/users/piercefreeman/gists{/gist_id}", "starred_url": "https://api.github.com/users/piercefreeman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/piercefreeman/subscriptions", "organizations_url": "https://api.github.com/users/piercefreeman/orgs", "repos_url": "https://api.github.com/users/piercefreeman/repos", "events_url": "https://api.github.com/users/piercefreeman/events{/privacy}", "received_events_url": "https://api.github.com/users/piercefreeman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for working on this. However, having thought about this issue a bit more, supporting this doesn't seem like a good idea - it's better to be explicit than implicit, according to the Zen of Python 🙂. Also, I don't think many users would use this, so this raises the question of whether this is something we want to maintain.\r\n\r\ncc @lhoestq for the 2nd opinion", "@mariosasko I was going to quote the Zen of Python in the other direction :) To me, this actually is much more explicit than the current behavior of guessing pyarrow types based on the raw dictionary return values. Explicit typehinting is increasingly the de facto way to deal with this dynamic type serialization - plus it feels like a clearer fit to me than separating out the mapper function from the feature column definition in the call to the actual `.map()`. Another benefit is providing typehinting support for clients that use mypy or other static typecheckers to detect return mismatches.\r\n\r\nBut will leave it to you and @lhoestq to see if it's something you'd like in core versus a support package.", "I meant that explicitly specifying the target features (the `features` param) is cleaner (easier to track) than relying on type hints.", "Passing features= to `map()` is richer and more explicit. Also I don't think users would guess that such API exist.\r\n\r\nOther libraries like dask also infer the type from the output or requires the typing to be specified using the `meta` argument", "Point about discoverability is a fair one, would certainly need some docs around it. All good! Will close this out and keep in our extension utilities." ]
2023-07-25T00:34:17
2023-07-26T15:13:45
2023-07-26T15:13:44
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6065", "html_url": "https://github.com/huggingface/datasets/pull/6065", "diff_url": "https://github.com/huggingface/datasets/pull/6065.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6065.patch", "merged_at": null }
As discussed [here](https://github.com/huggingface/datasets/issues/5965), there are some cases where datasets is unable to automatically promote columns during mapping. The fix is to explicitly provide a `features` definition so pyarrow can configure itself with the right column types from the outset. This PR provides an alternative approach, which is functionally equivalent to specifying features but a bit cleaner within a larger mapping pipeline. It allows clients to typehint the return variable coming from the mapper function - if we find one of these type annotations specified, and no explicit features have been passed in, we'll try to convert it into a Features map. If the map function runs and casting is unable to succeed, it will raise a DatasetTransformationNotAllowedError that indicates the typehint may be to blame. It works for batched and non-batched mapping functions. Currently supported column types: - builtin primitives: string, int, float, bool - dictionaries, lists (nested and one-deep) - Optional types and None-unions (synonymous with optional types) It's used like: ```python class DatasetTyped(TypedDict): texts: list[str] def dataset_typed_map(batch) -> DatasetTyped: return {"texts": [text.split() for text in batch["raw_text"]]} dataset = {"raw_text": ["", "This is a test", "This is another test"]} with Dataset.from_dict(dataset) as dset: new_dataset = dset.map( dataset_typed_map, batched=True, batch_size=1, num_proc=1, ) ``` Open questions: - Should logging indicate we have automatically guessed these types? Or proceed quietly until we hit an error (as is the current implementation)?
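To make the proposed mechanism concrete, a type-hint-to-`Features` conversion along the lines the PR describes could be sketched as below. This helper is hypothetical (it is not the PR's code), assumes Python 3.10+ for `typing.is_typeddict`, and only covers the primitive and one-deep `list` cases from the list above:

```python
# Hypothetical sketch (not the PR's implementation) of deriving a
# `datasets.Features` object from a mapper's TypedDict return annotation.
import typing
from datasets import Features, Sequence, Value

_PRIMITIVES = {
    str: Value("string"),
    int: Value("int64"),
    float: Value("float64"),
    bool: Value("bool"),
}

def features_from_return_hint(fn):
    """Guess a Features object from `fn`'s TypedDict return hint, or return None."""
    return_type = typing.get_type_hints(fn).get("return")
    if return_type is None or not typing.is_typeddict(return_type):
        return None
    columns = {}
    for name, tp in typing.get_type_hints(return_type).items():
        if typing.get_origin(tp) is list:        # one-deep list column
            (inner,) = typing.get_args(tp)
            columns[name] = Sequence(_PRIMITIVES[inner])
        else:                                    # primitive column
            columns[name] = _PRIMITIVES[tp]
    return Features(columns)

# For the example above, features_from_return_hint(dataset_typed_map)
# would yield Features({"texts": Sequence(Value("string"))}).
```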
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6065/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6065/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6064
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6064/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6064/comments
https://api.github.com/repos/huggingface/datasets/issues/6064/events
https://github.com/huggingface/datasets/pull/6064
1,818,703,725
PR_kwDODunzps5WPzAv
6,064
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6064). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006704 / 0.011353 (-0.004649) | 0.004208 / 0.011008 (-0.006800) | 0.085895 / 0.038508 (0.047387) | 0.079303 / 0.023109 (0.056193) | 0.353430 / 0.275898 (0.077532) | 0.390814 / 0.323480 (0.067334) | 0.006565 / 0.007986 (-0.001420) | 0.003588 / 0.004328 (-0.000740) | 0.065249 / 0.004250 (0.060999) | 0.059772 / 0.037052 (0.022720) | 0.356315 / 0.258489 (0.097826) | 0.404812 / 0.293841 (0.110971) | 0.031127 / 0.128546 (-0.097419) | 0.008656 / 0.075646 (-0.066991) | 0.288734 / 0.419271 (-0.130537) | 0.053157 / 0.043533 (0.009625) | 0.354651 / 0.255139 (0.099512) | 0.370590 / 0.283200 (0.087391) | 0.024944 / 0.141683 (-0.116738) | 1.472393 / 1.452155 (0.020238) | 1.548946 / 1.492716 (0.056229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223430 / 0.018006 (0.205424) | 0.567359 / 0.000490 (0.566870) | 0.006744 / 0.000200 (0.006544) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030174 / 0.037411 (-0.007237) | 0.084865 / 0.014526 (0.070339) | 0.098986 / 0.176557 (-0.077571) | 0.161458 / 0.737135 (-0.575678) | 0.099198 / 0.296338 (-0.197141) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.404324 / 0.215209 (0.189115) | 4.043744 / 2.077655 (1.966090) | 1.972834 / 1.504120 (0.468714) | 1.801634 / 1.541195 (0.260439) | 1.891198 / 1.468490 (0.422708) | 0.488511 / 4.584777 (-4.096266) | 3.566890 / 3.745712 (-0.178823) | 3.369415 / 5.269862 (-1.900447) | 2.054995 / 4.565676 (-2.510682) | 0.057225 / 0.424275 (-0.367050) | 0.007360 / 0.007607 (-0.000247) | 0.471813 / 0.226044 (0.245769) | 4.734397 / 2.268929 (2.465468) | 2.526585 / 55.444624 (-52.918039) | 2.230535 / 6.876477 (-4.645942) | 2.434403 / 2.142072 (0.292330) | 0.630090 / 4.805227 (-4.175137) | 0.138544 / 6.500664 (-6.362120) | 0.060099 / 0.075469 (-0.015370) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260951 / 1.841788 (-0.580837) | 20.051513 / 8.074308 (11.977204) | 14.675938 / 10.191392 (4.484546) | 0.169535 / 0.680424 (-0.510889) | 0.018574 / 0.534201 (-0.515627) | 0.394255 / 0.579283 (-0.185028) | 0.412713 / 0.434364 (-0.021651) | 0.475891 / 0.540337 (-0.064446) | 0.658223 / 1.386936 (-0.728713) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006969 / 0.011353 (-0.004384) | 0.004417 / 0.011008 (-0.006591) | 0.064399 / 0.038508 (0.025891) | 0.082928 / 0.023109 (0.059819) | 0.402285 / 0.275898 (0.126387) | 0.440032 / 0.323480 (0.116552) | 0.005896 / 0.007986 (-0.002090) | 0.003580 / 0.004328 (-0.000749) | 0.065340 / 0.004250 (0.061090) | 0.060363 / 0.037052 (0.023311) | 0.417413 / 0.258489 (0.158924) | 0.448527 / 0.293841 (0.154686) | 0.032238 / 0.128546 (-0.096308) | 0.008820 / 0.075646 (-0.066826) | 0.071516 / 0.419271 (-0.347755) | 0.050614 / 0.043533 (0.007081) | 0.406565 / 0.255139 (0.151426) | 0.422527 / 0.283200 (0.139328) | 0.025866 / 0.141683 (-0.115817) | 1.512256 / 1.452155 (0.060101) | 1.568433 / 1.492716 (0.075717) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266521 / 0.018006 (0.248515) | 0.564524 / 0.000490 (0.564034) | 0.005236 / 0.000200 
(0.005036) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031998 / 0.037411 (-0.005413) | 0.090754 / 0.014526 (0.076229) | 0.105954 / 0.176557 (-0.070602) | 0.164506 / 0.737135 (-0.572629) | 0.108792 / 0.296338 (-0.187546) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422044 / 0.215209 (0.206835) | 4.204449 / 2.077655 (2.126795) | 2.232060 / 1.504120 (0.727940) | 2.060389 / 1.541195 (0.519194) | 2.152723 / 1.468490 (0.684233) | 0.488456 / 4.584777 (-4.096321) | 3.591102 / 3.745712 (-0.154611) | 5.250401 / 5.269862 (-0.019461) | 3.060259 / 4.565676 (-1.505417) | 0.057558 / 0.424275 (-0.366717) | 0.007881 / 0.007607 (0.000274) | 0.508631 / 0.226044 (0.282587) | 5.064857 / 2.268929 (2.795928) | 2.719068 / 55.444624 (-52.725556) | 2.389992 / 6.876477 (-4.486485) | 2.595073 / 2.142072 (0.453000) | 0.590179 / 4.805227 (-4.215048) | 0.136149 / 6.500664 (-6.364515) | 0.062546 / 0.075469 (-0.012923) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369252 / 1.841788 (-0.472535) | 20.637580 / 8.074308 (12.563272) | 14.217129 / 10.191392 (4.025737) | 0.195464 / 0.680424 (-0.484960) | 0.018452 / 0.534201 (-0.515749) | 0.397044 / 0.579283 (-0.182239) | 0.401127 / 0.434364 (-0.033237) | 0.465033 / 0.540337 (-0.075305) | 0.613484 / 1.386936 (-0.773452) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d9f1651128e50e7887f5e8eaaf6b55fe4cd84fdc \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated 
after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006793 / 0.011353 (-0.004559) | 0.004374 / 0.011008 (-0.006635) | 0.084958 / 0.038508 (0.046450) | 0.080440 / 0.023109 (0.057331) | 0.317951 / 0.275898 (0.042053) | 0.376133 / 0.323480 (0.052653) | 0.005775 / 0.007986 (-0.002211) | 0.003644 / 0.004328 (-0.000684) | 0.064823 / 0.004250 (0.060573) | 0.059442 / 0.037052 (0.022390) | 0.319636 / 0.258489 (0.061147) | 0.389668 / 0.293841 (0.095827) | 0.031181 / 0.128546 (-0.097365) | 0.008725 / 0.075646 (-0.066921) | 0.288514 / 0.419271 (-0.130757) | 0.053466 / 0.043533 (0.009933) | 0.323131 / 0.255139 (0.067992) | 0.345276 / 0.283200 (0.062076) | 0.025046 / 0.141683 (-0.116637) | 1.491659 / 1.452155 (0.039504) | 1.562105 / 1.492716 (0.069389) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286325 / 0.018006 (0.268319) | 0.578021 / 0.000490 (0.577531) | 0.007240 / 0.000200 (0.007040) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030163 / 0.037411 (-0.007248) | 0.082100 / 0.014526 (0.067574) | 0.098331 / 0.176557 (-0.078225) | 0.160517 / 0.737135 (-0.576618) | 0.098479 / 0.296338 (-0.197859) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401782 / 0.215209 (0.186573) | 4.006330 / 2.077655 (1.928675) | 2.033841 / 1.504120 (0.529721) | 1.853248 / 1.541195 (0.312053) | 1.980046 / 1.468490 (0.511556) | 0.480636 / 4.584777 (-4.104141) | 3.684482 / 3.745712 (-0.061231) | 5.601940 / 5.269862 (0.332079) | 3.369683 / 4.565676 (-1.195993) | 0.057105 / 0.424275 (-0.367170) | 0.007462 / 0.007607 (-0.000145) | 0.474860 / 0.226044 (0.248815) | 4.749624 / 2.268929 (2.480695) | 2.492084 / 55.444624 (-52.952540) | 2.157985 / 6.876477 (-4.718491) | 2.420997 / 2.142072 (0.278925) | 0.574718 / 4.805227 (-4.230509) | 0.134672 / 6.500664 (-6.365992) | 0.061677 / 0.075469 (-0.013792) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284151 / 1.841788 (-0.557637) | 20.186823 / 8.074308 (12.112515) | 14.247024 / 10.191392 (4.055632) | 0.171606 / 0.680424 (-0.508818) | 0.018619 / 0.534201 (-0.515582) | 0.394156 / 0.579283 (-0.185127) | 
0.424684 / 0.434364 (-0.009679) | 0.476056 / 0.540337 (-0.064281) | 0.668751 / 1.386936 (-0.718185) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006807 / 0.011353 (-0.004546) | 0.004142 / 0.011008 (-0.006867) | 0.065503 / 0.038508 (0.026995) | 0.083232 / 0.023109 (0.060122) | 0.378278 / 0.275898 (0.102380) | 0.410191 / 0.323480 (0.086711) | 0.005660 / 0.007986 (-0.002326) | 0.003486 / 0.004328 (-0.000842) | 0.066109 / 0.004250 (0.061859) | 0.059654 / 0.037052 (0.022601) | 0.375965 / 0.258489 (0.117476) | 0.420046 / 0.293841 (0.126205) | 0.031587 / 0.128546 (-0.096959) | 0.008693 / 0.075646 (-0.066953) | 0.071121 / 0.419271 (-0.348151) | 0.049468 / 0.043533 (0.005935) | 0.373785 / 0.255139 (0.118646) | 0.395577 / 0.283200 (0.112377) | 0.024138 / 0.141683 (-0.117545) | 1.465451 / 1.452155 (0.013297) | 1.547565 / 1.492716 (0.054849) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.325241 / 0.018006 (0.307234) | 0.532415 / 0.000490 (0.531925) | 0.004755 / 0.000200 (0.004555) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033472 / 0.037411 (-0.003939) | 0.090574 / 0.014526 (0.076048) | 0.106712 / 0.176557 (-0.069845) | 0.164353 / 0.737135 (-0.572783) | 0.109344 / 0.296338 (-0.186994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420161 / 0.215209 (0.204952) | 4.192334 / 2.077655 (2.114679) | 2.178181 / 1.504120 
(0.674061) | 2.017405 / 1.541195 (0.476211) | 2.182783 / 1.468490 (0.714293) | 0.484037 / 4.584777 (-4.100740) | 3.641911 / 3.745712 (-0.103801) | 5.543874 / 5.269862 (0.274013) | 3.440084 / 4.565676 (-1.125593) | 0.056662 / 0.424275 (-0.367614) | 0.007773 / 0.007607 (0.000166) | 0.498357 / 0.226044 (0.272313) | 4.951315 / 2.268929 (2.682386) | 2.656732 / 55.444624 (-52.787892) | 2.370566 / 6.876477 (-4.505910) | 2.682289 / 2.142072 (0.540217) | 0.598479 / 4.805227 (-4.206749) | 0.151546 / 6.500664 (-6.349118) | 0.063278 / 0.075469 (-0.012191) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.385897 / 1.841788 (-0.455891) | 20.961851 / 8.074308 (12.887543) | 14.465688 / 10.191392 (4.274296) | 0.166156 / 0.680424 (-0.514268) | 0.018848 / 0.534201 (-0.515353) | 0.401712 / 0.579283 (-0.177571) | 0.416674 / 0.434364 (-0.017690) | 0.471834 / 0.540337 (-0.068503) | 0.622463 / 1.386936 (-0.764473) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7e3ab9bc6ae8cc42f7e7d01afbd2637d51c3faf6 \"CML watermark\")\n" ]
2023-07-24T15:56:00
2023-07-24T16:05:19
2023-07-24T15:56:10
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6064", "html_url": "https://github.com/huggingface/datasets/pull/6064", "diff_url": "https://github.com/huggingface/datasets/pull/6064.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6064.patch", "merged_at": "2023-07-24T15:56:10" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6064/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6064/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6063
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6063/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6063/comments
https://api.github.com/repos/huggingface/datasets/issues/6063/events
https://github.com/huggingface/datasets/pull/6063
1,818,679,485
PR_kwDODunzps5WPtxi
6,063
Release: 2.14.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007703 / 0.011353 (-0.003650) | 0.004699 / 0.011008 (-0.006309) | 0.090195 / 0.038508 (0.051687) | 0.119165 / 0.023109 (0.096056) | 0.361435 / 0.275898 (0.085537) | 0.404429 / 0.323480 (0.080949) | 0.006172 / 0.007986 (-0.001814) | 0.003932 / 0.004328 (-0.000397) | 0.068384 / 0.004250 (0.064133) | 0.066730 / 0.037052 (0.029678) | 0.360978 / 0.258489 (0.102489) | 0.401301 / 0.293841 (0.107460) | 0.032836 / 0.128546 (-0.095710) | 0.010821 / 0.075646 (-0.064825) | 0.294526 / 0.419271 (-0.124745) | 0.068751 / 0.043533 (0.025218) | 0.368427 / 0.255139 (0.113288) | 0.376969 / 0.283200 (0.093770) | 0.040538 / 0.141683 (-0.101145) | 1.509966 / 1.452155 (0.057811) | 1.564885 / 1.492716 (0.072169) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292243 / 0.018006 (0.274237) | 0.662067 / 0.000490 (0.661577) | 0.004966 / 0.000200 (0.004766) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029050 / 0.037411 (-0.008361) | 0.099880 / 0.014526 (0.085354) | 0.109277 / 0.176557 (-0.067280) | 0.167877 / 0.737135 (-0.569258) | 0.110770 / 0.296338 (-0.185569) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395742 / 0.215209 (0.180533) | 3.944152 / 2.077655 (1.866498) | 
1.875295 / 1.504120 (0.371175) | 1.705088 / 1.541195 (0.163893) | 1.884443 / 1.468490 (0.415953) | 0.497243 / 4.584777 (-4.087534) | 3.749287 / 3.745712 (0.003575) | 4.418826 / 5.269862 (-0.851035) | 2.481149 / 4.565676 (-2.084528) | 0.058260 / 0.424275 (-0.366015) | 0.007744 / 0.007607 (0.000137) | 0.472531 / 0.226044 (0.246486) | 4.716022 / 2.268929 (2.447094) | 2.480446 / 55.444624 (-52.964179) | 2.163098 / 6.876477 (-4.713379) | 2.217555 / 2.142072 (0.075482) | 0.601965 / 4.805227 (-4.203262) | 0.139364 / 6.500664 (-6.361301) | 0.067097 / 0.075469 (-0.008372) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330537 / 1.841788 (-0.511251) | 22.176270 / 8.074308 (14.101962) | 16.224981 / 10.191392 (6.033589) | 0.173708 / 0.680424 (-0.506715) | 0.019402 / 0.534201 (-0.514799) | 0.401994 / 0.579283 (-0.177289) | 0.432597 / 0.434364 (-0.001767) | 0.489933 / 0.540337 (-0.050404) | 0.672334 / 1.386936 (-0.714602) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008622 / 0.011353 (-0.002731) | 0.004609 / 0.011008 (-0.006399) | 0.067791 / 0.038508 (0.029283) | 0.112770 / 0.023109 (0.089661) | 0.380939 / 0.275898 (0.105041) | 0.416940 / 0.323480 (0.093460) | 0.006170 / 0.007986 (-0.001815) | 0.003876 / 0.004328 (-0.000452) | 0.066227 / 0.004250 (0.061976) | 0.073132 / 0.037052 (0.036080) | 0.390120 / 0.258489 (0.131631) | 0.420893 / 0.293841 (0.127052) | 0.033235 / 0.128546 (-0.095311) | 0.009659 / 0.075646 (-0.065987) | 0.072668 / 0.419271 (-0.346604) | 0.051333 / 0.043533 (0.007801) | 0.393828 / 0.255139 (0.138689) | 0.412376 / 0.283200 (0.129176) | 0.027760 / 0.141683 (-0.113923) | 1.494369 / 1.452155 (0.042214) | 1.592862 / 1.492716 (0.100145) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.345376 / 0.018006 (0.327369) | 0.609399 / 0.000490 (0.608909) | 0.000546 / 0.000200 (0.000346) | 0.000061 / 0.000054 (0.000007) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035601 / 0.037411 (-0.001810) | 0.106527 / 0.014526 (0.092001) | 0.114388 / 0.176557 (-0.062168) | 0.175607 / 0.737135 (-0.561529) | 0.113009 / 0.296338 (-0.183329) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417237 / 0.215209 (0.202028) | 4.136329 / 2.077655 (2.058675) | 2.147134 / 1.504120 (0.643014) | 2.009501 / 1.541195 (0.468306) | 2.139499 / 1.468490 (0.671009) | 0.491593 / 4.584777 (-4.093184) | 3.766734 / 3.745712 (0.021022) | 5.652446 / 5.269862 (0.382585) | 3.021654 / 4.565676 (-1.544022) | 0.058458 / 0.424275 (-0.365817) | 0.008271 / 0.007607 (0.000664) | 0.488229 / 0.226044 (0.262184) | 4.861343 / 2.268929 (2.592415) | 2.694142 / 55.444624 (-52.750482) | 2.489130 / 6.876477 (-4.387346) | 2.679376 / 2.142072 (0.537304) | 0.589959 / 4.805227 (-4.215268) | 0.137939 / 6.500664 (-6.362725) | 0.066833 / 0.075469 (-0.008636) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.444871 / 1.841788 (-0.396916) | 22.874961 / 8.074308 (14.800653) | 15.842130 / 10.191392 (5.650738) | 0.175529 / 0.680424 (-0.504895) | 0.019024 / 0.534201 (-0.515177) | 0.406551 / 0.579283 (-0.172732) | 0.430335 / 0.434364 (-0.004029) | 0.475750 / 0.540337 (-0.064587) | 0.624836 / 1.386936 (-0.762100) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dabbb7467f49fd22ae1a43cc577eb43008d63ee8 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006068 / 0.011353 (-0.005285) | 0.003694 / 0.011008 (-0.007315) | 0.080321 / 0.038508 (0.041813) | 0.061738 / 0.023109 (0.038629) | 0.329675 / 0.275898 (0.053777) | 0.364008 / 0.323480 (0.040528) | 0.004722 / 0.007986 (-0.003263) | 0.002857 / 0.004328 (-0.001471) | 0.062447 / 0.004250 (0.058197) | 0.047006 / 0.037052 (0.009953) | 0.335730 / 0.258489 (0.077241) | 0.373047 / 0.293841 (0.079206) | 0.027273 / 0.128546 (-0.101274) | 0.007979 / 0.075646 (-0.067667) | 0.262693 / 0.419271 (-0.156579) | 0.045416 / 0.043533 (0.001883) | 0.340774 / 0.255139 (0.085635) | 0.359667 / 0.283200 (0.076468) | 0.020848 / 0.141683 (-0.120835) | 1.450110 / 1.452155 (-0.002045) | 1.489511 / 1.492716 (-0.003206) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185090 / 0.018006 (0.167084) | 0.429823 / 0.000490 (0.429334) | 0.000703 / 0.000200 (0.000503) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024398 / 0.037411 (-0.013013) | 0.072983 / 0.014526 (0.058457) | 0.084012 / 0.176557 (-0.092544) | 0.146160 / 0.737135 (-0.590975) | 0.084068 / 0.296338 (-0.212270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432204 / 0.215209 (0.216995) | 4.320593 / 2.077655 (2.242939) | 2.261260 / 1.504120 (0.757140) | 2.087148 / 1.541195 (0.545954) | 2.144520 / 1.468490 (0.676029) | 0.501477 / 4.584777 (-4.083300) | 3.119557 / 3.745712 (-0.626156) | 3.572527 / 5.269862 (-1.697335) | 2.208836 / 4.565676 (-2.356840) | 0.057232 / 0.424275 (-0.367043) | 0.006494 / 0.007607 (-0.001113) | 0.508135 / 0.226044 (0.282091) | 5.090416 / 2.268929 (2.821488) | 2.739800 / 55.444624 (-52.704824) | 2.416105 / 6.876477 (-4.460372) | 2.616037 / 2.142072 (0.473965) | 0.583730 / 4.805227 (-4.221497) | 0.124312 / 6.500664 (-6.376352) | 0.060760 / 0.075469 (-0.014709) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256097 / 1.841788 (-0.585691) | 18.326073 / 8.074308 (10.251765) | 13.859173 / 10.191392 (3.667781) | 0.143639 / 0.680424 (-0.536785) | 0.016649 / 0.534201 (-0.517552) | 0.331671 / 0.579283 (-0.247612) | 0.365370 / 0.434364 (-0.068994) | 0.392753 / 0.540337 
(-0.147584) | 0.549302 / 1.386936 (-0.837634) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006054 / 0.011353 (-0.005299) | 0.003641 / 0.011008 (-0.007367) | 0.063109 / 0.038508 (0.024601) | 0.060482 / 0.023109 (0.037372) | 0.404047 / 0.275898 (0.128149) | 0.425436 / 0.323480 (0.101956) | 0.004603 / 0.007986 (-0.003382) | 0.002905 / 0.004328 (-0.001423) | 0.063207 / 0.004250 (0.058956) | 0.048248 / 0.037052 (0.011196) | 0.404325 / 0.258489 (0.145836) | 0.432652 / 0.293841 (0.138811) | 0.027630 / 0.128546 (-0.100916) | 0.008062 / 0.075646 (-0.067584) | 0.068367 / 0.419271 (-0.350905) | 0.042169 / 0.043533 (-0.001364) | 0.384903 / 0.255139 (0.129764) | 0.418617 / 0.283200 (0.135417) | 0.020767 / 0.141683 (-0.120915) | 1.463606 / 1.452155 (0.011451) | 1.512081 / 1.492716 (0.019365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229601 / 0.018006 (0.211594) | 0.417878 / 0.000490 (0.417388) | 0.000373 / 0.000200 (0.000173) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026354 / 0.037411 (-0.011057) | 0.078100 / 0.014526 (0.063574) | 0.087122 / 0.176557 (-0.089434) | 0.140017 / 0.737135 (-0.597118) | 0.089923 / 0.296338 (-0.206415) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422405 / 0.215209 (0.207196) | 4.237383 / 2.077655 (2.159728) | 2.161104 / 1.504120 (0.656984) | 1.982337 / 1.541195 (0.441142) | 
2.050216 / 1.468490 (0.581726) | 0.499281 / 4.584777 (-4.085496) | 2.996953 / 3.745712 (-0.748759) | 5.027069 / 5.269862 (-0.242792) | 2.804703 / 4.565676 (-1.760974) | 0.057707 / 0.424275 (-0.366568) | 0.006809 / 0.007607 (-0.000798) | 0.495196 / 0.226044 (0.269152) | 4.946593 / 2.268929 (2.677665) | 2.598965 / 55.444624 (-52.845660) | 2.349871 / 6.876477 (-4.526606) | 2.451665 / 2.142072 (0.309593) | 0.592314 / 4.805227 (-4.212913) | 0.125685 / 6.500664 (-6.374979) | 0.063252 / 0.075469 (-0.012217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.325422 / 1.841788 (-0.516366) | 18.521059 / 8.074308 (10.446751) | 14.046757 / 10.191392 (3.855365) | 0.133009 / 0.680424 (-0.547415) | 0.017097 / 0.534201 (-0.517104) | 0.339804 / 0.579283 (-0.239479) | 0.345464 / 0.434364 (-0.088900) | 0.387623 / 0.540337 (-0.152714) | 0.519880 / 1.386936 (-0.867056) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#88896a7b28610ace95e444b94f9a4bc332cc1ee3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008671 / 0.011353 (-0.002682) | 0.004681 / 0.011008 (-0.006327) | 0.107517 / 0.038508 (0.069008) | 0.078846 / 0.023109 (0.055737) | 0.449745 / 0.275898 (0.173847) | 0.504075 / 0.323480 (0.180596) | 0.005837 / 0.007986 (-0.002148) | 0.004031 / 0.004328 (-0.000297) | 0.092021 / 0.004250 (0.087771) | 0.065954 / 0.037052 (0.028902) | 0.442082 / 0.258489 (0.183593) | 0.529349 / 0.293841 (0.235508) | 0.052527 / 0.128546 (-0.076019) | 0.013854 / 0.075646 (-0.061792) | 0.367315 / 0.419271 (-0.051956) | 0.068731 / 0.043533 (0.025199) | 0.494733 / 0.255139 (0.239594) | 0.472801 / 0.283200 (0.189601) | 0.036791 / 0.141683 (-0.104892) | 1.877648 / 1.452155 (0.425493) | 1.928399 / 1.492716 (0.435683) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231910 / 0.018006 (0.213904) | 0.553464 / 0.000490 (0.552974) | 
0.011915 / 0.000200 (0.011715) | 0.000378 / 0.000054 (0.000324) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028232 / 0.037411 (-0.009179) | 0.091441 / 0.014526 (0.076916) | 0.110394 / 0.176557 (-0.066162) | 0.187638 / 0.737135 (-0.549497) | 0.111810 / 0.296338 (-0.184529) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.599987 / 0.215209 (0.384778) | 6.008709 / 2.077655 (3.931054) | 2.518769 / 1.504120 (1.014650) | 2.197029 / 1.541195 (0.655834) | 2.217165 / 1.468490 (0.748675) | 0.894939 / 4.584777 (-3.689837) | 5.001217 / 3.745712 (1.255505) | 4.636482 / 5.269862 (-0.633379) | 3.237613 / 4.565676 (-1.328063) | 0.104227 / 0.424275 (-0.320048) | 0.008504 / 0.007607 (0.000897) | 0.750190 / 0.226044 (0.524145) | 7.514571 / 2.268929 (5.245642) | 3.358003 / 55.444624 (-52.086621) | 2.585649 / 6.876477 (-4.290827) | 2.731129 / 2.142072 (0.589056) | 1.088828 / 4.805227 (-3.716400) | 0.217308 / 6.500664 (-6.283356) | 0.076410 / 0.075469 (0.000941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.620087 / 1.841788 (-0.221701) | 23.145743 / 8.074308 (15.071435) | 20.583403 / 10.191392 (10.392011) | 0.225467 / 0.680424 (-0.454956) | 0.029063 / 0.534201 (-0.505138) | 0.480563 / 0.579283 (-0.098720) | 0.539083 / 0.434364 (0.104719) | 0.563787 / 0.540337 (0.023449) | 0.782902 / 1.386936 (-0.604034) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010113 / 0.011353 (-0.001239) | 0.004997 / 0.011008 (-0.006011) | 0.082974 / 0.038508 (0.044466) | 0.090375 / 0.023109 (0.067266) | 0.440273 / 0.275898 (0.164375) | 0.476939 / 0.323480 (0.153459) | 0.005955 / 0.007986 (-0.002031) | 0.004375 / 0.004328 (0.000046) | 0.080459 / 0.004250 (0.076209) | 0.061787 / 0.037052 (0.024734) | 0.477211 / 0.258489 (0.218722) | 0.487164 / 0.293841 (0.193323) | 0.054198 / 0.128546 (-0.074348) | 0.013945 / 0.075646 (-0.061701) | 0.093006 / 0.419271 (-0.326266) | 0.062685 / 0.043533 (0.019152) | 0.461373 / 0.255139 (0.206234) | 0.475766 / 0.283200 (0.192567) | 0.032059 / 0.141683 (-0.109623) | 1.857989 / 1.452155 (0.405834) | 1.837993 / 1.492716 (0.345277) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243048 / 0.018006 (0.225042) | 0.535850 / 0.000490 (0.535360) | 0.007204 / 0.000200 (0.007004) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032584 / 0.037411 (-0.004827) | 0.098151 / 0.014526 (0.083625) | 0.109691 / 0.176557 (-0.066866) | 0.172803 / 0.737135 (-0.564333) | 0.110469 / 0.296338 (-0.185869) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.635086 / 0.215209 (0.419877) | 6.500864 / 2.077655 (4.423210) | 2.996727 / 1.504120 (1.492607) | 2.537218 / 1.541195 (0.996023) | 2.572310 / 1.468490 (1.103820) | 0.870868 / 4.584777 (-3.713909) | 4.989744 / 3.745712 (1.244032) | 4.422174 / 5.269862 (-0.847687) | 2.935874 / 4.565676 (-1.629803) | 0.097118 / 0.424275 (-0.327157) | 0.009360 / 0.007607 (0.001753) | 0.790447 / 0.226044 (0.564403) | 7.859519 / 2.268929 (5.590591) | 3.975616 / 55.444624 (-51.469009) | 3.018271 / 6.876477 (-3.858206) | 3.111173 / 2.142072 (0.969101) | 1.085577 / 4.805227 (-3.719651) | 0.225719 / 6.500664 (-6.274945) | 0.080576 / 0.075469 (0.005107) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.802284 / 1.841788 (-0.039504) | 23.487921 / 8.074308 (15.413613) | 20.595171 / 10.191392 (10.403779) | 0.196610 / 0.680424 (-0.483814) | 0.027483 / 0.534201 (-0.506718) | 0.485840 / 0.579283 (-0.093443) | 0.542661 / 0.434364 (0.108297) | 0.580602 / 0.540337 (0.040265) | 0.768195 / 1.386936 (-0.618741) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#88896a7b28610ace95e444b94f9a4bc332cc1ee3 \"CML watermark\")\n" ]
2023-07-24T15:41:19
2023-07-24T16:05:16
2023-07-24T15:47:51
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6063", "html_url": "https://github.com/huggingface/datasets/pull/6063", "diff_url": "https://github.com/huggingface/datasets/pull/6063.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6063.patch", "merged_at": "2023-07-24T15:47:51" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6063/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6063/timeline
null
null
true
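The escaped comment blobs in the records of this dump are auto-posted CML benchmark reports from the `datasets` CI, timing operations such as batched `map`, `filter`, shuffling and formatted reads under two PyArrow versions. As a rough sketch only — the 50k-row toy dataset, the `text` column and the use of `time.perf_counter` are illustrative assumptions, not the actual CI harness — one of those timings could be approximated locally with the public API:

```python
import time

from datasets import Dataset

# Illustrative toy data; the real CI benchmarks use their own fixtures.
ds = Dataset.from_dict({"text": ["hello"] * 50_000})

start = time.perf_counter()
ds.map(lambda batch: batch, batched=True)  # analogous to the "map no-op batched" metric
print(f"map no-op batched: {time.perf_counter() - start:.3f}s")
```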
https://api.github.com/repos/huggingface/datasets/issues/6062
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6062/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6062/comments
https://api.github.com/repos/huggingface/datasets/issues/6062/events
https://github.com/huggingface/datasets/pull/6062
1,818,341,584
PR_kwDODunzps5WOj62
6,062
Improve `Dataset.from_list` docstring
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008340 / 0.011353 (-0.003013) | 0.005053 / 0.011008 (-0.005955) | 0.103294 / 0.038508 (0.064786) | 0.069417 / 0.023109 (0.046308) | 0.436922 / 0.275898 (0.161024) | 0.461348 / 0.323480 (0.137868) | 0.006030 / 0.007986 (-0.001955) | 0.003727 / 0.004328 (-0.000601) | 0.076384 / 0.004250 (0.072134) | 0.056742 / 0.037052 (0.019689) | 0.439996 / 0.258489 (0.181507) | 0.469417 / 0.293841 (0.175577) | 0.044343 / 0.128546 (-0.084203) | 0.012634 / 0.075646 (-0.063013) | 0.359746 / 0.419271 (-0.059525) | 0.064842 / 0.043533 (0.021309) | 0.425960 / 0.255139 (0.170821) | 0.458568 / 0.283200 (0.175368) | 0.039802 / 0.141683 (-0.101881) | 1.687320 / 1.452155 (0.235165) | 1.806212 / 1.492716 (0.313496) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255484 / 0.018006 (0.237478) | 0.563039 / 0.000490 (0.562549) | 0.000445 / 0.000200 (0.000245) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027511 / 0.037411 (-0.009900) | 0.089185 / 0.014526 (0.074659) | 0.098397 / 0.176557 (-0.078160) | 0.163897 / 0.737135 (-0.573238) | 0.099905 / 0.296338 (-0.196434) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.612737 / 0.215209 (0.397528) | 6.209948 / 2.077655 (4.132294) | 
2.756060 / 1.504120 (1.251940) | 2.402115 / 1.541195 (0.860920) | 2.422665 / 1.468490 (0.954175) | 0.834799 / 4.584777 (-3.749977) | 5.251699 / 3.745712 (1.505986) | 5.554141 / 5.269862 (0.284280) | 3.254699 / 4.565676 (-1.310977) | 0.095697 / 0.424275 (-0.328578) | 0.009406 / 0.007607 (0.001799) | 0.729025 / 0.226044 (0.502980) | 7.195521 / 2.268929 (4.926593) | 3.360264 / 55.444624 (-52.084361) | 2.696764 / 6.876477 (-4.179713) | 2.702796 / 2.142072 (0.560724) | 0.974420 / 4.805227 (-3.830808) | 0.195215 / 6.500664 (-6.305450) | 0.069754 / 0.075469 (-0.005715) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.553458 / 1.841788 (-0.288330) | 21.972436 / 8.074308 (13.898128) | 20.027392 / 10.191392 (9.836000) | 0.216950 / 0.680424 (-0.463474) | 0.032196 / 0.534201 (-0.502005) | 0.449884 / 0.579283 (-0.129399) | 0.586213 / 0.434364 (0.151849) | 0.537227 / 0.540337 (-0.003111) | 0.751022 / 1.386936 (-0.635914) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007859 / 0.011353 (-0.003493) | 0.004762 / 0.011008 (-0.006246) | 0.086023 / 0.038508 (0.047515) | 0.069218 / 0.023109 (0.046109) | 0.449312 / 0.275898 (0.173414) | 0.481687 / 0.323480 (0.158207) | 0.006318 / 0.007986 (-0.001668) | 0.004063 / 0.004328 (-0.000266) | 0.076917 / 0.004250 (0.072667) | 0.058034 / 0.037052 (0.020981) | 0.474265 / 0.258489 (0.215775) | 0.497736 / 0.293841 (0.203895) | 0.044587 / 0.128546 (-0.083959) | 0.013880 / 0.075646 (-0.061766) | 0.089233 / 0.419271 (-0.330038) | 0.058760 / 0.043533 (0.015227) | 0.439515 / 0.255139 (0.184376) | 0.473246 / 0.283200 (0.190047) | 0.042968 / 0.141683 (-0.098715) | 1.802647 / 1.452155 (0.350493) | 1.778563 / 1.492716 (0.285847) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.343741 / 0.018006 (0.325735) | 0.567409 / 0.000490 (0.566919) | 0.029727 / 0.000200 (0.029527) | 0.000147 / 0.000054 (0.000092) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031021 / 0.037411 (-0.006390) | 0.096659 / 0.014526 (0.082133) | 0.103341 / 0.176557 (-0.073215) | 0.169893 / 0.737135 (-0.567242) | 0.103280 / 0.296338 (-0.193058) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584724 / 0.215209 (0.369515) | 5.792596 / 2.077655 (3.714941) | 2.683133 / 1.504120 (1.179013) | 2.367837 / 1.541195 (0.826643) | 2.378567 / 1.468490 (0.910076) | 0.803427 / 4.584777 (-3.781350) | 5.179017 / 3.745712 (1.433305) | 4.446323 / 5.269862 (-0.823538) | 2.771731 / 4.565676 (-1.793945) | 0.100943 / 0.424275 (-0.323332) | 0.009875 / 0.007607 (0.002268) | 0.725260 / 0.226044 (0.499216) | 7.149728 / 2.268929 (4.880800) | 3.646438 / 55.444624 (-51.798187) | 2.793858 / 6.876477 (-4.082618) | 2.971966 / 2.142072 (0.829894) | 0.998147 / 4.805227 (-3.807080) | 0.198004 / 6.500664 (-6.302660) | 0.072581 / 0.075469 (-0.002888) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.696737 / 1.841788 (-0.145051) | 22.615193 / 8.074308 (14.540884) | 20.272421 / 10.191392 (10.081029) | 0.237459 / 0.680424 (-0.442965) | 0.034774 / 0.534201 (-0.499427) | 0.484649 / 0.579283 (-0.094634) | 0.590263 / 0.434364 (0.155899) | 0.547833 / 0.540337 (0.007495) | 0.762109 / 1.386936 (-0.624827) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4bc3628b5a8f71ad7cfc014d8ba5e798f26becb7 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011183 / 0.011353 (-0.000170) | 0.005267 / 0.011008 (-0.005741) | 0.108506 / 0.038508 (0.069997) | 0.083541 / 0.023109 (0.060431) | 0.452189 / 0.275898 (0.176291) | 0.496229 / 0.323480 (0.172749) | 0.004951 / 0.007986 (-0.003035) | 0.004452 / 0.004328 (0.000124) | 0.085133 / 0.004250 (0.080883) | 0.061291 / 0.037052 (0.024239) | 0.450453 / 0.258489 (0.191964) | 0.506456 / 0.293841 (0.212616) | 0.049784 / 0.128546 (-0.078762) | 0.014738 / 0.075646 (-0.060908) | 0.372603 / 0.419271 (-0.046669) | 0.065223 / 0.043533 (0.021690) | 0.467872 / 0.255139 (0.212733) | 0.500062 / 0.283200 (0.216862) | 0.040911 / 0.141683 (-0.100772) | 1.852970 / 1.452155 (0.400816) | 2.016996 / 1.492716 (0.524280) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.262620 / 0.018006 (0.244614) | 0.593925 / 0.000490 (0.593435) | 0.000413 / 0.000200 (0.000213) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035713 / 0.037411 (-0.001698) | 0.111403 / 0.014526 (0.096878) | 0.117259 / 0.176557 (-0.059298) | 0.201545 / 0.737135 (-0.535590) | 0.133111 / 0.296338 (-0.163228) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.597318 / 0.215209 (0.382109) | 5.882691 / 2.077655 (3.805036) | 2.572203 / 1.504120 (1.068083) | 2.248016 / 1.541195 (0.706821) | 2.359103 / 1.468490 (0.890613) | 0.852023 / 4.584777 (-3.732754) | 5.270831 / 3.745712 (1.525119) | 4.712915 / 5.269862 (-0.556947) | 3.124295 / 4.565676 (-1.441381) | 0.092045 / 0.424275 (-0.332230) | 0.007834 / 0.007607 (0.000227) | 0.695711 / 0.226044 (0.469666) | 7.011760 / 2.268929 (4.742831) | 3.333300 / 55.444624 (-52.111325) | 2.745889 / 6.876477 (-4.130587) | 3.153458 / 2.142072 (1.011385) | 1.011089 / 4.805227 (-3.794139) | 0.207467 / 6.500664 (-6.293197) | 0.079802 / 0.075469 (0.004333) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.703784 / 1.841788 (-0.138003) | 24.414340 / 8.074308 (16.340032) | 22.534528 / 10.191392 (12.343136) | 0.276129 / 0.680424 (-0.404295) | 0.027954 / 0.534201 (-0.506247) | 0.484261 / 0.579283 (-0.095022) | 0.605316 / 0.434364 (0.170952) | 0.557219 / 0.540337 
(0.016882) | 0.802209 / 1.386936 (-0.584727) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009109 / 0.011353 (-0.002244) | 0.005376 / 0.011008 (-0.005632) | 0.085141 / 0.038508 (0.046633) | 0.100560 / 0.023109 (0.077450) | 0.482673 / 0.275898 (0.206775) | 0.551582 / 0.323480 (0.228103) | 0.006756 / 0.007986 (-0.001229) | 0.004171 / 0.004328 (-0.000158) | 0.084184 / 0.004250 (0.079933) | 0.069283 / 0.037052 (0.032230) | 0.517722 / 0.258489 (0.259233) | 0.542641 / 0.293841 (0.248801) | 0.047790 / 0.128546 (-0.080756) | 0.014063 / 0.075646 (-0.061583) | 0.110591 / 0.419271 (-0.308680) | 0.064373 / 0.043533 (0.020840) | 0.496636 / 0.255139 (0.241497) | 0.551906 / 0.283200 (0.268707) | 0.046187 / 0.141683 (-0.095496) | 1.864836 / 1.452155 (0.412681) | 1.923765 / 1.492716 (0.431049) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286558 / 0.018006 (0.268552) | 0.610353 / 0.000490 (0.609863) | 0.012647 / 0.000200 (0.012447) | 0.000162 / 0.000054 (0.000107) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037099 / 0.037411 (-0.000313) | 0.108608 / 0.014526 (0.094082) | 0.120386 / 0.176557 (-0.056170) | 0.183450 / 0.737135 (-0.553686) | 0.124860 / 0.296338 (-0.171479) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629006 / 0.215209 (0.413797) | 6.309206 / 2.077655 (4.231551) | 2.878558 / 1.504120 (1.374438) | 2.616093 / 1.541195 (1.074898) | 2.668096 / 
1.468490 (1.199606) | 0.865732 / 4.584777 (-3.719045) | 5.312433 / 3.745712 (1.566721) | 4.799352 / 5.269862 (-0.470509) | 3.142207 / 4.565676 (-1.423469) | 0.099591 / 0.424275 (-0.324684) | 0.009159 / 0.007607 (0.001552) | 0.730999 / 0.226044 (0.504954) | 7.486442 / 2.268929 (5.217513) | 3.657699 / 55.444624 (-51.786925) | 3.080094 / 6.876477 (-3.796383) | 3.320976 / 2.142072 (1.178904) | 1.089324 / 4.805227 (-3.715904) | 0.222831 / 6.500664 (-6.277833) | 0.083976 / 0.075469 (0.008507) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.793181 / 1.841788 (-0.048607) | 25.307444 / 8.074308 (17.233136) | 21.321713 / 10.191392 (11.130321) | 0.216326 / 0.680424 (-0.464098) | 0.034298 / 0.534201 (-0.499903) | 0.497173 / 0.579283 (-0.082110) | 0.643550 / 0.434364 (0.209186) | 0.581213 / 0.540337 (0.040876) | 0.830973 / 1.386936 (-0.555963) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#24875bb8494c3a7803182b08c70747b1b1a6bf4d \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006886 / 0.011353 (-0.004467) | 0.004267 / 0.011008 (-0.006741) | 0.086182 / 0.038508 (0.047674) | 0.083405 / 0.023109 (0.060296) | 0.313717 / 0.275898 (0.037819) | 0.351476 / 0.323480 (0.027996) | 0.005702 / 0.007986 (-0.002284) | 0.003802 / 0.004328 (-0.000526) | 0.065759 / 0.004250 (0.061508) | 0.060056 / 0.037052 (0.023003) | 0.315871 / 0.258489 (0.057382) | 0.364520 / 0.293841 (0.070679) | 0.032067 / 0.128546 (-0.096479) | 0.008679 / 0.075646 (-0.066967) | 0.294968 / 0.419271 (-0.124303) | 0.054684 / 0.043533 (0.011152) | 0.314124 / 0.255139 (0.058985) | 0.337312 / 0.283200 (0.054113) | 0.025051 / 0.141683 (-0.116632) | 1.505242 / 1.452155 (0.053087) | 1.608263 / 1.492716 (0.115547) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266562 / 0.018006 (0.248556) | 0.579887 / 0.000490 (0.579397) | 0.004161 / 0.000200 
(0.003961) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031153 / 0.037411 (-0.006258) | 0.087703 / 0.014526 (0.073177) | 0.103864 / 0.176557 (-0.072693) | 0.159032 / 0.737135 (-0.578104) | 0.102482 / 0.296338 (-0.193857) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405805 / 0.215209 (0.190596) | 4.050669 / 2.077655 (1.973014) | 2.064384 / 1.504120 (0.560264) | 1.892825 / 1.541195 (0.351630) | 2.001083 / 1.468490 (0.532593) | 0.478174 / 4.584777 (-4.106603) | 3.542580 / 3.745712 (-0.203132) | 3.319205 / 5.269862 (-1.950656) | 2.075868 / 4.565676 (-2.489808) | 0.057345 / 0.424275 (-0.366930) | 0.007459 / 0.007607 (-0.000148) | 0.483564 / 0.226044 (0.257520) | 4.827746 / 2.268929 (2.558818) | 2.579541 / 55.444624 (-52.865083) | 2.205125 / 6.876477 (-4.671352) | 2.489206 / 2.142072 (0.347133) | 0.575843 / 4.805227 (-4.229384) | 0.133010 / 6.500664 (-6.367654) | 0.061082 / 0.075469 (-0.014387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286059 / 1.841788 (-0.555729) | 20.575173 / 8.074308 (12.500865) | 14.351692 / 10.191392 (4.160300) | 0.150401 / 0.680424 (-0.530022) | 0.018678 / 0.534201 (-0.515523) | 0.397860 / 0.579283 (-0.181423) | 0.419474 / 0.434364 (-0.014890) | 0.474492 / 0.540337 (-0.065846) | 0.659510 / 1.386936 (-0.727426) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006948 / 0.011353 (-0.004405) | 0.004305 / 0.011008 (-0.006703) | 0.064220 / 0.038508 (0.025712) | 0.083251 / 0.023109 (0.060142) | 0.388148 / 0.275898 (0.112250) | 0.417834 / 0.323480 (0.094354) | 0.005762 / 0.007986 (-0.002224) | 0.003803 / 0.004328 (-0.000525) | 0.066365 / 0.004250 (0.062114) | 0.061808 / 0.037052 (0.024756) | 0.390889 / 0.258489 (0.132400) | 0.430619 / 0.293841 (0.136778) | 0.031777 / 0.128546 (-0.096770) | 0.008781 / 0.075646 (-0.066865) | 0.070844 / 0.419271 (-0.348427) | 0.050552 / 0.043533 (0.007019) | 0.378420 / 0.255139 (0.123281) | 0.403273 / 0.283200 (0.120074) | 0.024578 / 0.141683 (-0.117105) | 1.494790 / 1.452155 (0.042636) | 1.549408 / 1.492716 (0.056692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.302668 / 0.018006 (0.284662) | 0.542235 / 0.000490 (0.541746) | 0.001847 / 0.000200 (0.001647) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031947 / 0.037411 (-0.005465) | 0.092220 / 0.014526 (0.077694) | 0.104525 / 0.176557 (-0.072031) | 0.162000 / 0.737135 (-0.575135) | 0.106795 / 0.296338 (-0.189543) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412035 / 0.215209 (0.196826) | 4.106527 / 2.077655 (2.028872) | 2.111529 / 1.504120 (0.607409) | 1.953201 / 1.541195 (0.412006) | 2.079258 / 1.468490 (0.610768) | 0.479562 / 4.584777 (-4.105215) | 3.606256 / 3.745712 (-0.139456) | 5.175250 / 5.269862 (-0.094612) | 3.292465 / 4.565676 (-1.273212) | 0.057726 / 0.424275 (-0.366549) | 0.008247 / 0.007607 (0.000640) | 0.486143 / 0.226044 (0.260098) | 4.859051 / 2.268929 (2.590123) | 2.675629 / 55.444624 (-52.768995) | 2.267448 / 6.876477 (-4.609029) | 2.567639 / 2.142072 (0.425567) | 0.580822 / 4.805227 (-4.224406) | 0.134942 / 6.500664 (-6.365722) | 0.063825 / 0.075469 (-0.011644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.334421 / 1.841788 (-0.507367) | 20.481428 / 8.074308 (12.407120) | 14.227943 / 10.191392 (4.036551) | 0.170711 / 0.680424 (-0.509713) | 0.018212 / 0.534201 (-0.515989) | 0.397212 / 0.579283 (-0.182071) | 0.411934 / 0.434364 (-0.022430) | 0.478019 / 0.540337 (-0.062319) | 0.645434 / 1.386936 (-0.741502) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef3d3f10886e23a65cce3bfd939b8ec0d5a5c2c1 \"CML watermark\")\n" ]
2023-07-24T12:36:38
2023-07-24T14:43:48
2023-07-24T14:34:43
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6062", "html_url": "https://github.com/huggingface/datasets/pull/6062", "diff_url": "https://github.com/huggingface/datasets/pull/6062.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6062.patch", "merged_at": "2023-07-24T14:34:43" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6062/timeline
null
null
true
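The record above is PR #6062, which improves the docstring of `Dataset.from_list`. For readers skimming this dump, a minimal usage sketch of that API (the column names and values below are made up for illustration):

```python
from datasets import Dataset

# Each dict becomes one row; dict keys become the column names.
ds = Dataset.from_list([{"text": "foo", "label": 0}, {"text": "bar", "label": 1}])
print(ds.column_names)  # ['text', 'label']
print(ds[0])            # {'text': 'foo', 'label': 0}
```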
https://api.github.com/repos/huggingface/datasets/issues/6061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6061/comments
https://api.github.com/repos/huggingface/datasets/issues/6061/events
https://github.com/huggingface/datasets/pull/6061
1,818,337,136
PR_kwDODunzps5WOi79
6,061
Dill 3.7 support
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007700 / 0.011353 (-0.003653) | 0.004680 / 0.011008 (-0.006328) | 0.098812 / 0.038508 (0.060304) | 0.085062 / 0.023109 (0.061952) | 0.371472 / 0.275898 (0.095574) | 0.412552 / 0.323480 (0.089072) | 0.004700 / 0.007986 (-0.003285) | 0.003765 / 0.004328 (-0.000564) | 0.074267 / 0.004250 (0.070017) | 0.063003 / 0.037052 (0.025951) | 0.391842 / 0.258489 (0.133353) | 0.436955 / 0.293841 (0.143114) | 0.035291 / 0.128546 (-0.093255) | 0.009309 / 0.075646 (-0.066338) | 0.313097 / 0.419271 (-0.106174) | 0.060098 / 0.043533 (0.016565) | 0.350726 / 0.255139 (0.095587) | 0.402692 / 0.283200 (0.119493) | 0.029321 / 0.141683 (-0.112361) | 1.671806 / 1.452155 (0.219651) | 1.743760 / 1.492716 (0.251044) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242281 / 0.018006 (0.224275) | 0.505054 / 0.000490 (0.504564) | 0.006595 / 0.000200 (0.006395) | 0.000091 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032174 / 0.037411 (-0.005238) | 0.094483 / 0.014526 (0.079957) | 0.108527 / 0.176557 (-0.068030) | 0.178983 / 0.737135 (-0.558152) | 0.113766 / 0.296338 (-0.182572) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419764 / 0.215209 (0.204555) | 4.282650 / 2.077655 (2.204995) | 
2.075325 / 1.504120 (0.571205) | 1.897668 / 1.541195 (0.356473) | 2.027109 / 1.468490 (0.558619) | 0.519983 / 4.584777 (-4.064794) | 4.134603 / 3.745712 (0.388891) | 6.586711 / 5.269862 (1.316849) | 3.811726 / 4.565676 (-0.753951) | 0.058628 / 0.424275 (-0.365647) | 0.007586 / 0.007607 (-0.000021) | 0.502180 / 0.226044 (0.276136) | 5.101588 / 2.268929 (2.832660) | 2.534295 / 55.444624 (-52.910330) | 2.220170 / 6.876477 (-4.656307) | 2.441110 / 2.142072 (0.299038) | 0.644775 / 4.805227 (-4.160452) | 0.144716 / 6.500664 (-6.355948) | 0.067018 / 0.075469 (-0.008451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.431279 / 1.841788 (-0.410508) | 21.947814 / 8.074308 (13.873506) | 15.548236 / 10.191392 (5.356844) | 0.174774 / 0.680424 (-0.505650) | 0.021182 / 0.534201 (-0.513019) | 0.441320 / 0.579283 (-0.137963) | 0.476685 / 0.434364 (0.042321) | 0.506277 / 0.540337 (-0.034060) | 0.809943 / 1.386936 (-0.576993) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007172 / 0.011353 (-0.004181) | 0.004358 / 0.011008 (-0.006650) | 0.068604 / 0.038508 (0.030096) | 0.083956 / 0.023109 (0.060847) | 0.402579 / 0.275898 (0.126681) | 0.444714 / 0.323480 (0.121235) | 0.005940 / 0.007986 (-0.002046) | 0.003607 / 0.004328 (-0.000722) | 0.073134 / 0.004250 (0.068883) | 0.061722 / 0.037052 (0.024669) | 0.410957 / 0.258489 (0.152468) | 0.458819 / 0.293841 (0.164978) | 0.033710 / 0.128546 (-0.094836) | 0.010230 / 0.075646 (-0.065417) | 0.084678 / 0.419271 (-0.334593) | 0.058203 / 0.043533 (0.014670) | 0.444972 / 0.255139 (0.189833) | 0.470962 / 0.283200 (0.187763) | 0.029222 / 0.141683 (-0.112461) | 1.671460 / 1.452155 (0.219306) | 1.759471 / 1.492716 (0.266754) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238894 / 0.018006 (0.220888) | 0.493605 / 0.000490 (0.493115) | 0.001979 / 0.000200 (0.001780) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036498 / 0.037411 (-0.000913) | 0.095245 / 0.014526 (0.080719) | 0.112147 / 0.176557 (-0.064409) | 0.171128 / 0.737135 (-0.566007) | 0.115295 / 0.296338 (-0.181044) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.461067 / 0.215209 (0.245858) | 4.723932 / 2.077655 (2.646277) | 2.432697 / 1.504120 (0.928578) | 2.237302 / 1.541195 (0.696107) | 2.351320 / 1.468490 (0.882830) | 0.509963 / 4.584777 (-4.074813) | 4.194817 / 3.745712 (0.449105) | 6.689529 / 5.269862 (1.419667) | 3.351198 / 4.565676 (-1.214478) | 0.064563 / 0.424275 (-0.359712) | 0.008605 / 0.007607 (0.000998) | 0.575590 / 0.226044 (0.349546) | 5.644179 / 2.268929 (3.375250) | 3.021375 / 55.444624 (-52.423249) | 2.595305 / 6.876477 (-4.281172) | 2.839228 / 2.142072 (0.697156) | 0.657148 / 4.805227 (-4.148079) | 0.144831 / 6.500664 (-6.355834) | 0.067882 / 0.075469 (-0.007587) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.595580 / 1.841788 (-0.246208) | 22.431609 / 8.074308 (14.357301) | 15.700845 / 10.191392 (5.509453) | 0.164675 / 0.680424 (-0.515749) | 0.021322 / 0.534201 (-0.512879) | 0.455270 / 0.579283 (-0.124013) | 0.451547 / 0.434364 (0.017183) | 0.520955 / 0.540337 (-0.019383) | 0.687803 / 1.386936 (-0.699133) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7d19574e9f44bd3b59a3e47ca7c4ea66305a8e6b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008171 / 0.011353 (-0.003182) | 0.005563 / 0.011008 (-0.005445) | 0.102265 / 0.038508 (0.063757) | 0.074755 / 0.023109 (0.051646) | 0.431317 / 0.275898 (0.155419) | 0.472179 / 0.323480 (0.148699) | 0.006153 / 0.007986 (-0.001833) | 0.003832 / 0.004328 (-0.000496) | 0.078480 / 0.004250 (0.074230) | 0.056250 / 0.037052 (0.019197) | 0.432938 / 0.258489 (0.174449) | 0.480983 / 0.293841 (0.187142) | 0.048861 / 0.128546 (-0.079685) | 0.016252 / 0.075646 (-0.059394) | 0.343508 / 0.419271 (-0.075763) | 0.065057 / 0.043533 (0.021524) | 0.468418 / 0.255139 (0.213279) | 0.463692 / 0.283200 (0.180492) | 0.032912 / 0.141683 (-0.108771) | 1.795194 / 1.452155 (0.343039) | 1.833047 / 1.492716 (0.340331) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197980 / 0.018006 (0.179974) | 0.500662 / 0.000490 (0.500172) | 0.007380 / 0.000200 (0.007181) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028323 / 0.037411 (-0.009089) | 0.089817 / 0.014526 (0.075291) | 0.102923 / 0.176557 (-0.073633) | 0.173851 / 0.737135 (-0.563284) | 0.104006 / 0.296338 (-0.192333) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.580277 / 0.215209 (0.365068) | 5.878739 / 2.077655 (3.801085) | 2.404673 / 1.504120 (0.900553) | 2.071765 / 1.541195 (0.530571) | 2.106024 / 1.468490 (0.637534) | 0.855217 / 4.584777 (-3.729560) | 4.918602 / 3.745712 (1.172890) | 5.354984 / 5.269862 (0.085122) | 3.141288 / 4.565676 (-1.424389) | 0.099553 / 0.424275 (-0.324723) | 0.008152 / 0.007607 (0.000545) | 0.709857 / 0.226044 (0.483813) | 7.144602 / 2.268929 (4.875673) | 3.137637 / 55.444624 (-52.306987) | 2.379851 / 6.876477 (-4.496626) | 2.346426 / 2.142072 (0.204353) | 1.033416 / 4.805227 (-3.771811) | 0.213120 / 6.500664 (-6.287544) | 0.076037 / 0.075469 (0.000568) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.597742 / 1.841788 (-0.244046) | 21.745366 / 8.074308 (13.671058) | 20.830698 / 10.191392 (10.639306) | 0.238727 / 0.680424 (-0.441697) | 0.027923 / 0.534201 (-0.506278) | 0.466073 / 0.579283 (-0.113210) | 0.548647 / 0.434364 (0.114283) | 0.549245 / 0.540337 
(0.008908) | 0.977148 / 1.386936 (-0.409788) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008252 / 0.011353 (-0.003101) | 0.004653 / 0.011008 (-0.006356) | 0.084012 / 0.038508 (0.045504) | 0.077418 / 0.023109 (0.054309) | 0.440748 / 0.275898 (0.164850) | 0.464279 / 0.323480 (0.140799) | 0.005762 / 0.007986 (-0.002224) | 0.004909 / 0.004328 (0.000581) | 0.086441 / 0.004250 (0.082190) | 0.057883 / 0.037052 (0.020831) | 0.466655 / 0.258489 (0.208166) | 0.479751 / 0.293841 (0.185910) | 0.047166 / 0.128546 (-0.081380) | 0.014480 / 0.075646 (-0.061166) | 0.092599 / 0.419271 (-0.326672) | 0.062454 / 0.043533 (0.018921) | 0.449753 / 0.255139 (0.194614) | 0.461876 / 0.283200 (0.178676) | 0.034828 / 0.141683 (-0.106855) | 1.752249 / 1.452155 (0.300095) | 1.865449 / 1.492716 (0.372732) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245028 / 0.018006 (0.227022) | 0.509564 / 0.000490 (0.509074) | 0.003930 / 0.000200 (0.003730) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034746 / 0.037411 (-0.002665) | 0.096563 / 0.014526 (0.082037) | 0.107581 / 0.176557 (-0.068975) | 0.184952 / 0.737135 (-0.552184) | 0.108747 / 0.296338 (-0.187591) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.613091 / 0.215209 (0.397882) | 5.994985 / 2.077655 (3.917330) | 2.711276 / 1.504120 (1.207156) | 2.415862 / 1.541195 (0.874668) | 2.391055 / 
1.468490 (0.922565) | 0.868723 / 4.584777 (-3.716054) | 4.953992 / 3.745712 (1.208280) | 4.606542 / 5.269862 (-0.663319) | 2.942162 / 4.565676 (-1.623515) | 0.102737 / 0.424275 (-0.321538) | 0.008634 / 0.007607 (0.001027) | 0.722122 / 0.226044 (0.496078) | 7.245097 / 2.268929 (4.976168) | 3.428232 / 55.444624 (-52.016393) | 2.709539 / 6.876477 (-4.166938) | 2.857956 / 2.142072 (0.715884) | 1.045594 / 4.805227 (-3.759634) | 0.213344 / 6.500664 (-6.287320) | 0.073601 / 0.075469 (-0.001868) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.651954 / 1.841788 (-0.189834) | 22.458646 / 8.074308 (14.384338) | 19.583203 / 10.191392 (9.391811) | 0.246932 / 0.680424 (-0.433492) | 0.025730 / 0.534201 (-0.508471) | 0.473475 / 0.579283 (-0.105808) | 0.521411 / 0.434364 (0.087047) | 0.562038 / 0.540337 (0.021700) | 0.767673 / 1.386936 (-0.619263) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3869d99628329c696f6975377f65e625dd8ef3e0 \"CML watermark\")\n", "The CI error is unrelated.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006649 / 0.011353 (-0.004703) | 0.003963 / 0.011008 (-0.007045) | 0.084564 / 0.038508 (0.046056) | 0.075668 / 0.023109 (0.052559) | 0.314233 / 0.275898 (0.038335) | 0.343320 / 0.323480 (0.019841) | 0.005405 / 0.007986 (-0.002581) | 0.003356 / 0.004328 (-0.000973) | 0.065094 / 0.004250 (0.060844) | 0.058774 / 0.037052 (0.021722) | 0.320772 / 0.258489 (0.062283) | 0.353546 / 0.293841 (0.059705) | 0.030921 / 0.128546 (-0.097625) | 0.008463 / 0.075646 (-0.067184) | 0.287490 / 0.419271 (-0.131781) | 0.053188 / 0.043533 (0.009656) | 0.324023 / 0.255139 (0.068884) | 0.337828 / 0.283200 (0.054628) | 0.024764 / 0.141683 (-0.116918) | 1.458028 / 1.452155 (0.005873) | 1.521615 / 1.492716 (0.028899) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209360 / 0.018006 (0.191353) | 0.461331 / 0.000490 
(0.460841) | 0.000386 / 0.000200 (0.000186) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028405 / 0.037411 (-0.009006) | 0.081074 / 0.014526 (0.066548) | 0.094868 / 0.176557 (-0.081689) | 0.151050 / 0.737135 (-0.586085) | 0.095854 / 0.296338 (-0.200484) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.393957 / 0.215209 (0.178748) | 3.938649 / 2.077655 (1.860994) | 1.938190 / 1.504120 (0.434070) | 1.766458 / 1.541195 (0.225263) | 1.818028 / 1.468490 (0.349538) | 0.483926 / 4.584777 (-4.100851) | 3.641957 / 3.745712 (-0.103755) | 4.883845 / 5.269862 (-0.386016) | 2.960300 / 4.565676 (-1.605377) | 0.057227 / 0.424275 (-0.367048) | 0.007285 / 0.007607 (-0.000322) | 0.475928 / 0.226044 (0.249884) | 4.756757 / 2.268929 (2.487828) | 2.502659 / 55.444624 (-52.941966) | 2.178067 / 6.876477 (-4.698410) | 2.378298 / 2.142072 (0.236226) | 0.578639 / 4.805227 (-4.226588) | 0.132512 / 6.500664 (-6.368152) | 0.059656 / 0.075469 (-0.015813) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272673 / 1.841788 (-0.569115) | 19.266884 / 8.074308 (11.192576) | 14.272930 / 10.191392 (4.081538) | 0.165897 / 0.680424 (-0.514527) | 0.018436 / 0.534201 (-0.515765) | 0.395177 / 0.579283 (-0.184107) | 0.420134 / 0.434364 (-0.014229) | 0.460781 / 0.540337 (-0.079557) | 0.645376 / 1.386936 (-0.741560) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006504 / 0.011353 (-0.004849) | 0.003942 / 0.011008 (-0.007066) | 0.064936 / 0.038508 (0.026428) | 0.075015 / 0.023109 (0.051905) | 0.396871 / 0.275898 (0.120973) | 0.423448 / 0.323480 (0.099968) | 0.005239 / 0.007986 (-0.002747) | 0.003265 / 0.004328 (-0.001063) | 0.064910 / 0.004250 (0.060660) | 0.055006 / 0.037052 (0.017953) | 0.392818 / 0.258489 (0.134329) | 0.429735 / 0.293841 (0.135894) | 0.031847 / 0.128546 (-0.096699) | 0.008626 / 0.075646 (-0.067021) | 0.071591 / 0.419271 (-0.347681) | 0.049006 / 0.043533 (0.005473) | 0.384913 / 0.255139 (0.129774) | 0.408969 / 0.283200 (0.125769) | 0.023573 / 0.141683 (-0.118110) | 1.490271 / 1.452155 (0.038117) | 1.564620 / 1.492716 (0.071904) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225917 / 0.018006 (0.207911) | 0.450369 / 0.000490 (0.449880) | 0.000375 / 0.000200 (0.000175) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031196 / 0.037411 (-0.006215) | 0.090486 / 0.014526 (0.075960) | 0.102326 / 0.176557 (-0.074231) | 0.157483 / 0.737135 (-0.579653) | 0.103670 / 0.296338 (-0.192668) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417577 / 0.215209 (0.202368) | 4.170798 / 2.077655 (2.093143) | 2.123689 / 1.504120 (0.619569) | 1.948231 / 1.541195 (0.407037) | 2.040277 / 1.468490 (0.571787) | 0.497919 / 4.584777 (-4.086858) | 3.633270 / 3.745712 (-0.112442) | 4.851698 / 5.269862 (-0.418164) | 2.691992 / 4.565676 (-1.873684) | 0.058641 / 0.424275 (-0.365634) | 0.007719 / 0.007607 (0.000112) | 0.500652 / 0.226044 (0.274607) | 4.988657 / 2.268929 (2.719728) | 2.604488 / 55.444624 (-52.840136) | 2.329829 / 6.876477 (-4.546648) | 2.468239 / 2.142072 (0.326167) | 0.598724 / 4.805227 (-4.206503) | 0.135959 / 6.500664 (-6.364706) | 0.061088 / 0.075469 (-0.014381) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352107 / 1.841788 (-0.489681) | 19.973976 / 8.074308 (11.899668) | 14.292812 / 10.191392 (4.101420) | 0.163855 / 0.680424 (-0.516568) | 0.018402 / 0.534201 (-0.515799) | 0.393128 / 0.579283 (-0.186155) | 0.407379 / 0.434364 (-0.026985) | 0.462324 / 0.540337 (-0.078013) | 0.607501 / 1.386936 (-0.779435) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ae126ac974cad3050f90106e5909232140786811 \"CML watermark\")\n" ]
2023-07-24T12:33:58
2023-07-24T14:13:20
2023-07-24T14:04:36
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6061", "html_url": "https://github.com/huggingface/datasets/pull/6061", "diff_url": "https://github.com/huggingface/datasets/pull/6061.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6061.patch", "merged_at": "2023-07-24T14:04:36" }
Adds support for dill 3.7.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6061/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6060/comments
https://api.github.com/repos/huggingface/datasets/issues/6060/events
https://github.com/huggingface/datasets/issues/6060
1,816,614,120
I_kwDODunzps5sR1To
6,060
Dataset.map() executes twice when in PyTorch DDP mode
{ "login": "wanghaoyucn", "id": 39429965, "node_id": "MDQ6VXNlcjM5NDI5OTY1", "avatar_url": "https://avatars.githubusercontent.com/u/39429965?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wanghaoyucn", "html_url": "https://github.com/wanghaoyucn", "followers_url": "https://api.github.com/users/wanghaoyucn/followers", "following_url": "https://api.github.com/users/wanghaoyucn/following{/other_user}", "gists_url": "https://api.github.com/users/wanghaoyucn/gists{/gist_id}", "starred_url": "https://api.github.com/users/wanghaoyucn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wanghaoyucn/subscriptions", "organizations_url": "https://api.github.com/users/wanghaoyucn/orgs", "repos_url": "https://api.github.com/users/wanghaoyucn/repos", "events_url": "https://api.github.com/users/wanghaoyucn/events{/privacy}", "received_events_url": "https://api.github.com/users/wanghaoyucn/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Sorry for asking a duplicate question about `num_proc`, I searched the forum and find the solution.\r\n\r\nBut I still can't make the trick with `torch.distributed.barrier()` to only map at the main process work. The [post on forum]( https://discuss.huggingface.co/t/slow-processing-with-map-when-using-deepspeed-or-fairscale/7229/7) didn't help.", "If it does the `map` twice then it means the hash of your map function is not some same between your two processes.\r\n\r\nCan you make sure your map functions have the same hash in different processes ?\r\n\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\nprint(Hasher.hash(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=True)))\r\nprint(Hasher.hash(lambda x: random_shift(x, shift_range=(-160, 0), feature_scale=16)))\r\n```\r\n\r\nYou can also set the fingerprint used to reload the resulting dataset by passing `new_finegrprint=` in `map`, see https://huggingface.co/docs/datasets/v2.13.1/en/about_cache#the-cache. This will force the different processes to use the same fingerprint used to locate the resulting dataset in the cache.", "Thanks for help! I find the fingerprint between processes don't have same hash:\r\n```\r\nRank 0: Gpu 0 cut_reorder_keys fingerprint c7f47f40e9a67657\r\nRank 0: Gpu 0 random_shift fingerprint 240a0ce79831e7d4\r\n\r\nRank 1: Gpu 1 cut_reorder_keys fingerprint 20edd3d9cf284001\r\nRank 1: Gpu 1 random_shift fingerprint 819f7c1c18e7733f\r\n```\r\nBut my functions only process the example one by one and don't need rank or other arguments. After all it can work in the test for dataset and dataloader.\r\nI'll try to set `new_fingerprint` to see if it works and figure out the reason of different hash." ]
2023-07-22T05:06:43
2023-07-24T19:29:55
null
NONE
null
null
null
### Describe the bug I use `torchrun --standalone --nproc_per_node=2 train.py` to start training, and I wrote the code following the [docs](https://huggingface.co/docs/datasets/process#distributed-usage). The trick of using `torch.distributed.barrier()` to execute `map` only on the main process doesn't always work. When I am training a model, it maps twice. When I run a test of the dataset and dataloader (just printing the batches), it works, even though the dataset-loading code is the same in both cases. On another server with 30 CPU cores, I use 2 GPUs and it doesn't work either. I have tried checking `rank` and `local_rank`, but neither explained the behavior. ### Steps to reproduce the bug Run with `torchrun --standalone --nproc_per_node=2 train.py` or `torchrun --standalone train.py`. This is my code: ```python if args.distributed and world_size > 1: if args.local_rank > 0: print(f"Rank {args.rank}: Gpu {args.gpu} waiting for main process to perform the mapping", force=True) torch.distributed.barrier() print("Mapping dataset") dataset = dataset.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=True), num_proc=8, desc="cut_reorder_keys") dataset = dataset.map(lambda x: random_shift(x, shift_range=(-160, 0), feature_scale=16), num_proc=8, desc="random_shift") dataset_test = dataset_test.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=False), num_proc=8, desc="cut_reorder_keys") if args.local_rank == 0: print("Mapping finished, loading results from main process") torch.distributed.barrier() ``` ### Expected behavior Only the main process should execute `map`, while the other processes load the cache from disk. ### Environment info server with 64 CPU cores (AMD Ryzen Threadripper PRO 5995WX 64-Cores) and 2 RTX 4090 - `python==3.9.16` - `datasets==2.13.1` - `torch==2.0.1+cu117` - `22.04.1-Ubuntu` server with 30 CPU cores (Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz) and 2 RTX 4090 - `python==3.9.0` - `datasets==2.13.1` - `torch==2.0.1+cu117` - `Ubuntu 20.04`
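A minimal sketch of the workaround suggested in the comments above: replace the inline lambdas with `functools.partial` over module-level functions so every rank computes the same hash, and pin the cache entry explicitly with `new_fingerprint`. The data file and the preprocessing body below are placeholders, not taken from the original report:

```python
import functools

import torch.distributed as dist
from datasets import load_dataset


def cut_reorder_keys(example, num_stations_list=None, is_pad=True, is_train=True):
    # placeholder for the real preprocessing; a module-level function hashes
    # more deterministically across processes than a lambda defined inline
    return example


if __name__ == "__main__":
    dataset = load_dataset("json", data_files="train.jsonl", split="train")  # hypothetical file
    map_fn = functools.partial(
        cut_reorder_keys, num_stations_list=[8, 16], is_pad=True, is_train=True
    )
    if dist.is_initialized() and dist.get_rank() > 0:
        dist.barrier()  # non-main ranks wait until the main process has built the cache
    dataset = dataset.map(
        map_fn,
        num_proc=8,
        new_fingerprint="cut_reorder_keys-v1",  # same cache key on every rank
        desc="cut_reorder_keys",
    )
    if dist.is_initialized() and dist.get_rank() == 0:
        dist.barrier()  # main process releases the waiting ranks
```

Whether `functools.partial` alone yields identical hashes on every rank still depends on how its arguments pickle, so `new_fingerprint` is the more forceful of the two levers.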
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6060/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6060/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6059/comments
https://api.github.com/repos/huggingface/datasets/issues/6059/events
https://github.com/huggingface/datasets/issues/6059
1,816,537,176
I_kwDODunzps5sRihY
6,059
Provide ability to load label mappings from file
{ "login": "david-waterworth", "id": 5028974, "node_id": "MDQ6VXNlcjUwMjg5NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david-waterworth", "html_url": "https://github.com/david-waterworth", "followers_url": "https://api.github.com/users/david-waterworth/followers", "following_url": "https://api.github.com/users/david-waterworth/following{/other_user}", "gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}", "starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions", "organizations_url": "https://api.github.com/users/david-waterworth/orgs", "repos_url": "https://api.github.com/users/david-waterworth/repos", "events_url": "https://api.github.com/users/david-waterworth/events{/privacy}", "received_events_url": "https://api.github.com/users/david-waterworth/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2023-07-22T02:04:19
2023-07-22T02:04:19
null
NONE
null
null
null
### Feature request My task is classification of a dataset containing a large label set that includes a hierarchy. Even ignoring the hierarchy, I'm not able to find an example using `datasets` where the label names aren't hard-coded. This works fine for classification with a handful of labels, but ideally there would be a way of loading the name/id mappings required for `datasets.features.ClassLabel` from a file. It is possible to pass a file to ClassLabel, but I cannot see an easy way of using this with `GeneratorBasedBuilder` since `self._info` is called before the `dl_manager` is constructed, so even if my dataset contains, say, `label_mappings.json`, there's no way of loading it in order to construct the `datasets.DatasetInfo`. I can see other uses for accessing the `download_manager` from `self._info` - e.g. if the files contain a schema (`arrow` or `parquet` files), the `datasets.DatasetInfo` could be inferred. The workaround that was suggested in the forum is to generate a `.py` file from the `label_mappings.json` and import it. ``` class TestDatasetBuilder(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("1.0.0") def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, features=datasets.Features( { "text": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["label_1", "label_2"]), } ), task_templates=[TextClassification(text_column="text", label_column="label")], ) def _split_generators(self, dl_manager): train_path = dl_manager.download_and_extract(_TRAIN_DOWNLOAD_URL) test_path = dl_manager.download_and_extract(_TEST_DOWNLOAD_URL) return [ datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}), datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path}), ] def _generate_examples(self, filepath): """Generate AG News examples.""" with open(filepath, encoding="utf-8") as csv_file: csv_reader = csv.DictReader(csv_file) for id_, row in enumerate(csv_reader): yield id_, row ``` ### Motivation Allow `datasets.DatasetInfo` to be generated based on the contents of the dataset. ### Your contribution I'm willing to work on a PR with guidance.
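One workaround that stays inside the current API (a sketch, untested, with hypothetical paths) is to read the label file eagerly in `_info` from a path relative to the builder script, bypassing the download manager entirely:

```python
import json
import os

import datasets

# hypothetical file that ships alongside the loading script
_LABELS_PATH = os.path.join(os.path.dirname(__file__), "label_mappings.json")


class TestDatasetBuilder(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        with open(_LABELS_PATH, encoding="utf-8") as f:
            label_names = json.load(f)  # assumed to be a flat list of label strings
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "text": datasets.Value("string"),
                    "label": datasets.features.ClassLabel(names=label_names),
                }
            ),
        )
```

This only helps when the mapping file lives next to the loading script rather than inside the downloaded data, which is exactly the limitation the feature request is about.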
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6059/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6059/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6058/comments
https://api.github.com/repos/huggingface/datasets/issues/6058/events
https://github.com/huggingface/datasets/issues/6058
1,815,131,397
I_kwDODunzps5sMLUF
6,058
laion-coco download error
{ "login": "yangyijune", "id": 54424110, "node_id": "MDQ6VXNlcjU0NDI0MTEw", "avatar_url": "https://avatars.githubusercontent.com/u/54424110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yangyijune", "html_url": "https://github.com/yangyijune", "followers_url": "https://api.github.com/users/yangyijune/followers", "following_url": "https://api.github.com/users/yangyijune/following{/other_user}", "gists_url": "https://api.github.com/users/yangyijune/gists{/gist_id}", "starred_url": "https://api.github.com/users/yangyijune/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangyijune/subscriptions", "organizations_url": "https://api.github.com/users/yangyijune/orgs", "repos_url": "https://api.github.com/users/yangyijune/repos", "events_url": "https://api.github.com/users/yangyijune/events{/privacy}", "received_events_url": "https://api.github.com/users/yangyijune/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This can also mean one of the files was not downloaded correctly.\r\n\r\nWe log an erroneous file's name before raising the reader's error, so this is how you can find the problematic file. Then, you should delete it and call `load_dataset` again.\r\n\r\n(I checked all the uploaded files, and they seem to be valid Parquet files, so I don't think this is a bug on their side)\r\n" ]
2023-07-21T04:24:15
2023-07-22T01:42:06
2023-07-22T01:42:06
NONE
null
null
null
### Describe the bug The full trace: ``` /home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py:1744: FutureWarning: 'ignore_verifications' was deprecated in favor of 'verification_mode' in version 2.9.1 and will be removed in 3.0.0. You can remove this warning by passing 'verification_mode=no_checks' instead. warnings.warn( Downloading and preparing dataset parquet/laion--laion-coco to /home/bian/.cache/huggingface/datasets/laion___parquet/laion--laion-coco-cb4205d7f1863066/0.0.0/bcacc8bdaa0614a5d73d0344c813275e590940c6ea8bc569da462847103a1afd... Downloading data: 100%|█| 1.89G/1.89G [04:57<00:00, Downloading data files: 100%|█| 1/1 [04:59<00:00, 2 Extracting data files: 100%|█| 1/1 [00:00<00:00, 13 Generating train split: 0 examples [00:00, ? examples/s]<_io.BufferedReader name='/home/bian/.cache/huggingface/datasets/downloads/26d7a016d25bbd9443115cfa3092136e8eb2f1f5bcd41540cb9234572927f04c'> Traceback (most recent call last): File "/home/bian/data/ZOC/download_laion_coco.py", line 4, in <module> dataset = load_dataset("laion/laion-coco", ignore_verifications=True) File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1842, in _prepare_split_single generator = self._generate_tables(**gen_kwargs) File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in _generate_tables parquet_file = pq.ParquetFile(f) File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/pyarrow/parquet/core.py", line 323, in __init__ self.reader.open( File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` I have carefully followed the instructions in #5264 but still get the same error. Other helpful information: ``` ds = load_dataset("parquet", data_files="https://huggingface.co/datasets/laion/laion-coco/resolve/d22869de3ccd39dfec1507f7ded32e4a518dad24/part-00000-2256f782-126f-4dc6-b9c6-e6757637749d-c000.snappy.parquet") Found cached dataset parquet (/home/bian/.cache/huggingface/datasets/parquet/default-a02eea00aeb08b0e/0.0.0/bb8ccf89d9ee38581ff5e51506d721a9b37f14df8090dc9b2d8fb4a40957833f) 100%|██████████████| 1/1 [00:00<00:00, 4.55it/s] ``` ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("laion/laion-coco", ignore_verifications=True/False) ``` ### Expected behavior Properly load Laion-coco dataset ### Environment info datasets==2.11.0 torch==1.12.1 python 3.10
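Following the suggestion in the comment above (find the corrupted download, delete it, and call `load_dataset` again), a minimal sketch that scans the default download cache for truncated Parquet files by checking the `PAR1` magic bytes; the cache path is the default one and should be adjusted if `HF_HOME` or `cache_dir` is customized:

```python
import pathlib

DOWNLOADS = pathlib.Path.home() / ".cache" / "huggingface" / "datasets" / "downloads"


def is_truncated_parquet(path: pathlib.Path) -> bool:
    """True if the file starts with the Parquet magic but lacks the footer magic."""
    with open(path, "rb") as f:
        if f.read(4) != b"PAR1":
            return False  # not a Parquet file at all, leave it alone
        f.seek(-4, 2)  # jump to the last four bytes
        return f.read(4) != b"PAR1"  # missing footer => interrupted download


for path in DOWNLOADS.glob("*"):
    if path.is_file() and path.stat().st_size >= 8 and is_truncated_parquet(path):
        print(f"Deleting truncated download: {path}")
        path.unlink()
```

After the cleanup, re-running `load_dataset("laion/laion-coco")` should re-download only the deleted file, since everything else stays cached.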
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6058/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6058/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6057
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6057/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6057/comments
https://api.github.com/repos/huggingface/datasets/issues/6057/events
https://github.com/huggingface/datasets/issues/6057
1,815,100,151
I_kwDODunzps5sMDr3
6,057
Why does the speed of generating examples differ so much?
{ "login": "pixeli99", "id": 46072190, "node_id": "MDQ6VXNlcjQ2MDcyMTkw", "avatar_url": "https://avatars.githubusercontent.com/u/46072190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pixeli99", "html_url": "https://github.com/pixeli99", "followers_url": "https://api.github.com/users/pixeli99/followers", "following_url": "https://api.github.com/users/pixeli99/following{/other_user}", "gists_url": "https://api.github.com/users/pixeli99/gists{/gist_id}", "starred_url": "https://api.github.com/users/pixeli99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pixeli99/subscriptions", "organizations_url": "https://api.github.com/users/pixeli99/orgs", "repos_url": "https://api.github.com/users/pixeli99/repos", "events_url": "https://api.github.com/users/pixeli99/events{/privacy}", "received_events_url": "https://api.github.com/users/pixeli99/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi!\r\n\r\nIt's hard to explain this behavior without more information. Can you profile the slower version with the following code\r\n```python\r\nimport cProfile, pstats\r\nfrom datasets import load_dataset\r\n\r\nwith cProfile.Profile() as profiler:\r\n ds = load_dataset(...)\r\n\r\nstats = pstats.Stats(profiler).sort_stats(\"cumtime\")\r\nstats.print_stats()\r\n```\r\nand share the output?" ]
2023-07-21T03:34:49
2023-07-21T16:41:09
null
NONE
null
null
null
```python def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir): with open(metadata_path, 'r') as file: metadata = json.load(file) for idx, item in enumerate(metadata): image_path = item.get('image_path') text_content = item.get('text_content') image_data = open(image_path, "rb").read() yield idx, { "text": text_content, "image": { "path": image_path, "bytes": image_data, }, "conditioning_image": { "path": image_path, "bytes": image_data, }, } ``` Hello, I use the above function to process my local dataset, but I am very surprised at how much the example-generation speed varies. When I start a training task, **it sometimes runs at 1000 examples/s and sometimes at only 10 examples/s.** ![image](https://github.com/huggingface/datasets/assets/46072190/cdc17661-8267-4fd8-b30c-b74d505efd9b) The speed isn't fluctuating within a single run; rather, the reading speed differs between training runs, which forces me to restart training over and over until example generation reaches a normal speed.
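Alongside the profiling suggested in the comment above, a small timing sketch can tell whether the variance comes from raw disk I/O (e.g. a cold page cache between runs) rather than from `datasets` itself; the `metadata.json` path mirrors the snippet above, and the field name is the same assumption:

```python
import json
import time

with open("metadata.json", encoding="utf-8") as f:  # hypothetical metadata file
    metadata = json.load(f)

sample = metadata[:1000]
t0 = time.perf_counter()
total_bytes = 0
for item in sample:
    with open(item["image_path"], "rb") as img:  # same field name as the snippet above
        total_bytes += len(img.read())
elapsed = time.perf_counter() - t0
print(f"read {total_bytes / 1e6:.1f} MB in {elapsed:.2f}s "
      f"({len(sample) / elapsed:.0f} examples/s)")
```

If this loop alone shows the same 100x spread between runs, the bottleneck is the storage layer, not example generation.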
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6057/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6057/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6056/comments
https://api.github.com/repos/huggingface/datasets/issues/6056/events
https://github.com/huggingface/datasets/pull/6056
1,815,086,963
PR_kwDODunzps5WD4RY
6,056
Implement proper checkpointing for dataset uploading with resume function that does not require remapping shards that have already been uploaded
{ "login": "AntreasAntoniou", "id": 10792502, "node_id": "MDQ6VXNlcjEwNzkyNTAy", "avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AntreasAntoniou", "html_url": "https://github.com/AntreasAntoniou", "followers_url": "https://api.github.com/users/AntreasAntoniou/followers", "following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}", "gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}", "starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions", "organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs", "repos_url": "https://api.github.com/users/AntreasAntoniou/repos", "events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}", "received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6056). All of your documentation changes will be reflected on that endpoint.", "@lhoestq Reading the filenames is something I tried earlier, but I decided to use the yaml direction because:\r\n\r\n1. The yaml file name is constructed to retain information about the shard_size, and total number of shards, hence ensuring that the files uploaded are not just files that have the same name but actually represent a different configuration of shard_size, and total number of shards. \r\n2. Remembering the total file size is done easily in the yaml, whereas alternatively I am not sure how one could access the file size of the uploaded files without downloading them.\r\n3. I also had an issue earlier with the hashes not being consistent with which the yaml helped -- but this is no longer an issue as I found a way around it. \r\n\r\nIf 1 and 2 can be achieved without an additional yaml, then I would be willing to make those changes. Let me know of any ideas. 1. could be done by changing the data file names, but I'd rather not do that as to prevent breaking existing datasets that try to upload updates to their data. ", "If the file name depends on the shard's fingerprint **before** mapping then we can know if a shard has been uploaded before mapping and without requiring an extra YAML file. It should do the job imo\r\n\r\n> I also had an issue earlier with the hashes not being consistent with which the yaml helped -- but this is no longer an issue as I found a way around it.\r\n\r\nwhat was the issue ?", "> If the file name depends on the shard's fingerprint **before** mapping then we can know if a shard has been uploaded before mapping and without requiring an extra YAML file. It should do the job imo\r\n> \r\n> > I also had an issue earlier with the hashes not being consistent with which the yaml helped -- but this is no longer an issue as I found a way around it.\r\n> \r\n> what was the issue ?\r\n\r\nYou are right. I was having some other issue earlier that I need more input from you guys to overcome, and when I overcame it the yaml was just legacy from before. I'll update the PR. ", "> If the file name depends on the shard's fingerprint **before** mapping then we can know if a shard has been uploaded before mapping and without requiring an extra YAML file. It should do the job imo\r\n> \r\n> > I also had an issue earlier with the hashes not being consistent with which the yaml helped -- but this is no longer an issue as I found a way around it.\r\n> \r\n> what was the issue ?\r\n\r\nI remembered what it was, and why I needed the yaml. I needed it so it could remember the progress for a particular num_shards setup, as different num_shards would lead to different number of splits, and a user might switch between them while uploading, and I did not want the index to be conflated with one of another num_shards setup. \r\n\r\nAny idea how we deal with that without a yaml?", "If the user changes the num_shards parameters then we should re-upload everything.\r\n\r\nIt happens that the num_shards is part of the parquet file names, so it restarts the upload from scratch without having to write additional logic :)" ]
2023-07-21T03:13:21
2023-08-17T08:26:53
null
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6056", "html_url": "https://github.com/huggingface/datasets/pull/6056", "diff_url": "https://github.com/huggingface/datasets/pull/6056.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6056.patch", "merged_at": null }
Context: issue #5990 In order to implement the checkpointing, I introduce a metadata folder that keeps one YAML file for each set being uploaded. This YAML keeps track of which shards have already been uploaded and the index of the latest one. Using this information, the push_to_hub function can retrieve the upload history on demand and continue mapping and uploading from where it left off.
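A sketch of what such a per-split checkpoint file could look like; the field names and paths below are hypothetical illustrations of the description above, not the PR's actual schema:

```python
import os

import yaml  # pyyaml

CHECKPOINT = "metadata/train.yaml"  # hypothetical location, one file per split

progress = {
    "num_shards": 128,          # so a changed sharding config invalidates the record
    "max_shard_size": "500MB",
    "uploaded_shard_indices": [0, 1, 2, 3],
    "latest_shard_idx": 3,
}

os.makedirs(os.path.dirname(CHECKPOINT), exist_ok=True)
with open(CHECKPOINT, "w") as f:
    yaml.safe_dump(progress, f)

# on resume: skip every shard at or below the recorded index
with open(CHECKPOINT) as f:
    resume_from = yaml.safe_load(f)["latest_shard_idx"] + 1
print(f"resuming upload from shard {resume_from}")
```

Recording the shard configuration alongside the indices is what lets the resume logic refuse stale checkpoints, which is the concern raised in the review discussion above.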
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6056/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6056/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6055/comments
https://api.github.com/repos/huggingface/datasets/issues/6055/events
https://github.com/huggingface/datasets/issues/6055
1,813,524,145
I_kwDODunzps5sGC6x
6,055
Fix host URL in The Pile datasets
{ "login": "nickovchinnikov", "id": 7540752, "node_id": "MDQ6VXNlcjc1NDA3NTI=", "avatar_url": "https://avatars.githubusercontent.com/u/7540752?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nickovchinnikov", "html_url": "https://github.com/nickovchinnikov", "followers_url": "https://api.github.com/users/nickovchinnikov/followers", "following_url": "https://api.github.com/users/nickovchinnikov/following{/other_user}", "gists_url": "https://api.github.com/users/nickovchinnikov/gists{/gist_id}", "starred_url": "https://api.github.com/users/nickovchinnikov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nickovchinnikov/subscriptions", "organizations_url": "https://api.github.com/users/nickovchinnikov/orgs", "repos_url": "https://api.github.com/users/nickovchinnikov/repos", "events_url": "https://api.github.com/users/nickovchinnikov/events{/privacy}", "received_events_url": "https://api.github.com/users/nickovchinnikov/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-07-20T09:08:52
2023-07-20T09:09:37
null
NONE
null
null
null
### Describe the bug In #3627 and #5543, you tried to fix the host URL in The Pile datasets, but neither URL is working now: `HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst` and `ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))` ### Steps to reproduce the bug ``` from datasets import load_dataset # This takes a few minutes to run, so go grab a tea or coffee while you wait :) data_files = "https://mystic.the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst" pubmed_dataset = load_dataset("json", data_files=data_files, split="train") pubmed_dataset ``` Result: `ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))` And ``` from datasets import load_dataset # This takes a few minutes to run, so go grab a tea or coffee while you wait :) data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst" pubmed_dataset = load_dataset("json", data_files=data_files, split="train") pubmed_dataset ``` Result: `HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst` ### Expected behavior The files download as normal. ### Environment info `datasets` version: 2.9.0 Platform: Windows Python version: 3.9.13
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6055/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6054
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6054/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6054/comments
https://api.github.com/repos/huggingface/datasets/issues/6054/events
https://github.com/huggingface/datasets/issues/6054
1,813,271,304
I_kwDODunzps5sFFMI
6,054
Multi-processed `Dataset.map` slows down a lot when `import torch`
{ "login": "ShinoharaHare", "id": 47121592, "node_id": "MDQ6VXNlcjQ3MTIxNTky", "avatar_url": "https://avatars.githubusercontent.com/u/47121592?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShinoharaHare", "html_url": "https://github.com/ShinoharaHare", "followers_url": "https://api.github.com/users/ShinoharaHare/followers", "following_url": "https://api.github.com/users/ShinoharaHare/following{/other_user}", "gists_url": "https://api.github.com/users/ShinoharaHare/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShinoharaHare/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShinoharaHare/subscriptions", "organizations_url": "https://api.github.com/users/ShinoharaHare/orgs", "repos_url": "https://api.github.com/users/ShinoharaHare/repos", "events_url": "https://api.github.com/users/ShinoharaHare/events{/privacy}", "received_events_url": "https://api.github.com/users/ShinoharaHare/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
[ "A duplicate of https://github.com/huggingface/datasets/issues/5929" ]
2023-07-20T06:36:14
2023-07-21T15:19:37
2023-07-21T15:19:37
NONE
null
null
null
### Describe the bug When using `Dataset.map` with `num_proc > 1`, the speed slows down a lot if I add `import torch` to the start of the script, even though I don't use it. I'm not sure whether this is specific to `torch` or whether any other "large" package causes the same result. BTW, `import lightning` also slows it down. Below are the progress bars of `Dataset.map`; the only difference between them is the presence of `import torch`, yet the speed differs by a factor of 6-7. - without `import torch` ![image](https://github.com/huggingface/datasets/assets/47121592/0233055a-ced4-424a-9f0f-32a2afd802c2) - with `import torch` ![image](https://github.com/huggingface/datasets/assets/47121592/463eafb7-b81e-4eb9-91ca-fd7fe20f3d59) ### Steps to reproduce the bug Below is the code I used, but I don't think the dataset and the mapping function have much to do with the phenomenon. ```python3 from datasets import load_from_disk, disable_caching from transformers import AutoTokenizer # import torch # import lightning def rearrange_datapoints( batch, tokenizer, sequence_length, ): datapoints = [] input_ids = [] for x in batch['input_ids']: input_ids += x while len(input_ids) >= sequence_length: datapoint = input_ids[:sequence_length] datapoints.append(datapoint) input_ids[:sequence_length] = [] if input_ids: paddings = [-1] * (sequence_length - len(input_ids)) datapoint = paddings + input_ids if tokenizer.padding_side == 'left' else input_ids + paddings datapoints.append(datapoint) batch['input_ids'] = datapoints return batch if __name__ == '__main__': disable_caching() tokenizer = AutoTokenizer.from_pretrained('...', use_fast=False) dataset = load_from_disk('...') dataset = dataset.map( rearrange_datapoints, fn_kwargs=dict( tokenizer=tokenizer, sequence_length=2048, ), batched=True, num_proc=8, ) ``` ### Expected behavior The speed of multi-processed `Dataset.map` should be the same with and without `import torch`. ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - PyArrow version: 12.0.0 - Pandas version: 2.0.1
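One mitigation often suggested for this class of slowdown (see the duplicate issue #5929 linked in the comment above) is to cap the thread pools that importing `torch` brings along, so they don't compete with the eight `map` workers. Whether it helps here depends on the actual root cause tracked in that issue, so treat this as a sketch:

```python
import os

# must be set before torch (and its OpenMP/MKL runtimes) is imported
os.environ.setdefault("OMP_NUM_THREADS", "1")
os.environ.setdefault("MKL_NUM_THREADS", "1")

import torch

torch.set_num_threads(1)  # limit torch's intra-op parallelism in each worker
```

With `num_proc=8`, eight worker processes each inheriting a full-width thread pool can oversubscribe the CPU badly, which is consistent with the 6-7x gap reported above.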
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6054/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6053/comments
https://api.github.com/repos/huggingface/datasets/issues/6053/events
https://github.com/huggingface/datasets/issues/6053
1,812,635,902
I_kwDODunzps5sCqD-
6,053
Change package name from "datasets" to something less generic
{ "login": "geajack", "id": 2124157, "node_id": "MDQ6VXNlcjIxMjQxNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/2124157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/geajack", "html_url": "https://github.com/geajack", "followers_url": "https://api.github.com/users/geajack/followers", "following_url": "https://api.github.com/users/geajack/following{/other_user}", "gists_url": "https://api.github.com/users/geajack/gists{/gist_id}", "starred_url": "https://api.github.com/users/geajack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/geajack/subscriptions", "organizations_url": "https://api.github.com/users/geajack/orgs", "repos_url": "https://api.github.com/users/geajack/repos", "events_url": "https://api.github.com/users/geajack/events{/privacy}", "received_events_url": "https://api.github.com/users/geajack/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2023-07-19T19:53:28
2023-07-19T19:55:04
null
NONE
null
null
null
### Feature request I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and at my most irritable, frankly rude. My preference would be a pattern like what you get with all the other big libraries like numpy or pandas: ``` import huggingface as hf # hf.transformers, hf.datasets, hf.evaluate ``` or things like ``` import huggingface.transformers as tf # tf.load_model(), etc ``` If this isn't possible for some technical reason, at least just call the packages something like `hf_transformers` and so on. I realize this is a very big change that's probably been discussed internally already, but I'm making this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this. Sorry if this has been requested already on this issue tracker, I couldn't find anything looking for terms like "package name". Sister issues: - [transformers](https://github.com/huggingface/transformers/issues/24934) - **datasets** - [evaluate](https://github.com/huggingface/evaluate/issues/476) ### Motivation Not taking up package names the user is likely to want to use. ### Your contribution No - more a matter of internal discussion among core library authors.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6053/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6053/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6052/comments
https://api.github.com/repos/huggingface/datasets/issues/6052/events
https://github.com/huggingface/datasets/pull/6052
1,812,145,100
PR_kwDODunzps5V5yOi
6,052
Remove `HfFileSystem` and deprecate `S3FileSystem`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006658 / 0.011353 (-0.004695) | 0.004347 / 0.011008 (-0.006661) | 0.084179 / 0.038508 (0.045671) | 0.080842 / 0.023109 (0.057733) | 0.321642 / 0.275898 (0.045744) | 0.348758 / 0.323480 (0.025278) | 0.005624 / 0.007986 (-0.002362) | 0.003479 / 0.004328 (-0.000850) | 0.065125 / 0.004250 (0.060875) | 0.057624 / 0.037052 (0.020572) | 0.323643 / 0.258489 (0.065154) | 0.360939 / 0.293841 (0.067098) | 0.031005 / 0.128546 (-0.097541) | 0.008618 / 0.075646 (-0.067028) | 0.287443 / 0.419271 (-0.131828) | 0.052640 / 0.043533 (0.009107) | 0.316947 / 0.255139 (0.061808) | 0.330292 / 0.283200 (0.047093) | 0.024393 / 0.141683 (-0.117289) | 1.476734 / 1.452155 (0.024579) | 1.534505 / 1.492716 (0.041789) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273808 / 0.018006 (0.255802) | 0.591146 / 0.000490 (0.590656) | 0.000322 / 0.000200 (0.000122) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029992 / 0.037411 (-0.007419) | 0.086654 / 0.014526 (0.072129) | 0.098590 / 0.176557 (-0.077967) | 0.157225 / 0.737135 (-0.579910) | 0.101816 / 0.296338 (-0.194522) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382578 / 0.215209 (0.167368) | 3.803576 / 2.077655 (1.725922) | 
1.875136 / 1.504120 (0.371016) | 1.704207 / 1.541195 (0.163012) | 1.765146 / 1.468490 (0.296656) | 0.482802 / 4.584777 (-4.101975) | 3.571772 / 3.745712 (-0.173940) | 3.245626 / 5.269862 (-2.024235) | 2.051612 / 4.565676 (-2.514064) | 0.056539 / 0.424275 (-0.367736) | 0.007199 / 0.007607 (-0.000408) | 0.462445 / 0.226044 (0.236401) | 4.623800 / 2.268929 (2.354872) | 2.318948 / 55.444624 (-53.125677) | 1.971442 / 6.876477 (-4.905035) | 2.225444 / 2.142072 (0.083371) | 0.575205 / 4.805227 (-4.230022) | 0.129243 / 6.500664 (-6.371421) | 0.059036 / 0.075469 (-0.016433) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266827 / 1.841788 (-0.574960) | 20.323419 / 8.074308 (12.249110) | 14.577603 / 10.191392 (4.386210) | 0.162131 / 0.680424 (-0.518293) | 0.018529 / 0.534201 (-0.515672) | 0.395046 / 0.579283 (-0.184237) | 0.410870 / 0.434364 (-0.023494) | 0.455782 / 0.540337 (-0.084556) | 0.662851 / 1.386936 (-0.724085) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006867 / 0.011353 (-0.004486) | 0.004197 / 0.011008 (-0.006811) | 0.066060 / 0.038508 (0.027552) | 0.084145 / 0.023109 (0.061036) | 0.366740 / 0.275898 (0.090842) | 0.402362 / 0.323480 (0.078882) | 0.005785 / 0.007986 (-0.002200) | 0.003551 / 0.004328 (-0.000778) | 0.066177 / 0.004250 (0.061926) | 0.061521 / 0.037052 (0.024468) | 0.377807 / 0.258489 (0.119318) | 0.413490 / 0.293841 (0.119649) | 0.031918 / 0.128546 (-0.096628) | 0.008767 / 0.075646 (-0.066879) | 0.071437 / 0.419271 (-0.347835) | 0.049237 / 0.043533 (0.005704) | 0.365929 / 0.255139 (0.110790) | 0.393545 / 0.283200 (0.110346) | 0.024054 / 0.141683 (-0.117628) | 1.524599 / 1.452155 (0.072445) | 1.576592 / 1.492716 (0.083876) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.315181 / 0.018006 (0.297174) | 0.535501 / 0.000490 (0.535011) | 0.000410 / 0.000200 (0.000210) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032915 / 0.037411 (-0.004497) | 0.089310 / 0.014526 (0.074784) | 0.105136 / 0.176557 (-0.071421) | 0.158572 / 0.737135 (-0.578563) | 0.106850 / 0.296338 (-0.189489) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419343 / 0.215209 (0.204134) | 4.200166 / 2.077655 (2.122511) | 2.180234 / 1.504120 (0.676114) | 2.016885 / 1.541195 (0.475690) | 2.131480 / 1.468490 (0.662990) | 0.484681 / 4.584777 (-4.100096) | 3.613535 / 3.745712 (-0.132177) | 5.762111 / 5.269862 (0.492249) | 3.190590 / 4.565676 (-1.375086) | 0.057403 / 0.424275 (-0.366872) | 0.007862 / 0.007607 (0.000255) | 0.490857 / 0.226044 (0.264813) | 4.911241 / 2.268929 (2.642313) | 2.650787 / 55.444624 (-52.793838) | 2.317060 / 6.876477 (-4.559416) | 2.579677 / 2.142072 (0.437605) | 0.587388 / 4.805227 (-4.217840) | 0.148109 / 6.500664 (-6.352555) | 0.061435 / 0.075469 (-0.014034) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.322181 / 1.841788 (-0.519606) | 20.647184 / 8.074308 (12.572875) | 14.907635 / 10.191392 (4.716243) | 0.156330 / 0.680424 (-0.524094) | 0.018719 / 0.534201 (-0.515482) | 0.397636 / 0.579283 (-0.181647) | 0.414107 / 0.434364 (-0.020257) | 0.460812 / 0.540337 (-0.079526) | 0.609568 / 1.386936 (-0.777368) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#74398c95b81a08a51457a2bef56efb7e608bded2 \"CML watermark\")\n", "This would mean when i update my examples to newer datasets version i need to make a change right? nothing backward breaking? ", "what would be the change i need to make? ", "@philschmid You just need to replace the occurrences of `datasets.filesystems.S3FileSystem` with `s3fs.S3FileSystem`. 
From the moment it was added until now, `datasets.filesystems.S3FileSystem` is a \"dummy\" subclass of `s3fs.S3FileSystem` that only changes its docstring.\r\n\r\n\r\n", "The CI is failing because I updated the YAML validation for https://github.com/huggingface/datasets/pull/6044.\r\nIt will be fixed once https://github.com/huggingface/datasets/pull/6044 is merged", "I just merged the other PR so you should be good now", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006303 / 0.011353 (-0.005049) | 0.003746 / 0.011008 (-0.007262) | 0.081083 / 0.038508 (0.042575) | 0.067973 / 0.023109 (0.044864) | 0.322221 / 0.275898 (0.046323) | 0.359432 / 0.323480 (0.035952) | 0.004891 / 0.007986 (-0.003095) | 0.002988 / 0.004328 (-0.001341) | 0.064068 / 0.004250 (0.059818) | 0.052042 / 0.037052 (0.014990) | 0.323387 / 0.258489 (0.064898) | 0.390416 / 0.293841 (0.096575) | 0.028090 / 0.128546 (-0.100457) | 0.008009 / 0.075646 (-0.067638) | 0.262288 / 0.419271 (-0.156984) | 0.044986 / 0.043533 (0.001453) | 0.322319 / 0.255139 (0.067180) | 0.345323 / 0.283200 (0.062123) | 0.021798 / 0.141683 (-0.119885) | 1.417259 / 1.452155 (-0.034895) | 1.490050 / 1.492716 (-0.002667) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195902 / 0.018006 (0.177896) | 0.490808 / 0.000490 (0.490318) | 0.002969 / 0.000200 (0.002770) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025221 / 0.037411 (-0.012190) | 0.075341 / 0.014526 (0.060815) | 0.086703 / 0.176557 (-0.089853) | 0.146953 / 0.737135 (-0.590182) | 0.086610 / 0.296338 (-0.209728) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | 
shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434890 / 0.215209 (0.219681) | 4.352283 / 2.077655 (2.274629) | 2.293098 / 1.504120 (0.788979) | 2.123023 / 1.541195 (0.581829) | 2.179722 / 1.468490 (0.711232) | 0.503851 / 4.584777 (-4.080926) | 3.087991 / 3.745712 (-0.657721) | 2.898689 / 5.269862 (-2.371173) | 1.902813 / 4.565676 (-2.662864) | 0.058079 / 0.424275 (-0.366196) | 0.006600 / 0.007607 (-0.001007) | 0.509243 / 0.226044 (0.283199) | 5.080204 / 2.268929 (2.811275) | 2.753594 / 55.444624 (-52.691030) | 2.417385 / 6.876477 (-4.459091) | 2.635470 / 2.142072 (0.493398) | 0.591059 / 4.805227 (-4.214168) | 0.126360 / 6.500664 (-6.374304) | 0.062108 / 0.075469 (-0.013361) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254398 / 1.841788 (-0.587390) | 18.866729 / 8.074308 (10.792420) | 14.120008 / 10.191392 (3.928616) | 0.152388 / 0.680424 (-0.528035) | 0.016997 / 0.534201 (-0.517204) | 0.336435 / 0.579283 (-0.242848) | 0.364612 / 0.434364 (-0.069752) | 0.391434 / 0.540337 (-0.148903) | 0.567180 / 1.386936 (-0.819756) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006477 / 0.011353 (-0.004876) | 0.003723 / 0.011008 (-0.007285) | 0.062712 / 0.038508 (0.024204) | 0.069380 / 0.023109 (0.046271) | 0.393394 / 0.275898 (0.117496) | 0.446903 / 0.323480 (0.123423) | 0.004833 / 0.007986 (-0.003153) | 0.002946 / 0.004328 (-0.001382) | 0.062076 / 0.004250 (0.057826) | 0.051589 / 0.037052 (0.014537) | 0.388536 / 0.258489 (0.130047) | 0.451406 / 0.293841 (0.157565) | 0.027824 / 0.128546 (-0.100722) | 0.008040 / 0.075646 (-0.067606) | 0.067085 / 0.419271 (-0.352187) | 0.042269 / 0.043533 (-0.001264) | 0.363419 / 0.255139 (0.108280) | 0.435201 / 0.283200 (0.152001) | 0.021275 / 0.141683 (-0.120408) | 1.439838 / 1.452155 (-0.012316) | 1.477279 / 1.492716 (-0.015437) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229667 / 0.018006 (0.211661) | 0.434101 / 0.000490 (0.433611) | 0.000652 / 0.000200 (0.000452) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026141 / 0.037411 (-0.011271) | 0.078950 / 0.014526 (0.064424) | 0.090143 / 0.176557 (-0.086413) | 0.143941 / 0.737135 (-0.593195) | 0.090465 / 0.296338 (-0.205873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432042 / 0.215209 (0.216833) | 4.322134 / 2.077655 (2.244479) | 2.242897 / 1.504120 (0.738777) | 2.076351 / 1.541195 (0.535157) | 2.166739 / 1.468490 (0.698249) | 0.500833 / 4.584777 (-4.083944) | 3.140117 / 3.745712 (-0.605595) | 4.383050 / 5.269862 (-0.886812) | 2.548245 / 4.565676 (-2.017432) | 0.057521 / 0.424275 (-0.366754) | 0.006946 / 0.007607 (-0.000662) | 0.509613 / 0.226044 (0.283569) | 5.114052 / 2.268929 (2.845123) | 2.682112 / 55.444624 (-52.762512) | 2.362385 / 6.876477 (-4.514092) | 2.531787 / 2.142072 (0.389715) | 0.595085 / 4.805227 (-4.210142) | 0.130198 / 6.500664 (-6.370466) | 0.064057 / 0.075469 (-0.011412) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.346254 / 1.841788 (-0.495534) | 19.036911 / 8.074308 (10.962603) | 14.478689 / 10.191392 (4.287297) | 0.147541 / 0.680424 (-0.532883) | 0.016851 / 0.534201 (-0.517350) | 0.333901 / 0.579283 (-0.245382) | 0.380012 / 0.434364 (-0.054352) | 0.396021 / 0.540337 (-0.144317) | 0.540612 / 1.386936 (-0.846324) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#02dd4ccaf7971cd71d658ce9f62bc0c5cfc1e3ad \"CML watermark\")\n", "CI failure is unrelated. 
Merging.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009498 / 0.011353 (-0.001855) | 0.005639 / 0.011008 (-0.005369) | 0.108731 / 0.038508 (0.070223) | 0.094052 / 0.023109 (0.070943) | 0.454375 / 0.275898 (0.178477) | 0.486852 / 0.323480 (0.163372) | 0.006627 / 0.007986 (-0.001359) | 0.004712 / 0.004328 (0.000383) | 0.082006 / 0.004250 (0.077756) | 0.079394 / 0.037052 (0.042342) | 0.450982 / 0.258489 (0.192493) | 0.502659 / 0.293841 (0.208818) | 0.049741 / 0.128546 (-0.078806) | 0.014482 / 0.075646 (-0.061164) | 0.362661 / 0.419271 (-0.056611) | 0.068225 / 0.043533 (0.024692) | 0.456219 / 0.255139 (0.201080) | 0.483919 / 0.283200 (0.200719) | 0.044490 / 0.141683 (-0.097193) | 1.809420 / 1.452155 (0.357265) | 1.908859 / 1.492716 (0.416143) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267350 / 0.018006 (0.249344) | 0.600403 / 0.000490 (0.599913) | 0.003665 / 0.000200 (0.003465) | 0.000162 / 0.000054 (0.000107) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032499 / 0.037411 (-0.004912) | 0.104829 / 0.014526 (0.090303) | 0.115809 / 0.176557 (-0.060747) | 0.191561 / 0.737135 (-0.545574) | 0.113454 / 0.296338 (-0.182885) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.599165 / 0.215209 (0.383956) | 5.802947 / 2.077655 (3.725292) | 2.477330 / 1.504120 (0.973210) | 2.231147 / 1.541195 (0.689952) | 2.365688 / 
1.468490 (0.897197) | 0.853912 / 4.584777 (-3.730865) | 5.529472 / 3.745712 (1.783760) | 6.145286 / 5.269862 (0.875424) | 3.415871 / 4.565676 (-1.149805) | 0.099889 / 0.424275 (-0.324386) | 0.008933 / 0.007607 (0.001325) | 0.704643 / 0.226044 (0.478598) | 7.178101 / 2.268929 (4.909173) | 3.367120 / 55.444624 (-52.077504) | 2.795177 / 6.876477 (-4.081300) | 2.796798 / 2.142072 (0.654726) | 1.039097 / 4.805227 (-3.766130) | 0.232784 / 6.500664 (-6.267881) | 0.083608 / 0.075469 (0.008138) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.646827 / 1.841788 (-0.194961) | 25.003419 / 8.074308 (16.929111) | 22.165746 / 10.191392 (11.974354) | 0.246179 / 0.680424 (-0.434245) | 0.029304 / 0.534201 (-0.504897) | 0.500767 / 0.579283 (-0.078516) | 0.606501 / 0.434364 (0.172137) | 0.564092 / 0.540337 (0.023755) | 0.857568 / 1.386936 (-0.529368) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009206 / 0.011353 (-0.002146) | 0.005084 / 0.011008 (-0.005925) | 0.081402 / 0.038508 (0.042894) | 0.088028 / 0.023109 (0.064919) | 0.539509 / 0.275898 (0.263611) | 0.590759 / 0.323480 (0.267280) | 0.006527 / 0.007986 (-0.001459) | 0.004375 / 0.004328 (0.000047) | 0.082327 / 0.004250 (0.078076) | 0.065442 / 0.037052 (0.028390) | 0.548254 / 0.258489 (0.289765) | 0.598388 / 0.293841 (0.304547) | 0.049409 / 0.128546 (-0.079137) | 0.014366 / 0.075646 (-0.061280) | 0.094568 / 0.419271 (-0.324703) | 0.063685 / 0.043533 (0.020152) | 0.545359 / 0.255139 (0.290220) | 0.573358 / 0.283200 (0.290159) | 0.036864 / 0.141683 (-0.104819) | 1.817985 / 1.452155 (0.365830) | 1.925188 / 1.492716 (0.432472) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.303205 / 0.018006 (0.285199) | 0.620981 / 0.000490 (0.620491) | 0.004910 / 0.000200 (0.004710) | 0.000104 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033791 / 0.037411 (-0.003620) | 0.114974 / 0.014526 (0.100448) | 0.117682 / 0.176557 (-0.058875) | 0.177188 / 0.737135 (-0.559947) | 0.126425 / 0.296338 (-0.169914) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.636932 / 0.215209 (0.421723) | 6.289054 / 2.077655 (4.211399) | 2.920772 / 1.504120 (1.416652) | 2.672080 / 1.541195 (1.130885) | 2.712271 / 1.468490 (1.243781) | 0.889305 / 4.584777 (-3.695472) | 5.536018 / 3.745712 (1.790306) | 4.687564 / 5.269862 (-0.582298) | 3.204239 / 4.565676 (-1.361437) | 0.095546 / 0.424275 (-0.328729) | 0.008838 / 0.007607 (0.001231) | 0.714584 / 0.226044 (0.488540) | 7.482663 / 2.268929 (5.213735) | 3.621392 / 55.444624 (-51.823232) | 2.987777 / 6.876477 (-3.888700) | 3.312636 / 2.142072 (1.170564) | 1.033721 / 4.805227 (-3.771506) | 0.206292 / 6.500664 (-6.294372) | 0.079423 / 0.075469 (0.003953) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.798645 / 1.841788 (-0.043143) | 25.544329 / 8.074308 (17.470021) | 23.041318 / 10.191392 (12.849926) | 0.259067 / 0.680424 (-0.421357) | 0.029839 / 0.534201 (-0.504362) | 0.495583 / 0.579283 (-0.083700) | 0.598755 / 0.434364 (0.164391) | 0.574864 / 0.540337 (0.034527) | 0.831160 / 1.386936 (-0.555776) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4200443045e694a045446950e8b235c7beb6239e \"CML watermark\")\n" ]
2023-07-19T15:00:01
2023-07-19T17:39:11
2023-07-19T17:27:17
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6052", "html_url": "https://github.com/huggingface/datasets/pull/6052", "diff_url": "https://github.com/huggingface/datasets/pull/6052.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6052.patch", "merged_at": "2023-07-19T17:27:17" }
Remove the legacy `HfFileSystem` and deprecate `S3FileSystem`.

cc @philschmid for the SageMaker scripts/notebooks that still use `datasets`' `S3FileSystem`
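A minimal migration sketch for code that still imports the deprecated class, based on the note in the comments that it was a thin subclass of `s3fs.S3FileSystem` that only changed the docstring; the bucket name and the `anon` option below are placeholders, not values from this PR:

```python
# Before (deprecated):
# from datasets.filesystems import S3FileSystem
# fs = S3FileSystem(anon=False)

# After: use s3fs directly; the datasets wrapper was a thin subclass of this class
import s3fs

fs = s3fs.S3FileSystem(anon=False)  # anon/credential options come from s3fs itself
fs.ls("my-bucket")  # hypothetical bucket name
```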
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6052/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6052/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6051
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6051/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6051/comments
https://api.github.com/repos/huggingface/datasets/issues/6051/events
https://github.com/huggingface/datasets/issues/6051
1,811,549,650
I_kwDODunzps5r-g3S
6,051
Skipping shards in the remote repo and resuming upload
{ "login": "rs9000", "id": 9029817, "node_id": "MDQ6VXNlcjkwMjk4MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/9029817?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rs9000", "html_url": "https://github.com/rs9000", "followers_url": "https://api.github.com/users/rs9000/followers", "following_url": "https://api.github.com/users/rs9000/following{/other_user}", "gists_url": "https://api.github.com/users/rs9000/gists{/gist_id}", "starred_url": "https://api.github.com/users/rs9000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rs9000/subscriptions", "organizations_url": "https://api.github.com/users/rs9000/orgs", "repos_url": "https://api.github.com/users/rs9000/repos", "events_url": "https://api.github.com/users/rs9000/events{/privacy}", "received_events_url": "https://api.github.com/users/rs9000/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! `_select_contiguous` fetches a (zero-copy) slice of the dataset's Arrow table to build a shard, so I don't think this part is the problem. To me, the issue seems to be the step where we embed external image files' bytes (a lot of file reads). You can use `.map` with multiprocessing to perform this step before `push_to_hub` in a faster manner and cache it to disk:\r\n```python\r\nfrom datasets.table import embed_table_storage\r\n# load_dataset(...)\r\nformat = dataset.format\r\ndataset = dataset.with_format(\"arrow\")\r\ndataset = dataset.map(embed_table_storage, batched=True)\r\ndataset = dataset.with_format(**format)\r\n# push_to_hub(...)\r\n```\r\n\r\n(In Datasets 3.0, these external bytes will be written to an Arrow file when generating a dataset to avoid this \"embed\" step)", "Hi, thanks, this solution saves some time.\r\nBut can't we avoid embedding all external image files bytes with each push, skipping the images that have already been pushed into the repo?\r\n\r\nEdit: Ok I missed the part of cache it manually on the disk the first time, this solves the problem. Thank you" ]
2023-07-19T09:25:26
2023-07-20T18:16:01
2023-07-20T18:16:00
NONE
null
null
null
### Describe the bug

For some reason, when I try to resume the upload of my dataset, it is very slow to reach the index of the shard from which to resume uploading.

From my understanding, the problem is in this part of the code (arrow_dataset.py):

```python
for index, shard in logging.tqdm(
    enumerate(itertools.chain([first_shard], shards_iter)),
    desc="Pushing dataset shards to the dataset hub",
    total=num_shards,
    disable=not logging.is_progress_bar_enabled(),
):
    shard_path_in_repo = path_in_repo(index, shard)
    # Upload a shard only if it doesn't already exist in the repository
    if shard_path_in_repo not in data_files:
```

In particular, iterating the generator is slow during the call:

```python
self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
```

I wonder if it is possible to avoid calling this function for shards that are already uploaded and just start from the correct shard index.

### Steps to reproduce the bug

1. Start the upload

```python
dataset = load_dataset("imagefolder", data_dir=DATA_DIR, split="train", drop_labels=True)
dataset.push_to_hub("repo/name")
```

2. Stop and restart the upload after hundreds of shards

### Expected behavior

Skip the uploaded shards faster.

### Environment info

- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
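A fuller sketch of the workaround suggested in the comments above, assuming the same `imagefolder` dataset; `num_proc=8` is illustrative, not a value from the thread:

```python
from datasets import load_dataset
from datasets.table import embed_table_storage

dataset = load_dataset("imagefolder", data_dir="./dataset/", split="train", drop_labels=True)

# Embed the external image bytes once; .map caches the result to disk,
# so a later, resumed push_to_hub reuses it instead of re-reading every image.
fmt = dataset.format
dataset = dataset.with_format("arrow")
dataset = dataset.map(embed_table_storage, batched=True, num_proc=8)  # num_proc is illustrative
dataset = dataset.with_format(**fmt)

dataset.push_to_hub("repo/name")
```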
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6051/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6049/comments
https://api.github.com/repos/huggingface/datasets/issues/6049/events
https://github.com/huggingface/datasets/pull/6049
1,810,378,706
PR_kwDODunzps5Vz1pd
6,049
Update `ruff` version in pre-commit config
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6049). All of your documentation changes will be reflected on that endpoint." ]
2023-07-18T17:13:50
2023-08-29T13:16:28
null
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6049", "html_url": "https://github.com/huggingface/datasets/pull/6049", "diff_url": "https://github.com/huggingface/datasets/pull/6049.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6049.patch", "merged_at": null }
so that it corresponds to the one being run in CI
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6049/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6048/comments
https://api.github.com/repos/huggingface/datasets/issues/6048/events
https://github.com/huggingface/datasets/issues/6048
1,809,629,346
I_kwDODunzps5r3MCi
6,048
When I use datasets.load_dataset, I encounter an HTTP connection error
{ "login": "yangy1992", "id": 137855591, "node_id": "U_kgDOCDeCZw", "avatar_url": "https://avatars.githubusercontent.com/u/137855591?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yangy1992", "html_url": "https://github.com/yangy1992", "followers_url": "https://api.github.com/users/yangy1992/followers", "following_url": "https://api.github.com/users/yangy1992/following{/other_user}", "gists_url": "https://api.github.com/users/yangy1992/gists{/gist_id}", "starred_url": "https://api.github.com/users/yangy1992/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangy1992/subscriptions", "organizations_url": "https://api.github.com/users/yangy1992/orgs", "repos_url": "https://api.github.com/users/yangy1992/repos", "events_url": "https://api.github.com/users/yangy1992/events{/privacy}", "received_events_url": "https://api.github.com/users/yangy1992/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The `audiofolder` loader is not available in version `2.3.2`, hence the error. Please run the `pip install -U datasets` command to update the `datasets` installation to make `load_dataset(\"audiofolder\", ...)` work." ]
2023-07-18T10:16:34
2023-07-18T16:18:39
2023-07-18T16:18:39
NONE
null
null
null
### Describe the bug

```python
common_voice_test = load_dataset("audiofolder", data_dir="./dataset/", cache_dir="./cache", split=datasets.Split.TEST)
```

When I run the code above, I get the error below:

```
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f299ed082e0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
```

All my data is on the local machine, so why does it need to connect to the internet? How can I fix this? My machine cannot connect to the internet.

### Steps to reproduce the bug

1

### Expected behavior

No error when I use the `load_dataset` function.

### Environment info

python=3.8.15
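A sketch of the fix from the comment above: upgrading `datasets` makes `audiofolder` a packaged, local loader, so no download is attempted; the offline environment variable is optional and just makes any stray network call fail explicitly:

```python
# First: pip install -U datasets  (audiofolder ships with the library in newer releases)
import os

# Optional: refuse network calls instead of timing out (env var supported by datasets);
# set it before importing datasets so it is read at import time.
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets
from datasets import load_dataset

common_voice_test = load_dataset(
    "audiofolder",
    data_dir="./dataset/",
    cache_dir="./cache",
    split=datasets.Split.TEST,
)
```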
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6048/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6047/comments
https://api.github.com/repos/huggingface/datasets/issues/6047/events
https://github.com/huggingface/datasets/pull/6047
1,809,627,947
PR_kwDODunzps5VxRLA
6,047
Bump dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6047). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006384 / 0.011353 (-0.004969) | 0.003872 / 0.011008 (-0.007136) | 0.083454 / 0.038508 (0.044946) | 0.069120 / 0.023109 (0.046011) | 0.312573 / 0.275898 (0.036675) | 0.345814 / 0.323480 (0.022334) | 0.005729 / 0.007986 (-0.002257) | 0.003225 / 0.004328 (-0.001103) | 0.063950 / 0.004250 (0.059700) | 0.053998 / 0.037052 (0.016946) | 0.316492 / 0.258489 (0.058003) | 0.350738 / 0.293841 (0.056897) | 0.030770 / 0.128546 (-0.097776) | 0.008474 / 0.075646 (-0.067173) | 0.286989 / 0.419271 (-0.132282) | 0.052473 / 0.043533 (0.008940) | 0.314361 / 0.255139 (0.059222) | 0.335170 / 0.283200 (0.051970) | 0.022885 / 0.141683 (-0.118798) | 1.465430 / 1.452155 (0.013275) | 1.527799 / 1.492716 (0.035083) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209377 / 0.018006 (0.191371) | 0.455583 / 0.000490 (0.455094) | 0.003352 / 0.000200 (0.003152) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026284 / 0.037411 (-0.011127) | 0.080710 / 0.014526 (0.066185) | 0.091741 / 0.176557 (-0.084816) | 0.147602 / 0.737135 (-0.589534) | 0.091173 / 0.296338 (-0.205166) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.386592 / 0.215209 (0.171383) | 3.856665 / 2.077655 (1.779011) | 1.835745 / 1.504120 (0.331625) | 1.671814 / 1.541195 (0.130619) | 1.711224 / 1.468490 (0.242734) | 0.484704 / 4.584777 (-4.100073) | 3.649239 / 3.745712 (-0.096473) | 3.784051 / 5.269862 (-1.485810) | 2.241195 / 4.565676 (-2.324482) | 0.056613 / 0.424275 (-0.367662) | 0.007140 / 0.007607 (-0.000467) | 0.464585 / 0.226044 (0.238540) | 4.616537 / 2.268929 (2.347609) | 2.371969 / 55.444624 (-53.072656) | 1.977754 / 6.876477 (-4.898723) | 2.083385 / 2.142072 (-0.058687) | 0.582330 / 4.805227 (-4.222897) | 0.132744 / 6.500664 (-6.367920) | 0.059822 / 0.075469 (-0.015647) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259566 / 1.841788 (-0.582221) | 18.990166 / 8.074308 (10.915858) | 13.992069 / 10.191392 (3.800677) | 0.160001 / 0.680424 (-0.520423) | 0.018622 / 0.534201 (-0.515579) | 0.392921 / 0.579283 (-0.186362) | 0.418225 / 0.434364 (-0.016139) | 0.471252 / 0.540337 (-0.069086) | 0.653227 / 1.386936 (-0.733709) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006641 / 0.011353 (-0.004712) | 0.003738 / 0.011008 (-0.007271) | 0.064053 / 0.038508 (0.025545) | 0.069467 / 0.023109 (0.046357) | 0.360625 / 0.275898 (0.084727) | 0.394291 / 0.323480 (0.070811) | 0.005236 / 0.007986 (-0.002750) | 0.003304 / 0.004328 (-0.001024) | 0.064078 / 0.004250 (0.059827) | 0.054605 / 0.037052 (0.017552) | 0.374567 / 0.258489 (0.116078) | 0.411227 / 0.293841 (0.117386) | 0.031614 / 0.128546 (-0.096933) | 0.008323 / 0.075646 (-0.067324) | 0.070616 / 0.419271 (-0.348656) | 0.050077 / 0.043533 (0.006544) | 0.362229 / 0.255139 (0.107090) | 0.388310 / 0.283200 (0.105110) | 0.024053 / 0.141683 (-0.117630) | 1.508913 / 1.452155 (0.056759) | 1.562140 / 1.492716 (0.069423) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230172 / 0.018006 (0.212165) | 0.449363 / 0.000490 (0.448873) | 0.002374 / 0.000200 
(0.002174) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029813 / 0.037411 (-0.007598) | 0.087298 / 0.014526 (0.072772) | 0.096712 / 0.176557 (-0.079845) | 0.152864 / 0.737135 (-0.584271) | 0.098204 / 0.296338 (-0.198135) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408664 / 0.215209 (0.193455) | 4.075068 / 2.077655 (1.997414) | 2.096365 / 1.504120 (0.592245) | 1.936096 / 1.541195 (0.394901) | 1.961872 / 1.468490 (0.493382) | 0.483383 / 4.584777 (-4.101394) | 3.686926 / 3.745712 (-0.058787) | 4.798824 / 5.269862 (-0.471037) | 2.652279 / 4.565676 (-1.913398) | 0.056695 / 0.424275 (-0.367580) | 0.007592 / 0.007607 (-0.000016) | 0.484710 / 0.226044 (0.258665) | 4.842153 / 2.268929 (2.573225) | 2.636828 / 55.444624 (-52.807796) | 2.243666 / 6.876477 (-4.632811) | 2.375972 / 2.142072 (0.233899) | 0.578544 / 4.805227 (-4.226683) | 0.132579 / 6.500664 (-6.368085) | 0.061287 / 0.075469 (-0.014182) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.360287 / 1.841788 (-0.481501) | 19.464110 / 8.074308 (11.389802) | 14.530875 / 10.191392 (4.339483) | 0.149479 / 0.680424 (-0.530944) | 0.018471 / 0.534201 (-0.515730) | 0.395399 / 0.579283 (-0.183884) | 0.412897 / 0.434364 (-0.021467) | 0.465194 / 0.540337 (-0.075144) | 0.611752 / 1.386936 (-0.775184) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#79a535de98b590da7bc223a6498c59790882f14a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated 
after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008986 / 0.011353 (-0.002367) | 0.005104 / 0.011008 (-0.005905) | 0.108371 / 0.038508 (0.069863) | 0.091655 / 0.023109 (0.068546) | 0.430183 / 0.275898 (0.154285) | 0.481387 / 0.323480 (0.157907) | 0.006662 / 0.007986 (-0.001324) | 0.004681 / 0.004328 (0.000353) | 0.089325 / 0.004250 (0.085075) | 0.065096 / 0.037052 (0.028044) | 0.435021 / 0.258489 (0.176532) | 0.478635 / 0.293841 (0.184794) | 0.047628 / 0.128546 (-0.080918) | 0.013496 / 0.075646 (-0.062150) | 0.389661 / 0.419271 (-0.029611) | 0.082260 / 0.043533 (0.038727) | 0.474165 / 0.255139 (0.219026) | 0.464877 / 0.283200 (0.181677) | 0.039784 / 0.141683 (-0.101899) | 1.874694 / 1.452155 (0.422539) | 1.980183 / 1.492716 (0.487467) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254044 / 0.018006 (0.236038) | 0.631495 / 0.000490 (0.631005) | 0.000628 / 0.000200 (0.000428) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038773 / 0.037411 (0.001362) | 0.103681 / 0.014526 (0.089156) | 0.125081 / 0.176557 (-0.051476) | 0.198345 / 0.737135 (-0.538790) | 0.122217 / 0.296338 (-0.174121) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.611677 / 0.215209 (0.396468) | 6.220790 / 2.077655 (4.143135) | 2.729858 / 1.504120 (1.225739) | 2.351944 / 1.541195 (0.810749) | 2.449137 / 1.468490 (0.980647) | 0.896842 / 4.584777 (-3.687935) | 5.537491 / 3.745712 (1.791778) | 8.480182 / 5.269862 (3.210320) | 5.251404 / 4.565676 (0.685728) | 0.100449 / 0.424275 (-0.323826) | 0.009008 / 0.007607 (0.001401) | 0.750060 / 0.226044 (0.524016) | 7.390940 / 2.268929 (5.122011) | 3.478256 / 55.444624 (-51.966369) | 2.883597 / 6.876477 (-3.992880) | 3.082256 / 2.142072 (0.940183) | 1.114339 / 4.805227 (-3.690889) | 0.225389 / 6.500664 (-6.275275) | 0.083972 / 0.075469 (0.008503) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.741522 / 1.841788 (-0.100266) | 25.674700 / 8.074308 (17.600392) | 24.324412 / 10.191392 (14.133020) | 0.257878 / 0.680424 (-0.422546) | 0.038384 / 0.534201 (-0.495817) | 0.508302 / 0.579283 (-0.070981) | 0.612979 
/ 0.434364 (0.178615) | 0.584366 / 0.540337 (0.044029) | 0.881115 / 1.386936 (-0.505821) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009114 / 0.011353 (-0.002239) | 0.005333 / 0.011008 (-0.005675) | 0.094944 / 0.038508 (0.056436) | 0.099178 / 0.023109 (0.076068) | 0.529813 / 0.275898 (0.253915) | 0.551282 / 0.323480 (0.227802) | 0.006442 / 0.007986 (-0.001543) | 0.004283 / 0.004328 (-0.000045) | 0.084257 / 0.004250 (0.080007) | 0.067557 / 0.037052 (0.030504) | 0.514733 / 0.258489 (0.256244) | 0.568200 / 0.293841 (0.274359) | 0.050969 / 0.128546 (-0.077577) | 0.014495 / 0.075646 (-0.061151) | 0.097089 / 0.419271 (-0.322182) | 0.063142 / 0.043533 (0.019609) | 0.513327 / 0.255139 (0.258188) | 0.520593 / 0.283200 (0.237394) | 0.036824 / 0.141683 (-0.104859) | 1.954875 / 1.452155 (0.502720) | 1.976307 / 1.492716 (0.483591) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.304070 / 0.018006 (0.286063) | 0.611073 / 0.000490 (0.610583) | 0.005027 / 0.000200 (0.004827) | 0.000113 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037993 / 0.037411 (0.000582) | 0.115876 / 0.014526 (0.101350) | 0.118087 / 0.176557 (-0.058469) | 0.186437 / 0.737135 (-0.550699) | 0.129883 / 0.296338 (-0.166456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.658292 / 0.215209 (0.443083) | 6.618257 / 2.077655 (4.540602) | 3.203786 / 1.504120 (1.699667) | 
2.858714 / 1.541195 (1.317519) | 2.940974 / 1.468490 (1.472484) | 0.856238 / 4.584777 (-3.728538) | 5.427708 / 3.745712 (1.681996) | 4.810048 / 5.269862 (-0.459813) | 3.120006 / 4.565676 (-1.445671) | 0.098098 / 0.424275 (-0.326177) | 0.010077 / 0.007607 (0.002470) | 0.790890 / 0.226044 (0.564845) | 7.956679 / 2.268929 (5.687750) | 3.955710 / 55.444624 (-51.488914) | 3.446419 / 6.876477 (-3.430057) | 3.541228 / 2.142072 (1.399156) | 1.013420 / 4.805227 (-3.791808) | 0.213741 / 6.500664 (-6.286923) | 0.080857 / 0.075469 (0.005388) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.813265 / 1.841788 (-0.028522) | 25.965199 / 8.074308 (17.890891) | 21.892761 / 10.191392 (11.701369) | 0.257843 / 0.680424 (-0.422580) | 0.029388 / 0.534201 (-0.504813) | 0.510609 / 0.579283 (-0.068674) | 0.626579 / 0.434364 (0.192215) | 0.576865 / 0.540337 (0.036528) | 0.826610 / 1.386936 (-0.560326) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a1a9c00249b330f97f66ceb86c2939261091f4fe \"CML watermark\")\n" ]
2023-07-18T10:15:39
2023-07-18T10:28:01
2023-07-18T10:15:52
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6047", "html_url": "https://github.com/huggingface/datasets/pull/6047", "diff_url": "https://github.com/huggingface/datasets/pull/6047.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6047.patch", "merged_at": "2023-07-18T10:15:52" }
Workaround to fix an issue with transformers CI: https://github.com/huggingface/transformers/pull/24867#discussion_r1266519626
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6047/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6047/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6046/comments
https://api.github.com/repos/huggingface/datasets/issues/6046/events
https://github.com/huggingface/datasets/issues/6046
1,808,154,414
I_kwDODunzps5rxj8u
6,046
Support proxy and user-agent in fsspec calls
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3761482852, "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue", "name": "good second issue", "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues" } ]
open
false
null
[]
null
[]
2023-07-17T16:39:26
2023-07-17T16:40:37
null
MEMBER
null
null
null
Since we switched to the new HfFileSystem, we no longer apply the user's proxy and user-agent settings. Using the HTTP_PROXY and HTTPS_PROXY environment variables works, though, since we use aiohttp to call the HF Hub. This can be implemented in `_prepare_single_hop_path_and_storage_options`, though ideally `HfFileSystem` would support passing at least the proxies.
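A sketch of the environment-variable workaround described above; the proxy URL is a placeholder:

```python
import os

# aiohttp-based Hub calls honor these, per the note above; set them before importing datasets
os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"   # placeholder
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"  # placeholder

from datasets import load_dataset

ds = load_dataset("some-user/some-dataset")  # hypothetical repo
```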
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6046/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6045
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6045/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6045/comments
https://api.github.com/repos/huggingface/datasets/issues/6045/events
https://github.com/huggingface/datasets/pull/6045
1,808,072,270
PR_kwDODunzps5Vr-r1
6,045
Check if column names match in Parquet loader only when config `features` are specified
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006557 / 0.011353 (-0.004796) | 0.004096 / 0.011008 (-0.006913) | 0.083577 / 0.038508 (0.045069) | 0.072092 / 0.023109 (0.048983) | 0.319192 / 0.275898 (0.043294) | 0.351845 / 0.323480 (0.028365) | 0.005475 / 0.007986 (-0.002511) | 0.003419 / 0.004328 (-0.000910) | 0.064562 / 0.004250 (0.060311) | 0.057930 / 0.037052 (0.020878) | 0.326085 / 0.258489 (0.067596) | 0.368316 / 0.293841 (0.074475) | 0.030502 / 0.128546 (-0.098044) | 0.008504 / 0.075646 (-0.067142) | 0.287217 / 0.419271 (-0.132054) | 0.052337 / 0.043533 (0.008804) | 0.319011 / 0.255139 (0.063872) | 0.352711 / 0.283200 (0.069511) | 0.023278 / 0.141683 (-0.118405) | 1.482578 / 1.452155 (0.030423) | 1.553391 / 1.492716 (0.060675) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199628 / 0.018006 (0.181622) | 0.464571 / 0.000490 (0.464081) | 0.003512 / 0.000200 (0.003312) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029109 / 0.037411 (-0.008302) | 0.082203 / 0.014526 (0.067677) | 0.096223 / 0.176557 (-0.080333) | 0.155598 / 0.737135 (-0.581537) | 0.097738 / 0.296338 (-0.198600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386135 / 0.215209 (0.170926) | 3.837157 / 2.077655 (1.759502) | 
1.836869 / 1.504120 (0.332750) | 1.680592 / 1.541195 (0.139398) | 1.769456 / 1.468490 (0.300966) | 0.493150 / 4.584777 (-4.091627) | 3.589797 / 3.745712 (-0.155915) | 3.330000 / 5.269862 (-1.939861) | 2.059856 / 4.565676 (-2.505821) | 0.057951 / 0.424275 (-0.366324) | 0.007340 / 0.007607 (-0.000267) | 0.463203 / 0.226044 (0.237159) | 4.631514 / 2.268929 (2.362585) | 2.329887 / 55.444624 (-53.114738) | 2.008815 / 6.876477 (-4.867662) | 2.199067 / 2.142072 (0.056995) | 0.591417 / 4.805227 (-4.213810) | 0.137154 / 6.500664 (-6.363510) | 0.061326 / 0.075469 (-0.014143) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.269676 / 1.841788 (-0.572111) | 19.375167 / 8.074308 (11.300858) | 13.945419 / 10.191392 (3.754027) | 0.146482 / 0.680424 (-0.533942) | 0.018257 / 0.534201 (-0.515944) | 0.391684 / 0.579283 (-0.187599) | 0.411454 / 0.434364 (-0.022910) | 0.466260 / 0.540337 (-0.074077) | 0.655571 / 1.386936 (-0.731365) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006619 / 0.011353 (-0.004734) | 0.004102 / 0.011008 (-0.006907) | 0.064848 / 0.038508 (0.026340) | 0.074822 / 0.023109 (0.051713) | 0.366535 / 0.275898 (0.090637) | 0.395873 / 0.323480 (0.072394) | 0.005315 / 0.007986 (-0.002670) | 0.003270 / 0.004328 (-0.001059) | 0.064829 / 0.004250 (0.060578) | 0.056094 / 0.037052 (0.019042) | 0.370355 / 0.258489 (0.111866) | 0.406837 / 0.293841 (0.112996) | 0.031634 / 0.128546 (-0.096912) | 0.008569 / 0.075646 (-0.067077) | 0.071126 / 0.419271 (-0.348145) | 0.048629 / 0.043533 (0.005096) | 0.365175 / 0.255139 (0.110036) | 0.385234 / 0.283200 (0.102034) | 0.023295 / 0.141683 (-0.118388) | 1.466907 / 1.452155 (0.014752) | 1.523118 / 1.492716 (0.030401) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227872 / 0.018006 (0.209866) | 0.451573 / 0.000490 (0.451083) | 0.000379 / 0.000200 (0.000179) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029496 / 0.037411 (-0.007915) | 0.086614 / 0.014526 (0.072088) | 0.098165 / 0.176557 (-0.078392) | 0.152218 / 0.737135 (-0.584917) | 0.101215 / 0.296338 (-0.195123) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407519 / 0.215209 (0.192310) | 4.074704 / 2.077655 (1.997049) | 2.113185 / 1.504120 (0.609065) | 1.947461 / 1.541195 (0.406266) | 1.998521 / 1.468490 (0.530031) | 0.487463 / 4.584777 (-4.097313) | 3.465423 / 3.745712 (-0.280289) | 3.376498 / 5.269862 (-1.893363) | 2.001533 / 4.565676 (-2.564144) | 0.057052 / 0.424275 (-0.367223) | 0.007325 / 0.007607 (-0.000283) | 0.485648 / 0.226044 (0.259604) | 4.860191 / 2.268929 (2.591262) | 2.550340 / 55.444624 (-52.894284) | 2.231136 / 6.876477 (-4.645341) | 2.262539 / 2.142072 (0.120467) | 0.591422 / 4.805227 (-4.213805) | 0.132875 / 6.500664 (-6.367789) | 0.062154 / 0.075469 (-0.013315) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.321834 / 1.841788 (-0.519954) | 19.734750 / 8.074308 (11.660442) | 14.681049 / 10.191392 (4.489657) | 0.148894 / 0.680424 (-0.531530) | 0.018414 / 0.534201 (-0.515787) | 0.393377 / 0.579283 (-0.185906) | 0.402795 / 0.434364 (-0.031569) | 0.478624 / 0.540337 (-0.061714) | 0.656767 / 1.386936 (-0.730169) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a5a84a1fa226a4cafb3bb4387dc4b212a46caf31 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007012 / 0.011353 (-0.004341) | 0.004120 / 0.011008 (-0.006888) | 0.083720 / 0.038508 (0.045212) | 0.083105 / 0.023109 (0.059996) | 0.323803 / 0.275898 (0.047905) | 0.340345 / 0.323480 (0.016865) | 0.005872 / 0.007986 (-0.002113) | 0.003528 / 0.004328 (-0.000801) | 0.065185 / 0.004250 (0.060935) | 0.063092 / 0.037052 (0.026040) | 0.314900 / 0.258489 (0.056411) | 0.349251 / 0.293841 (0.055410) | 0.031612 / 0.128546 (-0.096934) | 0.008541 / 0.075646 (-0.067105) | 0.289865 / 0.419271 (-0.129407) | 0.055264 / 0.043533 (0.011731) | 0.309152 / 0.255139 (0.054013) | 0.332625 / 0.283200 (0.049425) | 0.024306 / 0.141683 (-0.117377) | 1.489191 / 1.452155 (0.037037) | 1.562447 / 1.492716 (0.069731) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236681 / 0.018006 (0.218675) | 0.567767 / 0.000490 (0.567277) | 0.003022 / 0.000200 (0.002822) | 0.000218 / 0.000054 (0.000164) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028698 / 0.037411 (-0.008714) | 0.081681 / 0.014526 (0.067155) | 0.099109 / 0.176557 (-0.077447) | 0.154381 / 0.737135 (-0.582754) | 0.098691 / 0.296338 (-0.197648) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397985 / 0.215209 (0.182776) | 3.962499 / 2.077655 (1.884844) | 1.936158 / 1.504120 (0.432038) | 1.762339 / 1.541195 (0.221144) | 1.837451 / 1.468490 (0.368961) | 0.485655 / 4.584777 (-4.099122) | 3.538341 / 3.745712 (-0.207371) | 5.110095 / 5.269862 (-0.159767) | 3.066152 / 4.565676 (-1.499524) | 0.057505 / 0.424275 (-0.366770) | 0.007334 / 0.007607 (-0.000273) | 0.475622 / 0.226044 (0.249578) | 4.754091 / 2.268929 (2.485162) | 2.431379 / 55.444624 (-53.013246) | 2.106178 / 6.876477 (-4.770298) | 2.364305 / 2.142072 (0.222232) | 0.614038 / 4.805227 (-4.191190) | 0.148530 / 6.500664 (-6.352134) | 0.061033 / 0.075469 (-0.014436) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.242345 / 1.841788 (-0.599443) | 19.017266 / 8.074308 (10.942958) | 13.477782 / 10.191392 (3.286390) | 0.158513 / 0.680424 (-0.521911) | 0.018757 / 0.534201 (-0.515444) | 0.393773 / 0.579283 (-0.185510) | 0.416933 / 0.434364 (-0.017431) | 0.460012 / 0.540337 
(-0.080326) | 0.637010 / 1.386936 (-0.749926) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006689 / 0.011353 (-0.004664) | 0.004168 / 0.011008 (-0.006840) | 0.065009 / 0.038508 (0.026501) | 0.073766 / 0.023109 (0.050657) | 0.369585 / 0.275898 (0.093687) | 0.407945 / 0.323480 (0.084465) | 0.005583 / 0.007986 (-0.002403) | 0.003494 / 0.004328 (-0.000835) | 0.065032 / 0.004250 (0.060782) | 0.057166 / 0.037052 (0.020114) | 0.370656 / 0.258489 (0.112166) | 0.428381 / 0.293841 (0.134540) | 0.031653 / 0.128546 (-0.096893) | 0.008731 / 0.075646 (-0.066915) | 0.071624 / 0.419271 (-0.347648) | 0.049364 / 0.043533 (0.005832) | 0.361824 / 0.255139 (0.106685) | 0.387615 / 0.283200 (0.104415) | 0.023228 / 0.141683 (-0.118455) | 1.476204 / 1.452155 (0.024049) | 1.553522 / 1.492716 (0.060806) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266955 / 0.018006 (0.248948) | 0.556566 / 0.000490 (0.556076) | 0.000399 / 0.000200 (0.000199) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033104 / 0.037411 (-0.004307) | 0.088067 / 0.014526 (0.073541) | 0.103333 / 0.176557 (-0.073224) | 0.157061 / 0.737135 (-0.580074) | 0.105007 / 0.296338 (-0.191331) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420826 / 0.215209 (0.205617) | 4.201656 / 2.077655 (2.124001) | 2.208336 / 1.504120 (0.704216) | 2.043780 / 1.541195 (0.502585) | 2.156215 
/ 1.468490 (0.687725) | 0.490485 / 4.584777 (-4.094292) | 3.611446 / 3.745712 (-0.134267) | 5.293140 / 5.269862 (0.023279) | 2.739778 / 4.565676 (-1.825899) | 0.058175 / 0.424275 (-0.366100) | 0.007633 / 0.007607 (0.000026) | 0.500773 / 0.226044 (0.274729) | 5.000900 / 2.268929 (2.731971) | 2.721200 / 55.444624 (-52.723424) | 2.349381 / 6.876477 (-4.527095) | 2.386261 / 2.142072 (0.244188) | 0.583174 / 4.805227 (-4.222053) | 0.134558 / 6.500664 (-6.366106) | 0.062157 / 0.075469 (-0.013312) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.351087 / 1.841788 (-0.490701) | 20.305703 / 8.074308 (12.231395) | 14.548518 / 10.191392 (4.357126) | 0.173720 / 0.680424 (-0.506704) | 0.018100 / 0.534201 (-0.516101) | 0.395187 / 0.579283 (-0.184097) | 0.414619 / 0.434364 (-0.019745) | 0.462515 / 0.540337 (-0.077823) | 0.617822 / 1.386936 (-0.769114) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#033d0a9de5c825fc9a6a9ce3c3d80eaab3493720 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006909 / 0.011353 (-0.004444) | 0.003954 / 0.011008 (-0.007054) | 0.084329 / 0.038508 (0.045821) | 0.074919 / 0.023109 (0.051809) | 0.319350 / 0.275898 (0.043451) | 0.347264 / 0.323480 (0.023785) | 0.005326 / 0.007986 (-0.002660) | 0.003323 / 0.004328 (-0.001006) | 0.064286 / 0.004250 (0.060036) | 0.054748 / 0.037052 (0.017696) | 0.324784 / 0.258489 (0.066295) | 0.361445 / 0.293841 (0.067605) | 0.031239 / 0.128546 (-0.097308) | 0.008361 / 0.075646 (-0.067286) | 0.287482 / 0.419271 (-0.131789) | 0.052093 / 0.043533 (0.008560) | 0.321454 / 0.255139 (0.066315) | 0.337999 / 0.283200 (0.054800) | 0.025807 / 0.141683 (-0.115876) | 1.501838 / 1.452155 (0.049683) | 1.574484 / 1.492716 (0.081767) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193220 / 0.018006 (0.175214) | 0.448105 / 0.000490 (0.447615) | 0.002949 / 
0.000200 (0.002749) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028517 / 0.037411 (-0.008894) | 0.087281 / 0.014526 (0.072755) | 0.098295 / 0.176557 (-0.078262) | 0.156972 / 0.737135 (-0.580163) | 0.101250 / 0.296338 (-0.195088) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383734 / 0.215209 (0.168525) | 3.821293 / 2.077655 (1.743638) | 1.866487 / 1.504120 (0.362367) | 1.722195 / 1.541195 (0.181000) | 1.843762 / 1.468490 (0.375272) | 0.484813 / 4.584777 (-4.099964) | 3.535381 / 3.745712 (-0.210331) | 5.502338 / 5.269862 (0.232477) | 3.256078 / 4.565676 (-1.309599) | 0.057312 / 0.424275 (-0.366963) | 0.007305 / 0.007607 (-0.000302) | 0.461523 / 0.226044 (0.235479) | 4.611828 / 2.268929 (2.342899) | 2.337180 / 55.444624 (-53.107445) | 2.040956 / 6.876477 (-4.835521) | 2.241233 / 2.142072 (0.099160) | 0.583727 / 4.805227 (-4.221500) | 0.132427 / 6.500664 (-6.368237) | 0.060306 / 0.075469 (-0.015163) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282223 / 1.841788 (-0.559565) | 19.439745 / 8.074308 (11.365437) | 13.627657 / 10.191392 (3.436265) | 0.158975 / 0.680424 (-0.521449) | 0.018599 / 0.534201 (-0.515601) | 0.391136 / 0.579283 (-0.188147) | 0.410947 / 0.434364 (-0.023417) | 0.453889 / 0.540337 (-0.086448) | 0.620928 / 1.386936 (-0.766008) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006428 / 0.011353 (-0.004925) | 0.003980 / 0.011008 (-0.007028) | 0.065006 / 0.038508 (0.026498) | 0.076541 / 0.023109 (0.053432) | 0.358518 / 0.275898 (0.082620) | 0.394397 / 0.323480 (0.070917) | 0.005845 / 0.007986 (-0.002140) | 0.003258 / 0.004328 (-0.001071) | 0.064436 / 0.004250 (0.060186) | 0.056691 / 0.037052 (0.019639) | 0.367369 / 0.258489 (0.108880) | 0.420345 / 0.293841 (0.126504) | 0.031047 / 0.128546 (-0.097499) | 0.008430 / 0.075646 (-0.067216) | 0.071280 / 0.419271 (-0.347991) | 0.048872 / 0.043533 (0.005339) | 0.360073 / 0.255139 (0.104934) | 0.384150 / 0.283200 (0.100951) | 0.023189 / 0.141683 (-0.118494) | 1.500251 / 1.452155 (0.048096) | 1.545910 / 1.492716 (0.053194) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224861 / 0.018006 (0.206855) | 0.439901 / 0.000490 (0.439411) | 0.000372 / 0.000200 (0.000172) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029914 / 0.037411 (-0.007497) | 0.086916 / 0.014526 (0.072390) | 0.099527 / 0.176557 (-0.077029) | 0.153031 / 0.737135 (-0.584104) | 0.100008 / 0.296338 (-0.196330) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420305 / 0.215209 (0.205096) | 4.198224 / 2.077655 (2.120569) | 2.223807 / 1.504120 (0.719687) | 2.058475 / 1.541195 (0.517280) | 2.140405 / 1.468490 (0.671915) | 0.481224 / 4.584777 (-4.103553) | 3.593767 / 3.745712 (-0.151945) | 5.536710 / 5.269862 (0.266849) | 3.162048 / 4.565676 (-1.403629) | 0.056662 / 0.424275 (-0.367614) | 0.007301 / 0.007607 (-0.000306) | 0.507494 / 0.226044 (0.281450) | 5.047824 / 2.268929 (2.778896) | 2.715167 / 55.444624 (-52.729458) | 2.334916 / 6.876477 (-4.541560) | 2.406615 / 2.142072 (0.264543) | 0.572761 / 4.805227 (-4.232466) | 0.131248 / 6.500664 (-6.369416) | 0.062401 / 0.075469 (-0.013068) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.375896 / 1.841788 (-0.465892) | 19.836638 / 8.074308 (11.762329) | 14.246645 / 10.191392 (4.055253) | 0.164975 / 0.680424 (-0.515449) | 0.018293 / 0.534201 (-0.515908) | 0.394196 / 0.579283 (-0.185087) | 0.405895 / 0.434364 (-0.028469) | 0.459221 / 0.540337 (-0.081116) | 0.609898 / 1.386936 (-0.777038) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f89210ad839c2225b64822dfa248f68ab29ad46f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008463 / 0.011353 (-0.002890) | 0.004754 / 0.011008 (-0.006254) | 0.103574 / 0.038508 (0.065066) | 0.083541 / 0.023109 (0.060432) | 0.402498 / 0.275898 (0.126600) | 0.434944 / 0.323480 (0.111465) | 0.005766 / 0.007986 (-0.002219) | 0.003823 / 0.004328 (-0.000505) | 0.078433 / 0.004250 (0.074183) | 0.056948 / 0.037052 (0.019895) | 0.392539 / 0.258489 (0.134050) | 0.447226 / 0.293841 (0.153385) | 0.045845 / 0.128546 (-0.082701) | 0.014043 / 0.075646 (-0.061603) | 0.355768 / 0.419271 (-0.063503) | 0.065492 / 0.043533 (0.021960) | 0.408047 / 0.255139 (0.152908) | 0.468313 / 0.283200 (0.185113) | 0.033779 / 0.141683 (-0.107904) | 1.772198 / 1.452155 (0.320043) | 1.889127 / 1.492716 (0.396411) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207107 / 0.018006 (0.189101) | 0.533261 / 0.000490 (0.532771) | 0.000864 / 0.000200 (0.000664) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032139 / 0.037411 (-0.005272) | 0.102002 / 0.014526 (0.087476) | 0.108780 / 0.176557 (-0.067777) | 0.202857 / 0.737135 (-0.534278) | 0.110378 / 0.296338 (-0.185960) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.582814 / 0.215209 
(0.367605) | 5.870683 / 2.077655 (3.793028) | 2.510290 / 1.504120 (1.006171) | 2.146337 / 1.541195 (0.605142) | 2.239278 / 1.468490 (0.770788) | 0.861205 / 4.584777 (-3.723572) | 5.177394 / 3.745712 (1.431682) | 8.550713 / 5.269862 (3.280852) | 4.867715 / 4.565676 (0.302038) | 0.096665 / 0.424275 (-0.327610) | 0.008702 / 0.007607 (0.001095) | 0.748908 / 0.226044 (0.522863) | 7.302815 / 2.268929 (5.033887) | 3.205045 / 55.444624 (-52.239580) | 2.743914 / 6.876477 (-4.132562) | 2.831240 / 2.142072 (0.689167) | 1.103912 / 4.805227 (-3.701315) | 0.246075 / 6.500664 (-6.254589) | 0.092092 / 0.075469 (0.016623) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.591331 / 1.841788 (-0.250457) | 23.085848 / 8.074308 (15.011540) | 22.887963 / 10.191392 (12.696571) | 0.212735 / 0.680424 (-0.467689) | 0.027400 / 0.534201 (-0.506801) | 0.493822 / 0.579283 (-0.085461) | 0.574485 / 0.434364 (0.140121) | 0.574873 / 0.540337 (0.034536) | 0.826178 / 1.386936 (-0.560758) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009155 / 0.011353 (-0.002198) | 0.004976 / 0.011008 (-0.006032) | 0.079308 / 0.038508 (0.040799) | 0.093959 / 0.023109 (0.070850) | 0.449110 / 0.275898 (0.173212) | 0.493356 / 0.323480 (0.169876) | 0.006317 / 0.007986 (-0.001669) | 0.004179 / 0.004328 (-0.000150) | 0.076991 / 0.004250 (0.072740) | 0.061977 / 0.037052 (0.024924) | 0.493823 / 0.258489 (0.235333) | 0.491609 / 0.293841 (0.197768) | 0.049552 / 0.128546 (-0.078994) | 0.015174 / 0.075646 (-0.060472) | 0.090431 / 0.419271 (-0.328841) | 0.061597 / 0.043533 (0.018064) | 0.467672 / 0.255139 (0.212533) | 0.490542 / 0.283200 (0.207342) | 0.035048 / 0.141683 (-0.106635) | 1.807939 / 1.452155 (0.355784) | 1.854859 / 1.492716 (0.362142) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236672 / 0.018006 (0.218666) | 0.542236 / 0.000490 (0.541746) | 0.016334 / 0.000200 (0.016134) | 0.000220 / 0.000054 
(0.000165) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032051 / 0.037411 (-0.005360) | 0.115352 / 0.014526 (0.100826) | 0.125115 / 0.176557 (-0.051441) | 0.173670 / 0.737135 (-0.563466) | 0.117832 / 0.296338 (-0.178507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.631513 / 0.215209 (0.416304) | 6.371688 / 2.077655 (4.294033) | 2.867240 / 1.504120 (1.363120) | 2.454907 / 1.541195 (0.913713) | 2.518860 / 1.468490 (1.050370) | 0.879973 / 4.584777 (-3.704804) | 5.170263 / 3.745712 (1.424551) | 7.986429 / 5.269862 (2.716567) | 4.828095 / 4.565676 (0.262418) | 0.097808 / 0.424275 (-0.326468) | 0.010541 / 0.007607 (0.002934) | 0.745601 / 0.226044 (0.519557) | 7.631683 / 2.268929 (5.362755) | 3.524255 / 55.444624 (-51.920369) | 2.866199 / 6.876477 (-4.010278) | 2.982483 / 2.142072 (0.840410) | 1.148957 / 4.805227 (-3.656270) | 0.217067 / 6.500664 (-6.283598) | 0.074357 / 0.075469 (-0.001112) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.714917 / 1.841788 (-0.126871) | 24.151348 / 8.074308 (16.077040) | 21.993604 / 10.191392 (11.802212) | 0.234883 / 0.680424 (-0.445541) | 0.028182 / 0.534201 (-0.506019) | 0.474050 / 0.579283 (-0.105233) | 0.557012 / 0.434364 (0.122648) | 0.537823 / 0.540337 (-0.002514) | 0.741488 / 1.386936 (-0.645448) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c2e5a7a01a952a17d0424e93c3be2b4a5ffca7da \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007640 / 0.011353 (-0.003713) | 0.004776 / 0.011008 (-0.006232) | 0.101582 / 0.038508 (0.063074) | 0.085113 / 0.023109 (0.062003) | 0.376000 / 0.275898 (0.100102) | 0.421117 / 0.323480 (0.097637) | 0.006095 / 0.007986 (-0.001891) | 0.003884 / 0.004328 (-0.000445) | 0.077263 / 0.004250 (0.073013) | 0.065262 / 0.037052 (0.028210) | 0.384041 / 0.258489 (0.125552) | 0.442229 / 0.293841 (0.148388) | 0.035706 / 0.128546 (-0.092840) | 0.009996 / 0.075646 (-0.065651) | 0.344925 / 0.419271 (-0.074346) | 0.062358 / 0.043533 (0.018825) | 0.371738 / 0.255139 (0.116599) | 0.407093 / 0.283200 (0.123894) | 0.026996 / 0.141683 (-0.114687) | 1.762705 / 1.452155 (0.310550) | 1.846777 / 1.492716 (0.354061) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219660 / 0.018006 (0.201653) | 0.521795 / 0.000490 (0.521305) | 0.005344 / 0.000200 (0.005145) | 0.000098 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036027 / 0.037411 (-0.001385) | 0.100309 / 0.014526 (0.085784) | 0.113041 / 0.176557 (-0.063515) | 0.190037 / 0.737135 (-0.547099) | 0.114552 / 0.296338 (-0.181786) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.466364 / 0.215209 (0.251154) | 4.638745 / 2.077655 (2.561090) | 2.317875 / 1.504120 (0.813755) | 2.099241 / 1.541195 (0.558046) | 2.149827 / 1.468490 (0.681337) | 0.578913 / 4.584777 (-4.005864) | 4.281866 / 3.745712 (0.536154) | 3.778453 / 5.269862 (-1.491408) | 2.411704 / 4.565676 (-2.153972) | 0.068556 / 0.424275 (-0.355719) | 0.008779 / 0.007607 (0.001172) | 0.553165 / 0.226044 (0.327121) | 5.524520 / 2.268929 (3.255591) | 2.848444 / 55.444624 (-52.596181) | 2.468591 / 6.876477 (-4.407885) | 2.652117 / 2.142072 (0.510045) | 0.694124 / 4.805227 (-4.111103) | 0.157087 / 6.500664 (-6.343577) | 0.070706 / 0.075469 (-0.004763) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.492031 / 1.841788 (-0.349757) | 23.086596 / 8.074308 (15.012288) | 16.791351 / 10.191392 (6.599959) | 0.203932 / 0.680424 (-0.476492) | 0.021736 / 0.534201 (-0.512464) | 0.468344 / 0.579283 (-0.110939) | 0.493790 / 0.434364 (0.059426) | 
0.563226 / 0.540337 (0.022889) | 0.780384 / 1.386936 (-0.606553) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007980 / 0.011353 (-0.003373) | 0.004696 / 0.011008 (-0.006312) | 0.076712 / 0.038508 (0.038204) | 0.095915 / 0.023109 (0.072805) | 0.433615 / 0.275898 (0.157717) | 0.482477 / 0.323480 (0.158997) | 0.007029 / 0.007986 (-0.000957) | 0.003842 / 0.004328 (-0.000487) | 0.076331 / 0.004250 (0.072081) | 0.069755 / 0.037052 (0.032703) | 0.458914 / 0.258489 (0.200425) | 0.486155 / 0.293841 (0.192314) | 0.036966 / 0.128546 (-0.091580) | 0.010082 / 0.075646 (-0.065564) | 0.083886 / 0.419271 (-0.335385) | 0.059329 / 0.043533 (0.015796) | 0.453782 / 0.255139 (0.198643) | 0.459508 / 0.283200 (0.176308) | 0.028400 / 0.141683 (-0.113283) | 1.796406 / 1.452155 (0.344251) | 1.881161 / 1.492716 (0.388445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235053 / 0.018006 (0.217047) | 0.501907 / 0.000490 (0.501417) | 0.005211 / 0.000200 (0.005011) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037752 / 0.037411 (0.000341) | 0.107299 / 0.014526 (0.092773) | 0.120307 / 0.176557 (-0.056250) | 0.187542 / 0.737135 (-0.549593) | 0.121805 / 0.296338 (-0.174533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.490039 / 0.215209 (0.274830) | 4.919169 / 2.077655 (2.841515) | 2.520610 / 1.504120 (1.016490) | 2.324473 / 1.541195 
(0.783279) | 2.421195 / 1.468490 (0.952705) | 0.576314 / 4.584777 (-4.008463) | 4.304752 / 3.745712 (0.559040) | 3.881151 / 5.269862 (-1.388710) | 2.409777 / 4.565676 (-2.155900) | 0.067400 / 0.424275 (-0.356875) | 0.009235 / 0.007607 (0.001627) | 0.586601 / 0.226044 (0.360556) | 5.850080 / 2.268929 (3.581152) | 3.064859 / 55.444624 (-52.379766) | 2.701734 / 6.876477 (-4.174743) | 2.926190 / 2.142072 (0.784117) | 0.698511 / 4.805227 (-4.106716) | 0.158273 / 6.500664 (-6.342392) | 0.074530 / 0.075469 (-0.000939) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.607113 / 1.841788 (-0.234674) | 23.499279 / 8.074308 (15.424971) | 17.049509 / 10.191392 (6.858117) | 0.175689 / 0.680424 (-0.504735) | 0.021762 / 0.534201 (-0.512439) | 0.491450 / 0.579283 (-0.087833) | 0.487557 / 0.434364 (0.053193) | 0.570104 / 0.540337 (0.029766) | 0.761527 / 1.386936 (-0.625409) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7096c59e6a8f4d5b16f3b906075f9e2ed83bbb25 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008725 / 0.011353 (-0.002628) | 0.005156 / 0.011008 (-0.005852) | 0.095147 / 0.038508 (0.056639) | 0.084916 / 0.023109 (0.061807) | 0.390769 / 0.275898 (0.114871) | 0.434716 / 0.323480 (0.111237) | 0.005982 / 0.007986 (-0.002004) | 0.004323 / 0.004328 (-0.000006) | 0.074712 / 0.004250 (0.070461) | 0.058889 / 0.037052 (0.021837) | 0.403997 / 0.258489 (0.145508) | 0.443361 / 0.293841 (0.149520) | 0.045908 / 0.128546 (-0.082639) | 0.013562 / 0.075646 (-0.062085) | 0.330683 / 0.419271 (-0.088588) | 0.064821 / 0.043533 (0.021288) | 0.407202 / 0.255139 (0.152063) | 0.409930 / 0.283200 (0.126730) | 0.032693 / 0.141683 (-0.108990) | 1.630181 / 1.452155 (0.178026) | 1.729680 / 1.492716 (0.236963) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261240 / 0.018006 (0.243234) | 0.581850 / 0.000490 (0.581360) 
| 0.002997 / 0.000200 (0.002797) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029279 / 0.037411 (-0.008133) | 0.085004 / 0.014526 (0.070478) | 0.127782 / 0.176557 (-0.048774) | 0.168852 / 0.737135 (-0.568283) | 0.098697 / 0.296338 (-0.197641) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.546417 / 0.215209 (0.331208) | 5.602186 / 2.077655 (3.524531) | 2.597049 / 1.504120 (1.092930) | 2.384880 / 1.541195 (0.843685) | 2.444516 / 1.468490 (0.976026) | 0.796562 / 4.584777 (-3.788214) | 5.239440 / 3.745712 (1.493727) | 7.087768 / 5.269862 (1.817906) | 4.308476 / 4.565676 (-0.257200) | 0.091215 / 0.424275 (-0.333060) | 0.007942 / 0.007607 (0.000335) | 0.690059 / 0.226044 (0.464015) | 6.727809 / 2.268929 (4.458880) | 3.294522 / 55.444624 (-52.150103) | 2.604088 / 6.876477 (-4.272389) | 2.786970 / 2.142072 (0.644898) | 0.918817 / 4.805227 (-3.886410) | 0.191451 / 6.500664 (-6.309213) | 0.069557 / 0.075469 (-0.005912) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.486377 / 1.841788 (-0.355411) | 22.363470 / 8.074308 (14.289162) | 19.963684 / 10.191392 (9.772292) | 0.204161 / 0.680424 (-0.476263) | 0.034570 / 0.534201 (-0.499631) | 0.467937 / 0.579283 (-0.111346) | 0.564870 / 0.434364 (0.130506) | 0.511133 / 0.540337 (-0.029204) | 0.777084 / 1.386936 (-0.609852) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008612 / 0.011353 (-0.002741) | 0.004993 / 0.011008 (-0.006015) | 0.080769 / 0.038508 (0.042261) | 0.075923 / 0.023109 (0.052814) | 0.442271 / 0.275898 (0.166373) | 0.495625 / 0.323480 (0.172146) | 0.006467 / 0.007986 (-0.001518) | 0.004001 / 0.004328 (-0.000328) | 0.077309 / 0.004250 (0.073059) | 0.063466 / 0.037052 (0.026414) | 0.452460 / 0.258489 (0.193971) | 0.494063 / 0.293841 (0.200223) | 0.045751 / 0.128546 (-0.082796) | 0.013402 / 0.075646 (-0.062245) | 0.085760 / 0.419271 (-0.333511) | 0.056532 / 0.043533 (0.012999) | 0.440596 / 0.255139 (0.185457) | 0.459540 / 0.283200 (0.176340) | 0.035897 / 0.141683 (-0.105786) | 1.728264 / 1.452155 (0.276109) | 1.808142 / 1.492716 (0.315426) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285094 / 0.018006 (0.267088) | 0.598440 / 0.000490 (0.597950) | 0.003476 / 0.000200 (0.003276) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035106 / 0.037411 (-0.002305) | 0.091724 / 0.014526 (0.077198) | 0.122803 / 0.176557 (-0.053754) | 0.182114 / 0.737135 (-0.555022) | 0.116196 / 0.296338 (-0.180143) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.585420 / 0.215209 (0.370211) | 5.790370 / 2.077655 (3.712715) | 2.833247 / 1.504120 (1.329127) | 2.627949 / 1.541195 (1.086755) | 2.643050 / 1.468490 (1.174560) | 0.792036 / 4.584777 (-3.792741) | 5.145084 / 3.745712 (1.399372) | 4.423679 / 5.269862 (-0.846182) | 2.802778 / 4.565676 (-1.762898) | 0.093983 / 0.424275 (-0.330292) | 0.009260 / 0.007607 (0.001652) | 0.720302 / 0.226044 (0.494258) | 7.116959 / 2.268929 (4.848031) | 3.574782 / 55.444624 (-51.869843) | 3.009330 / 6.876477 (-3.867147) | 3.126488 / 2.142072 (0.984415) | 0.949144 / 4.805227 (-3.856083) | 0.195143 / 6.500664 (-6.305521) | 0.072490 / 0.075469 (-0.002979) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.626368 / 1.841788 (-0.215419) | 23.683021 / 8.074308 (15.608713) | 20.085297 / 10.191392 (9.893905) | 0.267057 / 0.680424 (-0.413367) | 0.028306 / 0.534201 (-0.505894) | 0.478448 / 0.579283 (-0.100835) | 0.597619 / 0.434364 (0.163256) | 0.544737 / 0.540337 (0.004399) | 0.761805 / 1.386936 (-0.625131) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7ac53b590916c8d859fabcc2ef23c12add7f22f7 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009359 / 0.011353 (-0.001994) | 0.004848 / 0.011008 (-0.006160) | 0.099471 / 0.038508 (0.060963) | 0.079483 / 0.023109 (0.056373) | 0.375281 / 0.275898 (0.099383) | 0.415566 / 0.323480 (0.092086) | 0.006317 / 0.007986 (-0.001669) | 0.005145 / 0.004328 (0.000817) | 0.080345 / 0.004250 (0.076094) | 0.064540 / 0.037052 (0.027487) | 0.385897 / 0.258489 (0.127408) | 0.432576 / 0.293841 (0.138735) | 0.055109 / 0.128546 (-0.073437) | 0.014166 / 0.075646 (-0.061480) | 0.350870 / 0.419271 (-0.068402) | 0.087483 / 0.043533 (0.043950) | 0.402288 / 0.255139 (0.147149) | 0.391997 / 0.283200 (0.108798) | 0.045233 / 0.141683 (-0.096450) | 1.795002 / 1.452155 (0.342847) | 1.839063 / 1.492716 (0.346347) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220851 / 0.018006 (0.202845) | 0.513391 / 0.000490 (0.512901) | 0.003740 / 0.000200 (0.003540) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035287 / 0.037411 (-0.002124) | 0.090670 / 0.014526 (0.076144) | 0.115651 / 0.176557 (-0.060905) | 0.180469 / 0.737135 (-0.556667) | 0.106955 / 0.296338 (-0.189384) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.632381 / 0.215209 
(0.417172) | 6.185151 / 2.077655 (4.107497) | 2.548263 / 1.504120 (1.044143) | 2.194931 / 1.541195 (0.653737) | 2.368685 / 1.468490 (0.900194) | 0.956467 / 4.584777 (-3.628310) | 5.280904 / 3.745712 (1.535192) | 4.783057 / 5.269862 (-0.486805) | 3.218493 / 4.565676 (-1.347184) | 0.103545 / 0.424275 (-0.320730) | 0.008424 / 0.007607 (0.000817) | 0.736303 / 0.226044 (0.510259) | 7.354305 / 2.268929 (5.085376) | 3.280670 / 55.444624 (-52.163954) | 2.478628 / 6.876477 (-4.397848) | 2.623290 / 2.142072 (0.481217) | 1.033064 / 4.805227 (-3.772163) | 0.206496 / 6.500664 (-6.294168) | 0.066449 / 0.075469 (-0.009020) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.508756 / 1.841788 (-0.333031) | 21.866012 / 8.074308 (13.791704) | 21.887761 / 10.191392 (11.696369) | 0.231415 / 0.680424 (-0.449008) | 0.028917 / 0.534201 (-0.505284) | 0.468761 / 0.579283 (-0.110522) | 0.568236 / 0.434364 (0.133872) | 0.550156 / 0.540337 (0.009818) | 0.783197 / 1.386936 (-0.603739) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009413 / 0.011353 (-0.001939) | 0.004951 / 0.011008 (-0.006058) | 0.071402 / 0.038508 (0.032893) | 0.068455 / 0.023109 (0.045346) | 0.425216 / 0.275898 (0.149318) | 0.431928 / 0.323480 (0.108448) | 0.006477 / 0.007986 (-0.001509) | 0.003891 / 0.004328 (-0.000437) | 0.076898 / 0.004250 (0.072647) | 0.057522 / 0.037052 (0.020470) | 0.449585 / 0.258489 (0.191096) | 0.431356 / 0.293841 (0.137515) | 0.049728 / 0.128546 (-0.078818) | 0.014456 / 0.075646 (-0.061190) | 0.084618 / 0.419271 (-0.334653) | 0.064482 / 0.043533 (0.020949) | 0.456377 / 0.255139 (0.201238) | 0.433949 / 0.283200 (0.150749) | 0.036577 / 0.141683 (-0.105106) | 1.819742 / 1.452155 (0.367588) | 1.694691 / 1.492716 (0.201975) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224610 / 0.018006 (0.206604) | 0.494586 / 0.000490 (0.494096) | 0.004506 / 0.000200 (0.004307) | 0.000119 / 0.000054 
(0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033172 / 0.037411 (-0.004239) | 0.100562 / 0.014526 (0.086036) | 0.116499 / 0.176557 (-0.060058) | 0.153717 / 0.737135 (-0.583418) | 0.140047 / 0.296338 (-0.156291) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.635922 / 0.215209 (0.420713) | 6.359792 / 2.077655 (4.282137) | 2.689083 / 1.504120 (1.184963) | 2.330574 / 1.541195 (0.789380) | 2.583535 / 1.468490 (1.115044) | 0.902737 / 4.584777 (-3.682040) | 5.136586 / 3.745712 (1.390874) | 4.570824 / 5.269862 (-0.699037) | 3.029953 / 4.565676 (-1.535724) | 0.103961 / 0.424275 (-0.320314) | 0.007908 / 0.007607 (0.000301) | 0.723290 / 0.226044 (0.497246) | 7.678599 / 2.268929 (5.409671) | 3.342522 / 55.444624 (-52.102102) | 2.774659 / 6.876477 (-4.101817) | 2.966496 / 2.142072 (0.824423) | 1.025395 / 4.805227 (-3.779832) | 0.222246 / 6.500664 (-6.278418) | 0.072455 / 0.075469 (-0.003014) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.603637 / 1.841788 (-0.238151) | 21.387722 / 8.074308 (13.313414) | 22.855221 / 10.191392 (12.663829) | 0.222147 / 0.680424 (-0.458277) | 0.030763 / 0.534201 (-0.503438) | 0.472586 / 0.579283 (-0.106697) | 0.560161 / 0.434364 (0.125797) | 0.551941 / 0.540337 (0.011604) | 0.711254 / 1.386936 (-0.675682) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#85cf123e553ff282b43ad1d1877ba2c40d206d52 \"CML watermark\")\n" ]
2023-07-17T15:50:15
2023-07-24T14:45:56
2023-07-24T14:35:03
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6045", "html_url": "https://github.com/huggingface/datasets/pull/6045", "diff_url": "https://github.com/huggingface/datasets/pull/6045.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6045.patch", "merged_at": "2023-07-24T14:35:03" }
Fix #6039
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6045/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6045/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6044
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6044/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6044/comments
https://api.github.com/repos/huggingface/datasets/issues/6044/events
https://github.com/huggingface/datasets/pull/6044
1,808,057,906
PR_kwDODunzps5Vr7jr
6,044
Rename "pattern" to "path" in YAML data_files configs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006543 / 0.011353 (-0.004809) | 0.004085 / 0.011008 (-0.006924) | 0.083989 / 0.038508 (0.045481) | 0.074733 / 0.023109 (0.051623) | 0.310839 / 0.275898 (0.034941) | 0.333540 / 0.323480 (0.010060) | 0.005566 / 0.007986 (-0.002419) | 0.003461 / 0.004328 (-0.000868) | 0.065194 / 0.004250 (0.060943) | 0.057007 / 0.037052 (0.019954) | 0.325633 / 0.258489 (0.067144) | 0.351665 / 0.293841 (0.057824) | 0.030561 / 0.128546 (-0.097985) | 0.008579 / 0.075646 (-0.067068) | 0.287457 / 0.419271 (-0.131815) | 0.063554 / 0.043533 (0.020021) | 0.309182 / 0.255139 (0.054043) | 0.327809 / 0.283200 (0.044609) | 0.034470 / 0.141683 (-0.107213) | 1.452098 / 1.452155 (-0.000057) | 1.527130 / 1.492716 (0.034414) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241736 / 0.018006 (0.223729) | 0.552432 / 0.000490 (0.551943) | 0.004085 / 0.000200 (0.003885) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027290 / 0.037411 (-0.010121) | 0.081250 / 0.014526 (0.066724) | 0.094739 / 0.176557 (-0.081818) | 0.150424 / 0.737135 (-0.586711) | 0.095488 / 0.296338 (-0.200851) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.377245 / 0.215209 (0.162036) | 3.781021 / 2.077655 (1.703366) | 
1.820092 / 1.504120 (0.315972) | 1.654420 / 1.541195 (0.113225) | 1.751256 / 1.468490 (0.282766) | 0.475161 / 4.584777 (-4.109616) | 3.603462 / 3.745712 (-0.142251) | 5.437837 / 5.269862 (0.167975) | 3.305598 / 4.565676 (-1.260079) | 0.055856 / 0.424275 (-0.368419) | 0.007259 / 0.007607 (-0.000348) | 0.454205 / 0.226044 (0.228161) | 4.544157 / 2.268929 (2.275229) | 2.296776 / 55.444624 (-53.147848) | 1.951017 / 6.876477 (-4.925459) | 2.128759 / 2.142072 (-0.013313) | 0.590354 / 4.805227 (-4.214873) | 0.129974 / 6.500664 (-6.370690) | 0.059506 / 0.075469 (-0.015963) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285866 / 1.841788 (-0.555921) | 19.419446 / 8.074308 (11.345138) | 13.985108 / 10.191392 (3.793716) | 0.146803 / 0.680424 (-0.533620) | 0.018176 / 0.534201 (-0.516025) | 0.392345 / 0.579283 (-0.186938) | 0.405394 / 0.434364 (-0.028970) | 0.454649 / 0.540337 (-0.085688) | 0.633075 / 1.386936 (-0.753861) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006497 / 0.011353 (-0.004855) | 0.004092 / 0.011008 (-0.006916) | 0.064908 / 0.038508 (0.026400) | 0.073494 / 0.023109 (0.050385) | 0.382227 / 0.275898 (0.106329) | 0.407320 / 0.323480 (0.083840) | 0.005653 / 0.007986 (-0.002332) | 0.003500 / 0.004328 (-0.000829) | 0.064570 / 0.004250 (0.060320) | 0.058733 / 0.037052 (0.021681) | 0.385702 / 0.258489 (0.127213) | 0.426463 / 0.293841 (0.132622) | 0.031073 / 0.128546 (-0.097473) | 0.008710 / 0.075646 (-0.066936) | 0.071378 / 0.419271 (-0.347893) | 0.050141 / 0.043533 (0.006608) | 0.377769 / 0.255139 (0.122630) | 0.395016 / 0.283200 (0.111816) | 0.025158 / 0.141683 (-0.116525) | 1.470503 / 1.452155 (0.018348) | 1.532742 / 1.492716 (0.040026) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214249 / 0.018006 (0.196243) | 0.583580 / 0.000490 (0.583090) | 0.004027 / 0.000200 (0.003828) | 0.000104 / 0.000054 (0.000050) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030186 / 0.037411 (-0.007226) | 0.086927 / 0.014526 (0.072401) | 0.102060 / 0.176557 (-0.074497) | 0.156281 / 0.737135 (-0.580855) | 0.100825 / 0.296338 (-0.195514) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419942 / 0.215209 (0.204733) | 4.183797 / 2.077655 (2.106142) | 2.205079 / 1.504120 (0.700959) | 2.071219 / 1.541195 (0.530024) | 2.194047 / 1.468490 (0.725557) | 0.478768 / 4.584777 (-4.106009) | 3.584864 / 3.745712 (-0.160848) | 3.371635 / 5.269862 (-1.898227) | 2.022134 / 4.565676 (-2.543542) | 0.056553 / 0.424275 (-0.367722) | 0.007231 / 0.007607 (-0.000376) | 0.493158 / 0.226044 (0.267113) | 4.934370 / 2.268929 (2.665441) | 2.699593 / 55.444624 (-52.745031) | 2.396371 / 6.876477 (-4.480105) | 2.438052 / 2.142072 (0.295979) | 0.589578 / 4.805227 (-4.215649) | 0.147234 / 6.500664 (-6.353430) | 0.062049 / 0.075469 (-0.013420) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.318246 / 1.841788 (-0.523542) | 19.829025 / 8.074308 (11.754717) | 14.314825 / 10.191392 (4.123433) | 0.168309 / 0.680424 (-0.512115) | 0.018596 / 0.534201 (-0.515605) | 0.397540 / 0.579283 (-0.181743) | 0.421280 / 0.434364 (-0.013084) | 0.479917 / 0.540337 (-0.060421) | 0.643494 / 1.386936 (-0.743442) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5be59becaa65f1fa08129091b8c778823e4a50ac \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008349 / 0.011353 (-0.003004) | 0.005362 / 0.011008 (-0.005646) | 0.100777 / 0.038508 (0.062269) | 0.078719 / 0.023109 (0.055609) | 0.398105 / 0.275898 (0.122207) | 0.444189 / 0.323480 (0.120709) | 0.006834 / 0.007986 (-0.001152) | 0.004642 / 0.004328 (0.000314) | 0.076284 / 0.004250 (0.072034) | 0.062738 / 0.037052 (0.025685) | 0.409532 / 0.258489 (0.151043) | 0.447218 / 0.293841 (0.153377) | 0.052996 / 0.128546 (-0.075550) | 0.012977 / 0.075646 (-0.062669) | 0.347687 / 0.419271 (-0.071585) | 0.068076 / 0.043533 (0.024543) | 0.394526 / 0.255139 (0.139387) | 0.434110 / 0.283200 (0.150910) | 0.041719 / 0.141683 (-0.099963) | 1.759109 / 1.452155 (0.306955) | 1.866049 / 1.492716 (0.373333) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287633 / 0.018006 (0.269627) | 0.611540 / 0.000490 (0.611051) | 0.005388 / 0.000200 (0.005188) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027394 / 0.037411 (-0.010017) | 0.089796 / 0.014526 (0.075270) | 0.106931 / 0.176557 (-0.069625) | 0.173560 / 0.737135 (-0.563575) | 0.106948 / 0.296338 (-0.189391) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.575156 / 0.215209 (0.359947) | 5.674170 / 2.077655 (3.596516) | 2.463090 / 1.504120 (0.958971) | 2.128245 / 1.541195 (0.587050) | 2.118982 / 1.468490 (0.650492) | 0.876976 / 4.584777 (-3.707801) | 5.238229 / 3.745712 (1.492517) | 4.548788 / 5.269862 (-0.721074) | 2.905243 / 4.565676 (-1.660433) | 0.090750 / 0.424275 (-0.333525) | 0.008266 / 0.007607 (0.000659) | 0.693305 / 0.226044 (0.467260) | 7.126970 / 2.268929 (4.858041) | 3.152131 / 55.444624 (-52.292494) | 2.532118 / 6.876477 (-4.344359) | 2.678442 / 2.142072 (0.536369) | 0.932745 / 4.805227 (-3.872483) | 0.196290 / 6.500664 (-6.304374) | 0.074082 / 0.075469 (-0.001387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.599636 / 1.841788 (-0.242152) | 23.271435 / 8.074308 (15.197127) | 19.696709 / 10.191392 (9.505317) | 0.222668 / 0.680424 (-0.457756) | 0.029088 / 0.534201 (-0.505113) | 0.492477 / 0.579283 (-0.086806) | 0.580578 / 0.434364 (0.146214) | 0.558852 / 0.540337 
(0.018514) | 0.762083 / 1.386936 (-0.624853) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009021 / 0.011353 (-0.002332) | 0.005011 / 0.011008 (-0.005997) | 0.076504 / 0.038508 (0.037996) | 0.077303 / 0.023109 (0.054193) | 0.480660 / 0.275898 (0.204762) | 0.493944 / 0.323480 (0.170464) | 0.006339 / 0.007986 (-0.001646) | 0.004302 / 0.004328 (-0.000026) | 0.076228 / 0.004250 (0.071978) | 0.060805 / 0.037052 (0.023753) | 0.477539 / 0.258489 (0.219050) | 0.496799 / 0.293841 (0.202958) | 0.049495 / 0.128546 (-0.079052) | 0.013333 / 0.075646 (-0.062313) | 0.087217 / 0.419271 (-0.332055) | 0.061451 / 0.043533 (0.017918) | 0.485169 / 0.255139 (0.230030) | 0.487348 / 0.283200 (0.204149) | 0.035874 / 0.141683 (-0.105809) | 1.829137 / 1.452155 (0.376982) | 1.906151 / 1.492716 (0.413435) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.304526 / 0.018006 (0.286520) | 0.627499 / 0.000490 (0.627009) | 0.003786 / 0.000200 (0.003586) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035512 / 0.037411 (-0.001899) | 0.096684 / 0.014526 (0.082158) | 0.111879 / 0.176557 (-0.064678) | 0.171489 / 0.737135 (-0.565647) | 0.112175 / 0.296338 (-0.184164) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.604791 / 0.215209 (0.389582) | 6.089137 / 2.077655 (4.011482) | 2.883237 / 1.504120 (1.379117) | 2.561109 / 1.541195 (1.019914) | 2.542400 / 
1.468490 (1.073910) | 0.852828 / 4.584777 (-3.731949) | 5.236812 / 3.745712 (1.491100) | 4.756429 / 5.269862 (-0.513432) | 2.885660 / 4.565676 (-1.680016) | 0.095643 / 0.424275 (-0.328632) | 0.008403 / 0.007607 (0.000796) | 0.727707 / 0.226044 (0.501663) | 7.428002 / 2.268929 (5.159074) | 3.816051 / 55.444624 (-51.628573) | 2.971057 / 6.876477 (-3.905420) | 2.915965 / 2.142072 (0.773893) | 1.006553 / 4.805227 (-3.798674) | 0.201840 / 6.500664 (-6.298824) | 0.080795 / 0.075469 (0.005326) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.794951 / 1.841788 (-0.046837) | 23.624556 / 8.074308 (15.550248) | 21.856195 / 10.191392 (11.664802) | 0.253043 / 0.680424 (-0.427381) | 0.031201 / 0.534201 (-0.503000) | 0.461641 / 0.579283 (-0.117642) | 0.577789 / 0.434364 (0.143425) | 0.569197 / 0.540337 (0.028860) | 0.780111 / 1.386936 (-0.606825) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4904f14459c862f0ab525ec034a636177be5dee4 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007646 / 0.011353 (-0.003707) | 0.004750 / 0.011008 (-0.006258) | 0.097981 / 0.038508 (0.059473) | 0.088989 / 0.023109 (0.065880) | 0.377732 / 0.275898 (0.101834) | 0.406805 / 0.323480 (0.083325) | 0.006389 / 0.007986 (-0.001597) | 0.003854 / 0.004328 (-0.000474) | 0.073977 / 0.004250 (0.069727) | 0.066497 / 0.037052 (0.029444) | 0.371498 / 0.258489 (0.113009) | 0.417352 / 0.293841 (0.123511) | 0.036326 / 0.128546 (-0.092220) | 0.009876 / 0.075646 (-0.065770) | 0.330142 / 0.419271 (-0.089130) | 0.062423 / 0.043533 (0.018890) | 0.369375 / 0.255139 (0.114236) | 0.406048 / 0.283200 (0.122848) | 0.026564 / 0.141683 (-0.115119) | 1.713295 / 1.452155 (0.261140) | 1.797493 / 1.492716 (0.304777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231889 / 0.018006 (0.213882) | 0.512497 / 0.000490 (0.512007) | 0.000390 / 0.000200 
(0.000190) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033978 / 0.037411 (-0.003433) | 0.100117 / 0.014526 (0.085592) | 0.112460 / 0.176557 (-0.064097) | 0.179936 / 0.737135 (-0.557200) | 0.114277 / 0.296338 (-0.182061) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.461320 / 0.215209 (0.246111) | 4.563180 / 2.077655 (2.485526) | 2.249474 / 1.504120 (0.745354) | 2.100450 / 1.541195 (0.559255) | 2.231080 / 1.468490 (0.762590) | 0.567907 / 4.584777 (-4.016870) | 4.117233 / 3.745712 (0.371521) | 4.943159 / 5.269862 (-0.326703) | 3.112299 / 4.565676 (-1.453377) | 0.065500 / 0.424275 (-0.358775) | 0.008407 / 0.007607 (0.000800) | 0.545928 / 0.226044 (0.319883) | 5.508058 / 2.268929 (3.239129) | 2.834645 / 55.444624 (-52.609980) | 2.440328 / 6.876477 (-4.436148) | 2.680483 / 2.142072 (0.538410) | 0.697191 / 4.805227 (-4.108036) | 0.176646 / 6.500664 (-6.324018) | 0.073608 / 0.075469 (-0.001861) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.451865 / 1.841788 (-0.389922) | 22.752595 / 8.074308 (14.678287) | 15.543338 / 10.191392 (5.351946) | 0.214644 / 0.680424 (-0.465780) | 0.022050 / 0.534201 (-0.512151) | 0.463898 / 0.579283 (-0.115385) | 0.481691 / 0.434364 (0.047327) | 0.549715 / 0.540337 (0.009378) | 0.773595 / 1.386936 (-0.613341) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007541 / 0.011353 (-0.003812) | 0.004715 / 0.011008 (-0.006293) | 0.076782 / 0.038508 (0.038274) | 0.086242 / 0.023109 (0.063133) | 0.458053 / 0.275898 (0.182155) | 0.503097 / 0.323480 (0.179617) | 0.006262 / 0.007986 (-0.001724) | 0.003882 / 0.004328 (-0.000447) | 0.075669 / 0.004250 (0.071419) | 0.066004 / 0.037052 (0.028952) | 0.469439 / 0.258489 (0.210950) | 0.529744 / 0.293841 (0.235903) | 0.037228 / 0.128546 (-0.091319) | 0.009794 / 0.075646 (-0.065852) | 0.082464 / 0.419271 (-0.336808) | 0.058797 / 0.043533 (0.015264) | 0.452069 / 0.255139 (0.196930) | 0.488246 / 0.283200 (0.205046) | 0.029324 / 0.141683 (-0.112359) | 1.742237 / 1.452155 (0.290082) | 1.839676 / 1.492716 (0.346959) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228106 / 0.018006 (0.210100) | 0.491632 / 0.000490 (0.491142) | 0.004993 / 0.000200 (0.004793) | 0.000114 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035413 / 0.037411 (-0.001999) | 0.104617 / 0.014526 (0.090091) | 0.121948 / 0.176557 (-0.054609) | 0.186233 / 0.737135 (-0.550902) | 0.121574 / 0.296338 (-0.174764) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.473849 / 0.215209 (0.258640) | 4.788312 / 2.077655 (2.710657) | 2.470535 / 1.504120 (0.966415) | 2.270393 / 1.541195 (0.729198) | 2.361096 / 1.468490 (0.892606) | 0.556184 / 4.584777 (-4.028593) | 4.216852 / 3.745712 (0.471140) | 3.901718 / 5.269862 (-1.368143) | 2.355209 / 4.565676 (-2.210467) | 0.066708 / 0.424275 (-0.357567) | 0.008709 / 0.007607 (0.001102) | 0.571714 / 0.226044 (0.345669) | 5.663150 / 2.268929 (3.394221) | 3.025769 / 55.444624 (-52.418855) | 2.652554 / 6.876477 (-4.223923) | 2.750555 / 2.142072 (0.608483) | 0.681536 / 4.805227 (-4.123691) | 0.157187 / 6.500664 (-6.343477) | 0.073533 / 0.075469 (-0.001936) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.604630 / 1.841788 (-0.237158) | 22.735629 / 8.074308 (14.661321) | 16.762347 / 10.191392 (6.570955) | 0.175514 / 0.680424 (-0.504910) | 0.021497 / 0.534201 (-0.512704) | 0.461438 / 0.579283 (-0.117845) | 0.476184 / 0.434364 (0.041820) | 0.571048 / 0.540337 (0.030710) | 0.747086 / 1.386936 (-0.639850) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6ea38fc40ee2b10d3b5c6df09b09ad05e02a2cff \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006889 / 0.011353 (-0.004464) | 0.004241 / 0.011008 (-0.006767) | 0.084542 / 0.038508 (0.046034) | 0.080484 / 0.023109 (0.057374) | 0.309356 / 0.275898 (0.033458) | 0.338548 / 0.323480 (0.015068) | 0.004904 / 0.007986 (-0.003082) | 0.005220 / 0.004328 (0.000892) | 0.065501 / 0.004250 (0.061251) | 0.062095 / 0.037052 (0.025043) | 0.317332 / 0.258489 (0.058843) | 0.364797 / 0.293841 (0.070956) | 0.030492 / 0.128546 (-0.098054) | 0.008991 / 0.075646 (-0.066656) | 0.288274 / 0.419271 (-0.130998) | 0.052582 / 0.043533 (0.009049) | 0.310838 / 0.255139 (0.055699) | 0.346304 / 0.283200 (0.063104) | 0.027968 / 0.141683 (-0.113715) | 1.509727 / 1.452155 (0.057573) | 1.577410 / 1.492716 (0.084694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269725 / 0.018006 (0.251719) | 0.627685 / 0.000490 (0.627195) | 0.000419 / 0.000200 (0.000219) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031022 / 0.037411 (-0.006389) | 0.081858 / 0.014526 (0.067332) | 0.099477 / 0.176557 (-0.077080) | 0.162981 / 0.737135 (-0.574154) | 0.101987 / 0.296338 (-0.194351) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386297 / 0.215209 
(0.171088) | 3.845321 / 2.077655 (1.767666) | 1.834446 / 1.504120 (0.330326) | 1.699730 / 1.541195 (0.158536) | 1.764342 / 1.468490 (0.295852) | 0.486423 / 4.584777 (-4.098354) | 3.527595 / 3.745712 (-0.218117) | 4.137034 / 5.269862 (-1.132827) | 2.590457 / 4.565676 (-1.975219) | 0.057598 / 0.424275 (-0.366677) | 0.007318 / 0.007607 (-0.000289) | 0.460775 / 0.226044 (0.234730) | 4.627576 / 2.268929 (2.358647) | 2.402566 / 55.444624 (-53.042059) | 2.011392 / 6.876477 (-4.865085) | 2.223915 / 2.142072 (0.081842) | 0.623217 / 4.805227 (-4.182011) | 0.148875 / 6.500664 (-6.351789) | 0.059799 / 0.075469 (-0.015671) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290768 / 1.841788 (-0.551020) | 20.455083 / 8.074308 (12.380775) | 13.469846 / 10.191392 (3.278454) | 0.170329 / 0.680424 (-0.510095) | 0.018409 / 0.534201 (-0.515792) | 0.394356 / 0.579283 (-0.184927) | 0.422685 / 0.434364 (-0.011679) | 0.476241 / 0.540337 (-0.064096) | 0.662682 / 1.386936 (-0.724254) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006724 / 0.011353 (-0.004629) | 0.004508 / 0.011008 (-0.006500) | 0.065304 / 0.038508 (0.026796) | 0.080243 / 0.023109 (0.057133) | 0.384545 / 0.275898 (0.108647) | 0.415234 / 0.323480 (0.091754) | 0.006361 / 0.007986 (-0.001624) | 0.004193 / 0.004328 (-0.000135) | 0.065940 / 0.004250 (0.061689) | 0.063633 / 0.037052 (0.026581) | 0.392799 / 0.258489 (0.134310) | 0.443618 / 0.293841 (0.149777) | 0.031134 / 0.128546 (-0.097412) | 0.009058 / 0.075646 (-0.066588) | 0.071051 / 0.419271 (-0.348221) | 0.049096 / 0.043533 (0.005563) | 0.379526 / 0.255139 (0.124387) | 0.403370 / 0.283200 (0.120171) | 0.026378 / 0.141683 (-0.115305) | 1.457879 / 1.452155 (0.005724) | 1.562890 / 1.492716 (0.070174) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.304416 / 0.018006 (0.286410) | 0.626046 / 0.000490 (0.625557) | 0.000469 / 0.000200 (0.000269) | 0.000057 / 0.000054 
(0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032979 / 0.037411 (-0.004433) | 0.086769 / 0.014526 (0.072243) | 0.108188 / 0.176557 (-0.068369) | 0.163077 / 0.737135 (-0.574058) | 0.106276 / 0.296338 (-0.190062) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406922 / 0.215209 (0.191713) | 4.052828 / 2.077655 (1.975174) | 2.084802 / 1.504120 (0.580682) | 1.927263 / 1.541195 (0.386069) | 1.956078 / 1.468490 (0.487587) | 0.480110 / 4.584777 (-4.104667) | 3.553022 / 3.745712 (-0.192691) | 3.554450 / 5.269862 (-1.715411) | 2.082681 / 4.565676 (-2.482995) | 0.056711 / 0.424275 (-0.367564) | 0.007374 / 0.007607 (-0.000234) | 0.480555 / 0.226044 (0.254510) | 4.795851 / 2.268929 (2.526923) | 2.606675 / 55.444624 (-52.837949) | 2.249964 / 6.876477 (-4.626512) | 2.274234 / 2.142072 (0.132162) | 0.571767 / 4.805227 (-4.233461) | 0.133312 / 6.500664 (-6.367352) | 0.061703 / 0.075469 (-0.013766) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354308 / 1.841788 (-0.487479) | 20.959352 / 8.074308 (12.885044) | 14.158420 / 10.191392 (3.967028) | 0.197959 / 0.680424 (-0.482465) | 0.018412 / 0.534201 (-0.515789) | 0.394307 / 0.579283 (-0.184976) | 0.402455 / 0.434364 (-0.031909) | 0.463314 / 0.540337 (-0.077024) | 0.621050 / 1.386936 (-0.765886) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d7298d4d1b169442a8d0bc8c1667298bb89ca501 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007179 / 0.011353 (-0.004174) | 0.004318 / 0.011008 (-0.006690) | 0.085209 / 0.038508 (0.046701) | 0.089989 / 0.023109 (0.066880) | 0.328188 / 0.275898 (0.052290) | 0.346027 / 0.323480 (0.022547) | 0.005711 / 0.007986 (-0.002275) | 0.003703 / 0.004328 (-0.000625) | 0.065419 / 0.004250 (0.061169) | 0.065354 / 0.037052 (0.028301) | 0.314531 / 0.258489 (0.056042) | 0.354357 / 0.293841 (0.060516) | 0.030918 / 0.128546 (-0.097628) | 0.008632 / 0.075646 (-0.067015) | 0.286817 / 0.419271 (-0.132455) | 0.065267 / 0.043533 (0.021735) | 0.310918 / 0.255139 (0.055779) | 0.330497 / 0.283200 (0.047298) | 0.035695 / 0.141683 (-0.105988) | 1.471101 / 1.452155 (0.018947) | 1.538658 / 1.492716 (0.045942) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254314 / 0.018006 (0.236308) | 0.591413 / 0.000490 (0.590923) | 0.006082 / 0.000200 (0.005882) | 0.000091 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031843 / 0.037411 (-0.005568) | 0.089968 / 0.014526 (0.075442) | 0.101838 / 0.176557 (-0.074718) | 0.164401 / 0.737135 (-0.572734) | 0.103785 / 0.296338 (-0.192554) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.380486 / 0.215209 (0.165277) | 3.798868 / 2.077655 (1.721213) | 1.824645 / 1.504120 (0.320525) | 1.660804 / 1.541195 (0.119610) | 1.784793 / 1.468490 (0.316303) | 0.487222 / 4.584777 (-4.097555) | 3.560580 / 3.745712 (-0.185132) | 5.392662 / 5.269862 (0.122800) | 3.295327 / 4.565676 (-1.270350) | 0.057699 / 0.424275 (-0.366576) | 0.007559 / 0.007607 (-0.000048) | 0.459655 / 0.226044 (0.233611) | 4.587583 / 2.268929 (2.318654) | 2.304845 / 55.444624 (-53.139779) | 1.966433 / 6.876477 (-4.910044) | 2.254591 / 2.142072 (0.112519) | 0.582978 / 4.805227 (-4.222250) | 0.133455 / 6.500664 (-6.367210) | 0.061924 / 0.075469 (-0.013546) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.275685 / 1.841788 (-0.566103) | 20.814545 / 8.074308 (12.740237) | 13.753567 / 10.191392 (3.562175) | 0.164076 / 0.680424 (-0.516348) | 0.018768 / 0.534201 (-0.515433) | 0.390991 / 0.579283 (-0.188293) | 0.404417 / 0.434364 (-0.029947) | 
0.457522 / 0.540337 (-0.082815) | 0.624654 / 1.386936 (-0.762282) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007435 / 0.011353 (-0.003918) | 0.004255 / 0.011008 (-0.006754) | 0.066134 / 0.038508 (0.027626) | 0.086035 / 0.023109 (0.062925) | 0.364688 / 0.275898 (0.088790) | 0.403895 / 0.323480 (0.080415) | 0.005868 / 0.007986 (-0.002117) | 0.003634 / 0.004328 (-0.000694) | 0.065803 / 0.004250 (0.061553) | 0.065113 / 0.037052 (0.028061) | 0.370057 / 0.258489 (0.111568) | 0.412634 / 0.293841 (0.118793) | 0.031660 / 0.128546 (-0.096886) | 0.008699 / 0.075646 (-0.066947) | 0.070618 / 0.419271 (-0.348654) | 0.050814 / 0.043533 (0.007281) | 0.362320 / 0.255139 (0.107181) | 0.383863 / 0.283200 (0.100663) | 0.027980 / 0.141683 (-0.113703) | 1.486389 / 1.452155 (0.034234) | 1.595534 / 1.492716 (0.102817) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300991 / 0.018006 (0.282985) | 0.565265 / 0.000490 (0.564775) | 0.000400 / 0.000200 (0.000200) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034942 / 0.037411 (-0.002470) | 0.092498 / 0.014526 (0.077972) | 0.106737 / 0.176557 (-0.069819) | 0.165400 / 0.737135 (-0.571735) | 0.107809 / 0.296338 (-0.188529) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412156 / 0.215209 (0.196947) | 4.116747 / 2.077655 (2.039092) | 2.199612 / 1.504120 (0.695492) | 2.049310 / 1.541195 
(0.508115) | 2.174342 / 1.468490 (0.705852) | 0.482794 / 4.584777 (-4.101983) | 3.561344 / 3.745712 (-0.184368) | 3.465935 / 5.269862 (-1.803926) | 2.076595 / 4.565676 (-2.489081) | 0.056242 / 0.424275 (-0.368033) | 0.007371 / 0.007607 (-0.000236) | 0.489135 / 0.226044 (0.263091) | 4.895691 / 2.268929 (2.626763) | 2.626936 / 55.444624 (-52.817688) | 2.306658 / 6.876477 (-4.569818) | 2.421705 / 2.142072 (0.279633) | 0.599547 / 4.805227 (-4.205680) | 0.133627 / 6.500664 (-6.367037) | 0.063830 / 0.075469 (-0.011639) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.383039 / 1.841788 (-0.458748) | 21.005346 / 8.074308 (12.931038) | 14.911083 / 10.191392 (4.719691) | 0.190995 / 0.680424 (-0.489429) | 0.018510 / 0.534201 (-0.515691) | 0.396346 / 0.579283 (-0.182937) | 0.411496 / 0.434364 (-0.022868) | 0.470972 / 0.540337 (-0.069366) | 0.615670 / 1.386936 (-0.771266) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d6d2ba47759d8acbf3d750b1cc4d89b195b1f9c9 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007249 / 0.011353 (-0.004104) | 0.004261 / 0.011008 (-0.006747) | 0.100645 / 0.038508 (0.062137) | 0.078522 / 0.023109 (0.055413) | 0.423526 / 0.275898 (0.147628) | 0.439541 / 0.323480 (0.116061) | 0.005812 / 0.007986 (-0.002173) | 0.003615 / 0.004328 (-0.000713) | 0.075908 / 0.004250 (0.071658) | 0.062490 / 0.037052 (0.025437) | 0.414941 / 0.258489 (0.156452) | 0.447267 / 0.293841 (0.153426) | 0.035127 / 0.128546 (-0.093419) | 0.009642 / 0.075646 (-0.066004) | 0.354093 / 0.419271 (-0.065179) | 0.060970 / 0.043533 (0.017437) | 0.418579 / 0.255139 (0.163440) | 0.427972 / 0.283200 (0.144772) | 0.025838 / 0.141683 (-0.115845) | 1.778349 / 1.452155 (0.326194) | 1.845965 / 1.492716 (0.353249) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227304 / 0.018006 (0.209298) | 0.571833 / 0.000490 
(0.571343) | 0.001328 / 0.000200 (0.001128) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031343 / 0.037411 (-0.006068) | 0.096400 / 0.014526 (0.081875) | 0.106881 / 0.176557 (-0.069676) | 0.175449 / 0.737135 (-0.561686) | 0.108751 / 0.296338 (-0.187588) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.480204 / 0.215209 (0.264995) | 4.622063 / 2.077655 (2.544408) | 2.211505 / 1.504120 (0.707385) | 2.065154 / 1.541195 (0.523959) | 2.159446 / 1.468490 (0.690956) | 0.584571 / 4.584777 (-4.000206) | 4.392449 / 3.745712 (0.646737) | 4.790166 / 5.269862 (-0.479695) | 2.840615 / 4.565676 (-1.725062) | 0.070845 / 0.424275 (-0.353430) | 0.009112 / 0.007607 (0.001505) | 0.580251 / 0.226044 (0.354207) | 5.660311 / 2.268929 (3.391382) | 2.836136 / 55.444624 (-52.608489) | 2.412859 / 6.876477 (-4.463618) | 2.556710 / 2.142072 (0.414637) | 0.691946 / 4.805227 (-4.113282) | 0.160123 / 6.500664 (-6.340541) | 0.072593 / 0.075469 (-0.002876) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.547339 / 1.841788 (-0.294448) | 21.724793 / 8.074308 (13.650485) | 16.315304 / 10.191392 (6.123912) | 0.188733 / 0.680424 (-0.491690) | 0.022109 / 0.534201 (-0.512092) | 0.481623 / 0.579283 (-0.097660) | 0.464316 / 0.434364 (0.029952) | 0.557953 / 0.540337 (0.017615) | 0.756023 / 1.386936 (-0.630913) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008637 / 0.011353 (-0.002716) | 0.005286 / 0.011008 (-0.005723) | 0.091387 / 0.038508 (0.052879) | 0.114092 / 0.023109 (0.090983) | 0.457547 / 0.275898 (0.181649) | 0.506878 / 0.323480 (0.183398) | 0.006849 / 0.007986 (-0.001137) | 0.004255 / 0.004328 (-0.000073) | 0.079556 / 0.004250 (0.075306) | 0.077729 / 0.037052 (0.040677) | 0.454094 / 0.258489 (0.195605) | 0.515812 / 0.293841 (0.221971) | 0.038271 / 0.128546 (-0.090275) | 0.010110 / 0.075646 (-0.065536) | 0.094254 / 0.419271 (-0.325017) | 0.065392 / 0.043533 (0.021860) | 0.459749 / 0.255139 (0.204610) | 0.489829 / 0.283200 (0.206629) | 0.040393 / 0.141683 (-0.101290) | 1.810414 / 1.452155 (0.358259) | 1.913212 / 1.492716 (0.420496) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236898 / 0.018006 (0.218891) | 0.513118 / 0.000490 (0.512628) | 0.004432 / 0.000200 (0.004232) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035074 / 0.037411 (-0.002337) | 0.102384 / 0.014526 (0.087858) | 0.117326 / 0.176557 (-0.059231) | 0.182596 / 0.737135 (-0.554539) | 0.116384 / 0.296338 (-0.179955) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.514544 / 0.215209 (0.299335) | 5.152930 / 2.077655 (3.075275) | 2.624477 / 1.504120 (1.120357) | 2.363209 / 1.541195 (0.822014) | 2.436060 / 1.468490 (0.967570) | 0.592523 / 4.584777 (-3.992254) | 4.209668 / 3.745712 (0.463956) | 6.284372 / 5.269862 (1.014511) | 3.667303 / 4.565676 (-0.898374) | 0.067017 / 0.424275 (-0.357259) | 0.008607 / 0.007607 (0.001000) | 0.600840 / 0.226044 (0.374796) | 5.992630 / 2.268929 (3.723701) | 3.114532 / 55.444624 (-52.330093) | 2.693242 / 6.876477 (-4.183235) | 2.767187 / 2.142072 (0.625115) | 0.687591 / 4.805227 (-4.117636) | 0.158477 / 6.500664 (-6.342187) | 0.075504 / 0.075469 (0.000034) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.605039 / 1.841788 (-0.236749) | 21.524730 / 8.074308 (13.450422) | 17.014643 / 10.191392 (6.823251) | 0.201580 / 0.680424 (-0.478843) | 0.023028 / 0.534201 (-0.511173) | 0.483801 / 0.579283 (-0.095482) | 0.490221 / 0.434364 (0.055857) | 0.589292 / 0.540337 (0.048955) | 0.758532 / 1.386936 (-0.628404) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8c9c24d1d90f0c2db043ae2bc39f7c292454a58c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008080 / 0.011353 (-0.003273) | 0.004859 / 0.011008 (-0.006149) | 0.101895 / 0.038508 (0.063387) | 0.091168 / 0.023109 (0.068059) | 0.378914 / 0.275898 (0.103016) | 0.417172 / 0.323480 (0.093692) | 0.006314 / 0.007986 (-0.001672) | 0.004069 / 0.004328 (-0.000259) | 0.076566 / 0.004250 (0.072315) | 0.070986 / 0.037052 (0.033934) | 0.380935 / 0.258489 (0.122446) | 0.417131 / 0.293841 (0.123290) | 0.036343 / 0.128546 (-0.092203) | 0.009996 / 0.075646 (-0.065650) | 0.346386 / 0.419271 (-0.072886) | 0.063162 / 0.043533 (0.019630) | 0.372620 / 0.255139 (0.117481) | 0.404902 / 0.283200 (0.121702) | 0.028217 / 0.141683 (-0.113466) | 1.793875 / 1.452155 (0.341721) | 1.836284 / 1.492716 (0.343568) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223830 / 0.018006 (0.205823) | 0.503643 / 0.000490 (0.503153) | 0.004957 / 0.000200 (0.004757) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035455 / 0.037411 (-0.001957) | 0.108015 / 0.014526 (0.093489) | 0.116887 / 0.176557 (-0.059669) | 0.188174 / 0.737135 (-0.548961) | 0.117217 / 0.296338 (-0.179121) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471681 / 0.215209 
(0.256472) | 4.694509 / 2.077655 (2.616855) | 2.369539 / 1.504120 (0.865419) | 2.176839 / 1.541195 (0.635644) | 2.300536 / 1.468490 (0.832045) | 0.575689 / 4.584777 (-4.009088) | 4.232765 / 3.745712 (0.487053) | 4.766775 / 5.269862 (-0.503087) | 2.864667 / 4.565676 (-1.701010) | 0.069390 / 0.424275 (-0.354885) | 0.008822 / 0.007607 (0.001214) | 0.559620 / 0.226044 (0.333576) | 5.580401 / 2.268929 (3.311472) | 2.920293 / 55.444624 (-52.524331) | 2.552166 / 6.876477 (-4.324311) | 2.795890 / 2.142072 (0.653818) | 0.687863 / 4.805227 (-4.117364) | 0.159129 / 6.500664 (-6.341535) | 0.073475 / 0.075469 (-0.001994) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505892 / 1.841788 (-0.335896) | 24.127650 / 8.074308 (16.053342) | 16.758238 / 10.191392 (6.566846) | 0.200555 / 0.680424 (-0.479869) | 0.021596 / 0.534201 (-0.512605) | 0.480668 / 0.579283 (-0.098615) | 0.483528 / 0.434364 (0.049164) | 0.571241 / 0.540337 (0.030903) | 0.790547 / 1.386936 (-0.596390) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007997 / 0.011353 (-0.003356) | 0.004842 / 0.011008 (-0.006166) | 0.077190 / 0.038508 (0.038681) | 0.092765 / 0.023109 (0.069656) | 0.457475 / 0.275898 (0.181577) | 0.523914 / 0.323480 (0.200434) | 0.006349 / 0.007986 (-0.001637) | 0.003902 / 0.004328 (-0.000427) | 0.075860 / 0.004250 (0.071609) | 0.069708 / 0.037052 (0.032656) | 0.459612 / 0.258489 (0.201123) | 0.555028 / 0.293841 (0.261187) | 0.036854 / 0.128546 (-0.091692) | 0.010078 / 0.075646 (-0.065568) | 0.083871 / 0.419271 (-0.335400) | 0.061221 / 0.043533 (0.017689) | 0.435737 / 0.255139 (0.180598) | 0.509700 / 0.283200 (0.226500) | 0.038091 / 0.141683 (-0.103592) | 1.777161 / 1.452155 (0.325006) | 1.859603 / 1.492716 (0.366886) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250020 / 0.018006 (0.232014) | 0.486198 / 0.000490 (0.485708) | 0.007080 / 0.000200 (0.006880) | 0.000114 / 0.000054 
(0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038163 / 0.037411 (0.000751) | 0.110812 / 0.014526 (0.096286) | 0.122489 / 0.176557 (-0.054068) | 0.188215 / 0.737135 (-0.548920) | 0.122375 / 0.296338 (-0.173963) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.484534 / 0.215209 (0.269325) | 4.828654 / 2.077655 (2.751000) | 2.545102 / 1.504120 (1.040982) | 2.368867 / 1.541195 (0.827672) | 2.458042 / 1.468490 (0.989552) | 0.576372 / 4.584777 (-4.008404) | 4.814033 / 3.745712 (1.068321) | 6.175972 / 5.269862 (0.906110) | 4.033422 / 4.565676 (-0.532254) | 0.068544 / 0.424275 (-0.355731) | 0.008906 / 0.007607 (0.001299) | 0.581767 / 0.226044 (0.355723) | 5.808623 / 2.268929 (3.539695) | 3.120312 / 55.444624 (-52.324313) | 2.774834 / 6.876477 (-4.101642) | 2.770413 / 2.142072 (0.628340) | 0.692715 / 4.805227 (-4.112512) | 0.158883 / 6.500664 (-6.341782) | 0.075894 / 0.075469 (0.000425) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631250 / 1.841788 (-0.210538) | 24.693250 / 8.074308 (16.618942) | 17.434790 / 10.191392 (7.243398) | 0.196456 / 0.680424 (-0.483968) | 0.022505 / 0.534201 (-0.511696) | 0.474788 / 0.579283 (-0.104495) | 0.500947 / 0.434364 (0.066583) | 0.553596 / 0.540337 (0.013259) | 0.737767 / 1.386936 (-0.649169) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f87d6e6394bf4b390ccc82235eb7667f874e5d43 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006629 / 0.011353 (-0.004724) | 0.004115 / 0.011008 (-0.006894) | 0.083934 / 0.038508 (0.045426) | 0.074952 / 0.023109 (0.051843) | 0.313069 / 0.275898 (0.037171) | 0.345878 / 0.323480 (0.022398) | 0.006034 / 0.007986 (-0.001952) | 0.003413 / 0.004328 (-0.000916) | 0.065130 / 0.004250 (0.060880) | 0.057363 / 0.037052 (0.020310) | 0.314483 / 0.258489 (0.055994) | 0.352626 / 0.293841 (0.058785) | 0.031325 / 0.128546 (-0.097221) | 0.008577 / 0.075646 (-0.067069) | 0.288137 / 0.419271 (-0.131135) | 0.053651 / 0.043533 (0.010118) | 0.313006 / 0.255139 (0.057867) | 0.338668 / 0.283200 (0.055468) | 0.023709 / 0.141683 (-0.117974) | 1.481209 / 1.452155 (0.029054) | 1.559801 / 1.492716 (0.067085) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211543 / 0.018006 (0.193537) | 0.452185 / 0.000490 (0.451696) | 0.003177 / 0.000200 (0.002977) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028821 / 0.037411 (-0.008591) | 0.083290 / 0.014526 (0.068765) | 0.097478 / 0.176557 (-0.079079) | 0.153506 / 0.737135 (-0.583629) | 0.097054 / 0.296338 (-0.199284) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.385847 / 0.215209 (0.170638) | 3.835629 / 2.077655 (1.757974) | 1.880938 / 1.504120 (0.376819) | 1.711848 / 1.541195 (0.170653) | 1.785099 / 1.468490 (0.316609) | 0.486256 / 4.584777 (-4.098521) | 3.629026 / 3.745712 (-0.116686) | 3.321578 / 5.269862 (-1.948283) | 2.024314 / 4.565676 (-2.541363) | 0.058097 / 0.424275 (-0.366179) | 0.007724 / 0.007607 (0.000117) | 0.458293 / 0.226044 (0.232249) | 4.581314 / 2.268929 (2.312386) | 2.314379 / 55.444624 (-53.130246) | 1.966089 / 6.876477 (-4.910387) | 2.203824 / 2.142072 (0.061752) | 0.611581 / 4.805227 (-4.193647) | 0.149166 / 6.500664 (-6.351498) | 0.059825 / 0.075469 (-0.015644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.235546 / 1.841788 (-0.606242) | 19.747439 / 8.074308 (11.673131) | 14.628383 / 10.191392 (4.436991) | 0.193074 / 0.680424 (-0.487350) | 0.020327 / 0.534201 (-0.513874) | 0.397051 / 0.579283 (-0.182232) | 0.418491 / 0.434364 (-0.015873) | 
0.462055 / 0.540337 (-0.078282) | 0.637524 / 1.386936 (-0.749412) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007069 / 0.011353 (-0.004284) | 0.004106 / 0.011008 (-0.006902) | 0.065818 / 0.038508 (0.027310) | 0.077101 / 0.023109 (0.053991) | 0.363323 / 0.275898 (0.087425) | 0.399463 / 0.323480 (0.075983) | 0.005540 / 0.007986 (-0.002446) | 0.003480 / 0.004328 (-0.000849) | 0.065176 / 0.004250 (0.060926) | 0.060867 / 0.037052 (0.023815) | 0.365763 / 0.258489 (0.107273) | 0.407789 / 0.293841 (0.113949) | 0.032018 / 0.128546 (-0.096528) | 0.008550 / 0.075646 (-0.067096) | 0.071750 / 0.419271 (-0.347521) | 0.050625 / 0.043533 (0.007092) | 0.361434 / 0.255139 (0.106295) | 0.384799 / 0.283200 (0.101599) | 0.026104 / 0.141683 (-0.115579) | 1.496093 / 1.452155 (0.043938) | 1.592909 / 1.492716 (0.100193) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185794 / 0.018006 (0.167787) | 0.453379 / 0.000490 (0.452890) | 0.004365 / 0.000200 (0.004165) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031666 / 0.037411 (-0.005746) | 0.088323 / 0.014526 (0.073798) | 0.104602 / 0.176557 (-0.071954) | 0.159827 / 0.737135 (-0.577308) | 0.103725 / 0.296338 (-0.192614) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413509 / 0.215209 (0.198300) | 4.126071 / 2.077655 (2.048416) | 2.137088 / 1.504120 (0.632968) | 1.981034 / 1.541195 
(0.439839) | 2.063660 / 1.468490 (0.595170) | 0.478798 / 4.584777 (-4.105979) | 3.642801 / 3.745712 (-0.102911) | 3.428994 / 5.269862 (-1.840867) | 2.031902 / 4.565676 (-2.533774) | 0.056244 / 0.424275 (-0.368032) | 0.007365 / 0.007607 (-0.000242) | 0.484371 / 0.226044 (0.258327) | 4.838537 / 2.268929 (2.569608) | 2.559497 / 55.444624 (-52.885127) | 2.251863 / 6.876477 (-4.624614) | 2.339227 / 2.142072 (0.197155) | 0.607228 / 4.805227 (-4.198000) | 0.133877 / 6.500664 (-6.366787) | 0.062049 / 0.075469 (-0.013420) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350389 / 1.841788 (-0.491399) | 20.060359 / 8.074308 (11.986051) | 14.305675 / 10.191392 (4.114283) | 0.165642 / 0.680424 (-0.514782) | 0.018206 / 0.534201 (-0.515994) | 0.396907 / 0.579283 (-0.182376) | 0.431896 / 0.434364 (-0.002468) | 0.475778 / 0.540337 (-0.064559) | 0.644688 / 1.386936 (-0.742248) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8f6fa96ae5de873a49ef28739e8f64edf8b18cae \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009048 / 0.011353 (-0.002305) | 0.005787 / 0.011008 (-0.005221) | 0.111617 / 0.038508 (0.073109) | 0.087603 / 0.023109 (0.064494) | 0.446481 / 0.275898 (0.170583) | 0.491726 / 0.323480 (0.168247) | 0.007052 / 0.007986 (-0.000934) | 0.004481 / 0.004328 (0.000152) | 0.084331 / 0.004250 (0.080081) | 0.072006 / 0.037052 (0.034953) | 0.454238 / 0.258489 (0.195749) | 0.496749 / 0.293841 (0.202908) | 0.049027 / 0.128546 (-0.079520) | 0.014005 / 0.075646 (-0.061641) | 0.372550 / 0.419271 (-0.046722) | 0.071414 / 0.043533 (0.027881) | 0.459432 / 0.255139 (0.204293) | 0.467332 / 0.283200 (0.184133) | 0.037539 / 0.141683 (-0.104144) | 1.869179 / 1.452155 (0.417024) | 1.983641 / 1.492716 (0.490925) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265426 / 0.018006 (0.247419) | 0.672527 / 0.000490 
(0.672037) | 0.001152 / 0.000200 (0.000953) | 0.000181 / 0.000054 (0.000127) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032967 / 0.037411 (-0.004445) | 0.103023 / 0.014526 (0.088497) | 0.115978 / 0.176557 (-0.060578) | 0.191698 / 0.737135 (-0.545438) | 0.117867 / 0.296338 (-0.178471) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602208 / 0.215209 (0.386999) | 6.147784 / 2.077655 (4.070129) | 2.768933 / 1.504120 (1.264813) | 2.415619 / 1.541195 (0.874424) | 2.456159 / 1.468490 (0.987669) | 0.836270 / 4.584777 (-3.748507) | 5.447754 / 3.745712 (1.702042) | 7.751825 / 5.269862 (2.481963) | 4.591892 / 4.565676 (0.026215) | 0.108269 / 0.424275 (-0.316006) | 0.009626 / 0.007607 (0.002019) | 0.719260 / 0.226044 (0.493216) | 7.313442 / 2.268929 (5.044514) | 3.490739 / 55.444624 (-51.953885) | 2.743543 / 6.876477 (-4.132934) | 3.035071 / 2.142072 (0.892999) | 1.042791 / 4.805227 (-3.762436) | 0.217080 / 6.500664 (-6.283584) | 0.084286 / 0.075469 (0.008817) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.655427 / 1.841788 (-0.186361) | 25.386536 / 8.074308 (17.312228) | 21.740666 / 10.191392 (11.549274) | 0.246388 / 0.680424 (-0.434036) | 0.029723 / 0.534201 (-0.504478) | 0.491537 / 0.579283 (-0.087746) | 0.603495 / 0.434364 (0.169131) | 0.573938 / 0.540337 (0.033600) | 0.981875 / 1.386936 (-0.405061) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009664 / 0.011353 (-0.001689) | 0.006446 / 0.011008 (-0.004562) | 0.085113 / 0.038508 (0.046605) | 0.094533 / 0.023109 (0.071424) | 0.498388 / 0.275898 (0.222490) | 0.540127 / 0.323480 (0.216647) | 0.007316 / 0.007986 (-0.000670) | 0.004252 / 0.004328 (-0.000077) | 0.086292 / 0.004250 (0.082041) | 0.067956 / 0.037052 (0.030903) | 0.507664 / 0.258489 (0.249175) | 0.554324 / 0.293841 (0.260483) | 0.050107 / 0.128546 (-0.078439) | 0.014277 / 0.075646 (-0.061370) | 0.098838 / 0.419271 (-0.320433) | 0.066053 / 0.043533 (0.022521) | 0.491090 / 0.255139 (0.235951) | 0.537432 / 0.283200 (0.254232) | 0.035937 / 0.141683 (-0.105746) | 1.820715 / 1.452155 (0.368561) | 1.996268 / 1.492716 (0.503552) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300859 / 0.018006 (0.282852) | 0.610958 / 0.000490 (0.610468) | 0.000474 / 0.000200 (0.000274) | 0.000098 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036372 / 0.037411 (-0.001039) | 0.109115 / 0.014526 (0.094589) | 0.122802 / 0.176557 (-0.053755) | 0.187092 / 0.737135 (-0.550044) | 0.123432 / 0.296338 (-0.172906) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.646979 / 0.215209 (0.431770) | 6.577713 / 2.077655 (4.500058) | 3.004606 / 1.504120 (1.500486) | 2.661183 / 1.541195 (1.119989) | 2.726717 / 1.468490 (1.258227) | 0.889497 / 4.584777 (-3.695280) | 5.485055 / 3.745712 (1.739343) | 4.852043 / 5.269862 (-0.417819) | 3.177392 / 4.565676 (-1.388285) | 0.099796 / 0.424275 (-0.324479) | 0.009868 / 0.007607 (0.002261) | 0.819919 / 0.226044 (0.593874) | 7.911255 / 2.268929 (5.642326) | 3.839877 / 55.444624 (-51.604747) | 3.088663 / 6.876477 (-3.787813) | 3.371184 / 2.142072 (1.229112) | 1.072762 / 4.805227 (-3.732466) | 0.224536 / 6.500664 (-6.276128) | 0.083415 / 0.075469 (0.007946) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.754426 / 1.841788 (-0.087361) | 25.546690 / 8.074308 (17.472382) | 22.998252 / 10.191392 (12.806860) | 0.258019 / 0.680424 (-0.422405) | 0.030104 / 0.534201 (-0.504097) | 0.518406 / 0.579283 (-0.060877) | 0.605753 / 0.434364 (0.171389) | 0.599630 / 0.540337 (0.059292) | 0.819042 / 1.386936 (-0.567894) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#350f4fd6caabbdfacb5fbf9193ab255c3d0daa4c \"CML watermark\")\n" ]
2023-07-17T15:41:16
2023-07-19T16:59:55
2023-07-19T16:48:06
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6044", "html_url": "https://github.com/huggingface/datasets/pull/6044", "diff_url": "https://github.com/huggingface/datasets/pull/6044.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6044.patch", "merged_at": "2023-07-19T16:48:06" }
To make it easier for users to understand. They can use "path" to specify a single path, <s>or "paths" to use a list of paths.</s> Glob patterns are still supported, though
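As a brief, hedged illustration of the point above (the file names and split layout here are made up, not taken from the PR), a dataset card's `configs` metadata can give `path` either a single file or a glob pattern:

```yaml
configs:
- config_name: default
  data_files:
  - split: train
    path: "data/train-*.csv"  # a glob pattern is still accepted via "path"
```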
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6044/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6043
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6043/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6043/comments
https://api.github.com/repos/huggingface/datasets/issues/6043/events
https://github.com/huggingface/datasets/issues/6043
1,807,771,750
I_kwDODunzps5rwGhm
6,043
Compression kwargs have no effect when saving datasets as csv
{ "login": "exs-avianello", "id": 128361578, "node_id": "U_kgDOB6akag", "avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4", "gravatar_id": "", "url": "https://api.github.com/users/exs-avianello", "html_url": "https://github.com/exs-avianello", "followers_url": "https://api.github.com/users/exs-avianello/followers", "following_url": "https://api.github.com/users/exs-avianello/following{/other_user}", "gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}", "starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions", "organizations_url": "https://api.github.com/users/exs-avianello/orgs", "repos_url": "https://api.github.com/users/exs-avianello/repos", "events_url": "https://api.github.com/users/exs-avianello/events{/privacy}", "received_events_url": "https://api.github.com/users/exs-avianello/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hello @exs-avianello, I have reproduced the bug successfully and have understood the problem. But I am confused regarding this part of the statement, \"`pandas.DataFrame.to_csv` is always called with a buf-like `path_or_buf`\".\r\n\r\nCan you please elaborate on it?\r\n\r\nThanks!", "Hi @aryanxk02 ! Sure, what I actually meant is that when passing a path-like `path_or_buf` here\r\n\r\nhttps://github.com/huggingface/datasets/blob/14f6edd9222e577dccb962ed5338b79b73502fa5/src/datasets/arrow_dataset.py#L4708-L4714 \r\n\r\nit gets converted to a file object behind the scenes here\r\n\r\nhttps://github.com/huggingface/datasets/blob/14f6edd9222e577dccb962ed5338b79b73502fa5/src/datasets/io/csv.py#L92-L94\r\n\r\nand the eventual pandas `.to_csv()` calls that write to it always get `path_or_buf=None`, making pandas ignore the `compression` kwarg in the `to_csv_kwargs`\r\n\r\nhttps://github.com/huggingface/datasets/blob/14f6edd9222e577dccb962ed5338b79b73502fa5/src/datasets/io/csv.py#L107-L109", "@exs-avianello When `path_or_buf` is set to None, the `to_csv()` method will return the CSV data as a string instead of saving it to a file. Hence the compression doesn't take place. I think setting `path_or_buf=self.path_or_buf` should work. What you say?" ]
2023-07-17T13:19:21
2023-07-22T17:34:18
null
NONE
null
null
null
### Describe the bug When attempting to save a dataset as a compressed csv file, the compression kwargs provided to `.to_csv()` that get piped through to pandas' `pandas.DataFrame.to_csv` do not have any effect - resulting in the dataset not getting compressed. A warning is raised if a `compression` kwarg is provided explicitly, but no warning is raised when relying on the defaults. This can lead to datasets silently not getting compressed for users expecting the behaviour to match pandas' `.to_csv()`, where the compression format is automatically inferred from the destination path suffix. ### Steps to reproduce the bug ```python # dataset is not compressed (but at least a warning is emitted) import os import datasets dataset = datasets.load_dataset("rotten_tomatoes", split="train") dataset.to_csv("uncompressed.csv") print(os.path.getsize("uncompressed.csv")) # 1008607 dataset.to_csv("compressed.csv.gz", compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}) print(os.path.getsize("compressed.csv.gz")) # 1008607 ``` ```shell >>> RuntimeWarning: compression has no effect when passing a non-binary object as input. csv_str = batch.to_pandas().to_csv( ``` ```python # dataset is not compressed and no warnings are emitted dataset.to_csv("compressed.csv.gz") print(os.path.getsize("compressed.csv.gz")) # 1008607 # compare with dataset.to_pandas().to_csv("pandas.csv.gz") print(os.path.getsize("pandas.csv.gz")) # 418561 ``` --- I think this is because, behind the scenes, `pandas.DataFrame.to_csv` is always called with a buf-like `path_or_buf`, but users who provide a path-like to `datasets.Dataset.to_csv` are unlikely to expect / know that - leading to a mismatch in their understanding of the expected behaviour of the `compression` kwarg. ### Expected behavior The dataset should be saved as a compressed csv file when a `compression` kwarg is provided, or when relying on the default `compression='infer'`. ### Environment info `datasets == 2.13.1`
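Until the writer forwards the path (and hence the compression) to pandas, a possible workaround is to apply the compression outside of `to_csv` altogether. This sketch assumes `Dataset.to_csv` accepts an already-open binary file object and writes encoded bytes to it, which matches the `io/csv.py` code paths linked in the comments above:

```python
import gzip
import os

import datasets

dataset = datasets.load_dataset("rotten_tomatoes", split="train")

# gzip.open performs the compression itself, so it does not matter that the
# writer internally calls pandas' to_csv with path_or_buf=None.
with gzip.open("workaround.csv.gz", "wb") as f:
    dataset.to_csv(f)

print(os.path.getsize("workaround.csv.gz"))  # expected to be far smaller than the plain CSV
```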
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6043/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6043/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6042
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6042/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6042/comments
https://api.github.com/repos/huggingface/datasets/issues/6042/events
https://github.com/huggingface/datasets/pull/6042
1,807,516,762
PR_kwDODunzps5VqEyb
6,042
Fix unused DatasetInfosDict code in push_to_hub
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008634 / 0.011353 (-0.002719) | 0.005147 / 0.011008 (-0.005861) | 0.102865 / 0.038508 (0.064357) | 0.080245 / 0.023109 (0.057136) | 0.401288 / 0.275898 (0.125390) | 0.419708 / 0.323480 (0.096228) | 0.006342 / 0.007986 (-0.001644) | 0.003998 / 0.004328 (-0.000330) | 0.078880 / 0.004250 (0.074630) | 0.068199 / 0.037052 (0.031147) | 0.389573 / 0.258489 (0.131084) | 0.417292 / 0.293841 (0.123451) | 0.048856 / 0.128546 (-0.079691) | 0.014165 / 0.075646 (-0.061481) | 0.348063 / 0.419271 (-0.071209) | 0.067547 / 0.043533 (0.024014) | 0.402251 / 0.255139 (0.147112) | 0.419478 / 0.283200 (0.136278) | 0.034846 / 0.141683 (-0.106837) | 1.773493 / 1.452155 (0.321338) | 1.930546 / 1.492716 (0.437830) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211835 / 0.018006 (0.193829) | 0.545311 / 0.000490 (0.544821) | 0.006766 / 0.000200 (0.006566) | 0.000104 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035406 / 0.037411 (-0.002006) | 0.100769 / 0.014526 (0.086243) | 0.108667 / 0.176557 (-0.067890) | 0.193099 / 0.737135 (-0.544036) | 0.113539 / 0.296338 (-0.182799) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.586935 / 0.215209 (0.371726) | 5.895245 / 2.077655 (3.817591) | 
2.528375 / 1.504120 (1.024255) | 2.228617 / 1.541195 (0.687423) | 2.295799 / 1.468490 (0.827309) | 0.859272 / 4.584777 (-3.725505) | 5.033434 / 3.745712 (1.287722) | 7.546587 / 5.269862 (2.276726) | 4.457137 / 4.565676 (-0.108539) | 0.099626 / 0.424275 (-0.324649) | 0.009296 / 0.007607 (0.001689) | 0.713498 / 0.226044 (0.487454) | 7.409385 / 2.268929 (5.140456) | 3.361418 / 55.444624 (-52.083206) | 2.681111 / 6.876477 (-4.195366) | 2.849598 / 2.142072 (0.707526) | 1.114863 / 4.805227 (-3.690364) | 0.215494 / 6.500664 (-6.285170) | 0.075807 / 0.075469 (0.000338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.606458 / 1.841788 (-0.235330) | 23.751096 / 8.074308 (15.676788) | 21.279110 / 10.191392 (11.087718) | 0.220785 / 0.680424 (-0.459639) | 0.032688 / 0.534201 (-0.501513) | 0.530948 / 0.579283 (-0.048335) | 0.630056 / 0.434364 (0.195693) | 0.572743 / 0.540337 (0.032405) | 0.771853 / 1.386936 (-0.615083) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008693 / 0.011353 (-0.002660) | 0.004750 / 0.011008 (-0.006259) | 0.079764 / 0.038508 (0.041256) | 0.082096 / 0.023109 (0.058987) | 0.467198 / 0.275898 (0.191300) | 0.532361 / 0.323480 (0.208881) | 0.005836 / 0.007986 (-0.002149) | 0.004333 / 0.004328 (0.000005) | 0.080444 / 0.004250 (0.076194) | 0.065883 / 0.037052 (0.028831) | 0.464871 / 0.258489 (0.206382) | 0.575026 / 0.293841 (0.281185) | 0.057807 / 0.128546 (-0.070739) | 0.017462 / 0.075646 (-0.058185) | 0.093667 / 0.419271 (-0.325605) | 0.071466 / 0.043533 (0.027933) | 0.495846 / 0.255139 (0.240707) | 0.526100 / 0.283200 (0.242900) | 0.034852 / 0.141683 (-0.106831) | 1.884152 / 1.452155 (0.431998) | 1.922681 / 1.492716 (0.429965) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250969 / 0.018006 (0.232963) | 0.504979 / 0.000490 (0.504489) | 0.000466 / 0.000200 (0.000266) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032411 / 0.037411 (-0.005000) | 0.093184 / 0.014526 (0.078658) | 0.110798 / 0.176557 (-0.065759) | 0.165741 / 0.737135 (-0.571394) | 0.111022 / 0.296338 (-0.185317) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.661284 / 0.215209 (0.446075) | 6.622388 / 2.077655 (4.544733) | 3.095705 / 1.504120 (1.591585) | 2.745698 / 1.541195 (1.204503) | 2.694103 / 1.468490 (1.225612) | 0.862154 / 4.584777 (-3.722623) | 5.109985 / 3.745712 (1.364273) | 5.040362 / 5.269862 (-0.229499) | 3.072837 / 4.565676 (-1.492840) | 0.110421 / 0.424275 (-0.313854) | 0.008476 / 0.007607 (0.000869) | 0.910020 / 0.226044 (0.683975) | 8.123626 / 2.268929 (5.854698) | 3.813811 / 55.444624 (-51.630813) | 3.017244 / 6.876477 (-3.859232) | 3.061222 / 2.142072 (0.919150) | 1.073548 / 4.805227 (-3.731680) | 0.216327 / 6.500664 (-6.284338) | 0.072977 / 0.075469 (-0.002492) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.722482 / 1.841788 (-0.119305) | 23.706716 / 8.074308 (15.632407) | 23.192134 / 10.191392 (13.000742) | 0.276733 / 0.680424 (-0.403691) | 0.033538 / 0.534201 (-0.500663) | 0.602083 / 0.579283 (0.022799) | 0.578718 / 0.434364 (0.144354) | 0.558311 / 0.540337 (0.017974) | 0.740341 / 1.386936 (-0.646595) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7ac575b8ed57dac60d7ba33a616894f38601f84a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006862 / 0.011353 (-0.004491) | 0.004223 / 0.011008 (-0.006786) | 0.085931 / 0.038508 (0.047423) | 0.081437 / 0.023109 (0.058328) | 0.349542 / 0.275898 (0.073644) | 0.379881 / 0.323480 (0.056401) | 0.005651 / 0.007986 (-0.002334) | 0.003662 / 0.004328 (-0.000666) | 0.065251 / 0.004250 (0.061001) | 0.061599 / 0.037052 (0.024547) | 0.359681 / 0.258489 (0.101192) | 0.392502 / 0.293841 (0.098661) | 0.031300 / 0.128546 (-0.097246) | 0.008591 / 0.075646 (-0.067055) | 0.288577 / 0.419271 (-0.130694) | 0.062920 / 0.043533 (0.019388) | 0.348989 / 0.255139 (0.093850) | 0.362769 / 0.283200 (0.079569) | 0.030087 / 0.141683 (-0.111596) | 1.480748 / 1.452155 (0.028594) | 1.580413 / 1.492716 (0.087697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205804 / 0.018006 (0.187798) | 0.455386 / 0.000490 (0.454897) | 0.003134 / 0.000200 (0.002934) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030252 / 0.037411 (-0.007159) | 0.087566 / 0.014526 (0.073041) | 0.098209 / 0.176557 (-0.078347) | 0.155816 / 0.737135 (-0.581319) | 0.098938 / 0.296338 (-0.197401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386688 / 0.215209 (0.171479) | 3.852777 / 2.077655 (1.775123) | 1.938688 / 1.504120 (0.434568) | 1.779234 / 1.541195 (0.238039) | 1.864262 / 1.468490 (0.395772) | 0.482472 / 4.584777 (-4.102305) | 3.658060 / 3.745712 (-0.087652) | 5.206489 / 5.269862 (-0.063373) | 3.262498 / 4.565676 (-1.303179) | 0.057523 / 0.424275 (-0.366752) | 0.007365 / 0.007607 (-0.000242) | 0.466886 / 0.226044 (0.240841) | 4.671026 / 2.268929 (2.402097) | 2.380357 / 55.444624 (-53.064268) | 2.096590 / 6.876477 (-4.779887) | 2.274415 / 2.142072 (0.132342) | 0.579705 / 4.805227 (-4.225522) | 0.134522 / 6.500664 (-6.366142) | 0.062232 / 0.075469 (-0.013237) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245965 / 1.841788 (-0.595823) | 20.115180 / 8.074308 (12.040872) | 14.602983 / 10.191392 (4.411591) | 0.146890 / 0.680424 (-0.533533) | 0.018424 / 0.534201 (-0.515777) | 0.393941 / 0.579283 (-0.185342) | 0.413785 / 0.434364 (-0.020579) | 0.453344 / 0.540337 
(-0.086993) | 0.655446 / 1.386936 (-0.731490) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006807 / 0.011353 (-0.004546) | 0.004083 / 0.011008 (-0.006925) | 0.065389 / 0.038508 (0.026881) | 0.081056 / 0.023109 (0.057947) | 0.362823 / 0.275898 (0.086925) | 0.401928 / 0.323480 (0.078448) | 0.005452 / 0.007986 (-0.002533) | 0.003413 / 0.004328 (-0.000915) | 0.065238 / 0.004250 (0.060987) | 0.057264 / 0.037052 (0.020211) | 0.375713 / 0.258489 (0.117224) | 0.407858 / 0.293841 (0.114017) | 0.031580 / 0.128546 (-0.096966) | 0.008643 / 0.075646 (-0.067003) | 0.071693 / 0.419271 (-0.347578) | 0.049392 / 0.043533 (0.005859) | 0.370194 / 0.255139 (0.115055) | 0.384647 / 0.283200 (0.101447) | 0.024805 / 0.141683 (-0.116877) | 1.509511 / 1.452155 (0.057356) | 1.560193 / 1.492716 (0.067477) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234442 / 0.018006 (0.216436) | 0.458818 / 0.000490 (0.458329) | 0.000407 / 0.000200 (0.000207) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031661 / 0.037411 (-0.005750) | 0.093143 / 0.014526 (0.078618) | 0.102205 / 0.176557 (-0.074352) | 0.155850 / 0.737135 (-0.581286) | 0.104345 / 0.296338 (-0.191994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419641 / 0.215209 (0.204432) | 4.200808 / 2.077655 (2.123153) | 2.218227 / 1.504120 (0.714107) | 2.052604 / 1.541195 (0.511409) | 2.150611 
/ 1.468490 (0.682121) | 0.482665 / 4.584777 (-4.102112) | 3.606541 / 3.745712 (-0.139172) | 3.310637 / 5.269862 (-1.959224) | 2.070200 / 4.565676 (-2.495476) | 0.056586 / 0.424275 (-0.367689) | 0.007826 / 0.007607 (0.000218) | 0.491037 / 0.226044 (0.264992) | 4.901538 / 2.268929 (2.632610) | 2.676402 / 55.444624 (-52.768223) | 2.363935 / 6.876477 (-4.512542) | 2.587813 / 2.142072 (0.445741) | 0.579302 / 4.805227 (-4.225926) | 0.132792 / 6.500664 (-6.367873) | 0.061865 / 0.075469 (-0.013604) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354315 / 1.841788 (-0.487473) | 20.874516 / 8.074308 (12.800208) | 14.863559 / 10.191392 (4.672167) | 0.183635 / 0.680424 (-0.496789) | 0.018636 / 0.534201 (-0.515565) | 0.395317 / 0.579283 (-0.183966) | 0.410598 / 0.434364 (-0.023766) | 0.476485 / 0.540337 (-0.063853) | 0.643246 / 1.386936 (-0.743690) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4472a8795c603a95eef7c8f15cb04f1290cc8d11 \"CML watermark\")\n" ]
2023-07-17T11:03:09
2023-07-18T16:17:52
2023-07-18T16:08:42
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6042", "html_url": "https://github.com/huggingface/datasets/pull/6042", "diff_url": "https://github.com/huggingface/datasets/pull/6042.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6042.patch", "merged_at": "2023-07-18T16:08:42" }
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6042/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6042/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6041/comments
https://api.github.com/repos/huggingface/datasets/issues/6041/events
https://github.com/huggingface/datasets/pull/6041
1,807,441,055
PR_kwDODunzps5Vp0GX
6,041
Flatten repository_structure docs on yaml
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6041). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007587 / 0.011353 (-0.003766) | 0.004469 / 0.011008 (-0.006540) | 0.098028 / 0.038508 (0.059520) | 0.086378 / 0.023109 (0.063269) | 0.412290 / 0.275898 (0.136392) | 0.449912 / 0.323480 (0.126432) | 0.004769 / 0.007986 (-0.003217) | 0.003708 / 0.004328 (-0.000621) | 0.075541 / 0.004250 (0.071290) | 0.063821 / 0.037052 (0.026768) | 0.417213 / 0.258489 (0.158724) | 0.471954 / 0.293841 (0.178113) | 0.036243 / 0.128546 (-0.092303) | 0.009540 / 0.075646 (-0.066106) | 0.339043 / 0.419271 (-0.080228) | 0.061853 / 0.043533 (0.018320) | 0.418510 / 0.255139 (0.163371) | 0.462372 / 0.283200 (0.179173) | 0.027328 / 0.141683 (-0.114355) | 1.745114 / 1.452155 (0.292959) | 1.879839 / 1.492716 (0.387123) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211042 / 0.018006 (0.193035) | 0.512865 / 0.000490 (0.512375) | 0.008744 / 0.000200 (0.008544) | 0.000121 / 0.000054 (0.000066) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032493 / 0.037411 (-0.004918) | 0.096472 / 0.014526 (0.081946) | 0.110340 / 0.176557 (-0.066216) | 0.183195 / 0.737135 (-0.553940) | 0.112829 / 0.296338 (-0.183510) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.478040 / 0.215209 (0.262830) | 4.743776 / 2.077655 (2.666121) | 2.389770 / 1.504120 (0.885650) | 2.168468 / 1.541195 (0.627274) | 2.238154 / 1.468490 (0.769663) | 0.572308 / 4.584777 (-4.012469) | 4.154783 / 3.745712 (0.409071) | 3.771509 / 5.269862 (-1.498353) | 2.384828 / 4.565676 (-2.180848) | 0.068122 / 0.424275 (-0.356153) | 0.008573 / 0.007607 (0.000965) | 0.560300 / 0.226044 (0.334256) | 5.591163 / 2.268929 (3.322235) | 2.929660 / 55.444624 (-52.514965) | 2.517721 / 6.876477 (-4.358756) | 2.762285 / 2.142072 (0.620213) | 0.687193 / 4.805227 (-4.118034) | 0.157839 / 6.500664 (-6.342825) | 0.071862 / 0.075469 (-0.003607) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.484788 / 1.841788 (-0.357000) | 21.696071 / 8.074308 (13.621763) | 15.476166 / 10.191392 (5.284774) | 0.185034 / 0.680424 (-0.495390) | 0.021181 / 0.534201 (-0.513020) | 0.463324 / 0.579283 (-0.115959) | 0.502455 / 0.434364 (0.068091) | 0.559880 / 0.540337 (0.019543) | 0.767281 / 1.386936 (-0.619655) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007572 / 0.011353 (-0.003781) | 0.004331 / 0.011008 (-0.006677) | 0.075023 / 0.038508 (0.036515) | 0.085474 / 0.023109 (0.062365) | 0.464900 / 0.275898 (0.189002) | 0.503348 / 0.323480 (0.179868) | 0.006885 / 0.007986 (-0.001101) | 0.003647 / 0.004328 (-0.000681) | 0.074874 / 0.004250 (0.070623) | 0.071076 / 0.037052 (0.034024) | 0.465495 / 0.258489 (0.207006) | 0.506418 / 0.293841 (0.212577) | 0.038900 / 0.128546 (-0.089647) | 0.009467 / 0.075646 (-0.066180) | 0.082547 / 0.419271 (-0.336724) | 0.058457 / 0.043533 (0.014924) | 0.459114 / 0.255139 (0.203975) | 0.484872 / 0.283200 (0.201673) | 0.027443 / 0.141683 (-0.114240) | 1.713996 / 1.452155 (0.261841) | 1.893639 / 1.492716 (0.400922) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248693 / 0.018006 (0.230687) | 0.488805 / 0.000490 (0.488315) | 0.000421 / 0.000200 
(0.000221) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034886 / 0.037411 (-0.002525) | 0.103215 / 0.014526 (0.088689) | 0.116422 / 0.176557 (-0.060134) | 0.182789 / 0.737135 (-0.554346) | 0.117788 / 0.296338 (-0.178550) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.482782 / 0.215209 (0.267573) | 4.802895 / 2.077655 (2.725241) | 2.489823 / 1.504120 (0.985703) | 2.324005 / 1.541195 (0.782810) | 2.457674 / 1.468490 (0.989184) | 0.566980 / 4.584777 (-4.017797) | 4.117359 / 3.745712 (0.371647) | 3.841180 / 5.269862 (-1.428681) | 2.322410 / 4.565676 (-2.243266) | 0.066367 / 0.424275 (-0.357908) | 0.008501 / 0.007607 (0.000894) | 0.561453 / 0.226044 (0.335408) | 5.694861 / 2.268929 (3.425932) | 3.129829 / 55.444624 (-52.314796) | 2.647375 / 6.876477 (-4.229102) | 2.673071 / 2.142072 (0.530998) | 0.676120 / 4.805227 (-4.129108) | 0.153483 / 6.500664 (-6.347181) | 0.070797 / 0.075469 (-0.004672) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.575697 / 1.841788 (-0.266091) | 22.447462 / 8.074308 (14.373154) | 15.964906 / 10.191392 (5.773514) | 0.218343 / 0.680424 (-0.462081) | 0.021051 / 0.534201 (-0.513150) | 0.466079 / 0.579283 (-0.113204) | 0.493190 / 0.434364 (0.058826) | 0.565929 / 0.540337 (0.025592) | 0.768638 / 1.386936 (-0.618298) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#104bafffef7ddc775ec2d0b10b2b262466041eb7 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006268 / 0.011353 (-0.005085) | 0.003715 / 0.011008 (-0.007293) | 0.080628 / 0.038508 (0.042120) | 0.070294 / 0.023109 (0.047185) | 0.404749 / 0.275898 (0.128851) | 0.434130 / 0.323480 (0.110650) | 0.005533 / 0.007986 (-0.002452) | 0.002980 / 0.004328 (-0.001349) | 0.063016 / 0.004250 (0.058766) | 0.051667 / 0.037052 (0.014615) | 0.403859 / 0.258489 (0.145370) | 0.437913 / 0.293841 (0.144073) | 0.027518 / 0.128546 (-0.101029) | 0.007991 / 0.075646 (-0.067655) | 0.260723 / 0.419271 (-0.158548) | 0.046580 / 0.043533 (0.003047) | 0.405453 / 0.255139 (0.150314) | 0.428390 / 0.283200 (0.145190) | 0.022774 / 0.141683 (-0.118909) | 1.488204 / 1.452155 (0.036049) | 1.536557 / 1.492716 (0.043841) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185864 / 0.018006 (0.167858) | 0.431388 / 0.000490 (0.430898) | 0.003743 / 0.000200 (0.003543) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024062 / 0.037411 (-0.013350) | 0.075749 / 0.014526 (0.061224) | 0.083519 / 0.176557 (-0.093037) | 0.147965 / 0.737135 (-0.589170) | 0.085635 / 0.296338 (-0.210703) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400455 / 0.215209 (0.185246) | 4.084294 / 2.077655 (2.006640) | 1.928795 / 1.504120 (0.424675) | 1.743205 / 1.541195 (0.202010) | 1.811233 / 1.468490 (0.342743) | 0.504976 / 4.584777 (-4.079801) | 3.073134 / 3.745712 (-0.672578) | 2.816357 / 5.269862 (-2.453505) | 1.857462 / 4.565676 (-2.708214) | 0.058329 / 0.424275 (-0.365946) | 0.006850 / 0.007607 (-0.000757) | 0.466017 / 0.226044 (0.239973) | 4.660158 / 2.268929 (2.391230) | 2.396614 / 55.444624 (-53.048010) | 2.007491 / 6.876477 (-4.868986) | 2.206997 / 2.142072 (0.064925) | 0.592233 / 4.805227 (-4.212994) | 0.125364 / 6.500664 (-6.375300) | 0.061166 / 0.075469 (-0.014303) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290148 / 1.841788 (-0.551640) | 18.317462 / 8.074308 (10.243154) | 13.465142 / 10.191392 (3.273750) | 0.149696 / 0.680424 (-0.530728) | 0.017120 / 0.534201 (-0.517081) | 0.334818 / 0.579283 (-0.244465) | 0.363976 
/ 0.434364 (-0.070388) | 0.388271 / 0.540337 (-0.152066) | 0.542383 / 1.386936 (-0.844553) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006029 / 0.011353 (-0.005324) | 0.003656 / 0.011008 (-0.007352) | 0.063518 / 0.038508 (0.025010) | 0.058214 / 0.023109 (0.035105) | 0.435987 / 0.275898 (0.160089) | 0.442769 / 0.323480 (0.119289) | 0.004675 / 0.007986 (-0.003310) | 0.002911 / 0.004328 (-0.001418) | 0.063020 / 0.004250 (0.058769) | 0.049422 / 0.037052 (0.012369) | 0.435521 / 0.258489 (0.177032) | 0.478251 / 0.293841 (0.184411) | 0.027294 / 0.128546 (-0.101252) | 0.008073 / 0.075646 (-0.067574) | 0.068397 / 0.419271 (-0.350875) | 0.044796 / 0.043533 (0.001263) | 0.416646 / 0.255139 (0.161507) | 0.435021 / 0.283200 (0.151821) | 0.024686 / 0.141683 (-0.116997) | 1.495650 / 1.452155 (0.043496) | 1.495846 / 1.492716 (0.003130) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211205 / 0.018006 (0.193199) | 0.414497 / 0.000490 (0.414007) | 0.001704 / 0.000200 (0.001504) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025237 / 0.037411 (-0.012174) | 0.077291 / 0.014526 (0.062765) | 0.085736 / 0.176557 (-0.090821) | 0.141059 / 0.737135 (-0.596076) | 0.087620 / 0.296338 (-0.208719) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421995 / 0.215209 (0.206786) | 4.158503 / 2.077655 (2.080849) | 2.313598 / 1.504120 (0.809479) 
| 2.183553 / 1.541195 (0.642359) | 2.279656 / 1.468490 (0.811166) | 0.500146 / 4.584777 (-4.084631) | 3.092654 / 3.745712 (-0.653059) | 4.371616 / 5.269862 (-0.898245) | 2.605096 / 4.565676 (-1.960581) | 0.057658 / 0.424275 (-0.366617) | 0.006574 / 0.007607 (-0.001033) | 0.491455 / 0.226044 (0.265411) | 4.926730 / 2.268929 (2.657801) | 2.635749 / 55.444624 (-52.808875) | 2.255780 / 6.876477 (-4.620697) | 2.305547 / 2.142072 (0.163474) | 0.589027 / 4.805227 (-4.216200) | 0.126229 / 6.500664 (-6.374435) | 0.063268 / 0.075469 (-0.012201) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.299102 / 1.841788 (-0.542686) | 18.547417 / 8.074308 (10.473109) | 13.860030 / 10.191392 (3.668638) | 0.145482 / 0.680424 (-0.534942) | 0.016543 / 0.534201 (-0.517658) | 0.330788 / 0.579283 (-0.248496) | 0.362020 / 0.434364 (-0.072344) | 0.380635 / 0.540337 (-0.159703) | 0.517375 / 1.386936 (-0.869561) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bf602e0193baca21e283babbac9622ae36d1e6b6 \"CML watermark\")\n" ]
2023-07-17T10:15:10
2023-07-17T10:24:51
2023-07-17T10:16:22
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6041", "html_url": "https://github.com/huggingface/datasets/pull/6041", "diff_url": "https://github.com/huggingface/datasets/pull/6041.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6041.patch", "merged_at": "2023-07-17T10:16:22" }
Flattens the repository structure docs so that Splits, Configurations and Builder parameters sit at the same documentation level.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6041/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6041/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6040
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6040/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6040/comments
https://api.github.com/repos/huggingface/datasets/issues/6040/events
https://github.com/huggingface/datasets/pull/6040
1,807,410,238
PR_kwDODunzps5VptVf
6,040
Fix legacy_dataset_infos
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006087 / 0.011353 (-0.005265) | 0.003567 / 0.011008 (-0.007442) | 0.079668 / 0.038508 (0.041160) | 0.063647 / 0.023109 (0.040538) | 0.323082 / 0.275898 (0.047184) | 0.348679 / 0.323480 (0.025199) | 0.004726 / 0.007986 (-0.003259) | 0.002955 / 0.004328 (-0.001373) | 0.062724 / 0.004250 (0.058473) | 0.050194 / 0.037052 (0.013142) | 0.321407 / 0.258489 (0.062918) | 0.355053 / 0.293841 (0.061212) | 0.026992 / 0.128546 (-0.101554) | 0.007994 / 0.075646 (-0.067653) | 0.260562 / 0.419271 (-0.158710) | 0.050933 / 0.043533 (0.007400) | 0.316644 / 0.255139 (0.061505) | 0.336759 / 0.283200 (0.053560) | 0.022581 / 0.141683 (-0.119101) | 1.481259 / 1.452155 (0.029104) | 1.535191 / 1.492716 (0.042475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194111 / 0.018006 (0.176104) | 0.448146 / 0.000490 (0.447656) | 0.000321 / 0.000200 (0.000121) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023908 / 0.037411 (-0.013503) | 0.073316 / 0.014526 (0.058790) | 0.085588 / 0.176557 (-0.090968) | 0.145377 / 0.737135 (-0.591759) | 0.084788 / 0.296338 (-0.211550) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439327 / 0.215209 (0.224118) | 4.384833 / 2.077655 (2.307179) | 
2.322943 / 1.504120 (0.818823) | 2.147737 / 1.541195 (0.606542) | 2.226725 / 1.468490 (0.758235) | 0.502957 / 4.584777 (-4.081820) | 3.098106 / 3.745712 (-0.647606) | 4.194642 / 5.269862 (-1.075220) | 2.598820 / 4.565676 (-1.966856) | 0.057942 / 0.424275 (-0.366333) | 0.006857 / 0.007607 (-0.000750) | 0.511517 / 0.226044 (0.285472) | 5.121797 / 2.268929 (2.852868) | 2.756506 / 55.444624 (-52.688118) | 2.424602 / 6.876477 (-4.451875) | 2.608342 / 2.142072 (0.466270) | 0.589498 / 4.805227 (-4.215729) | 0.126065 / 6.500664 (-6.374600) | 0.061456 / 0.075469 (-0.014013) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239928 / 1.841788 (-0.601860) | 18.423532 / 8.074308 (10.349224) | 13.935148 / 10.191392 (3.743756) | 0.129913 / 0.680424 (-0.550511) | 0.016744 / 0.534201 (-0.517457) | 0.333468 / 0.579283 (-0.245815) | 0.359615 / 0.434364 (-0.074749) | 0.383678 / 0.540337 (-0.156659) | 0.533007 / 1.386936 (-0.853929) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005980 / 0.011353 (-0.005373) | 0.003640 / 0.011008 (-0.007368) | 0.062500 / 0.038508 (0.023992) | 0.059843 / 0.023109 (0.036733) | 0.360993 / 0.275898 (0.085095) | 0.401981 / 0.323480 (0.078501) | 0.005495 / 0.007986 (-0.002490) | 0.002862 / 0.004328 (-0.001467) | 0.062491 / 0.004250 (0.058240) | 0.050778 / 0.037052 (0.013726) | 0.371007 / 0.258489 (0.112518) | 0.405154 / 0.293841 (0.111313) | 0.027390 / 0.128546 (-0.101156) | 0.008042 / 0.075646 (-0.067604) | 0.067590 / 0.419271 (-0.351681) | 0.042485 / 0.043533 (-0.001048) | 0.361305 / 0.255139 (0.106166) | 0.388669 / 0.283200 (0.105469) | 0.024143 / 0.141683 (-0.117540) | 1.451508 / 1.452155 (-0.000647) | 1.490431 / 1.492716 (-0.002285) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.175976 / 0.018006 (0.157970) | 0.428923 / 0.000490 (0.428434) | 0.002099 / 0.000200 (0.001899) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026346 / 0.037411 (-0.011065) | 0.078084 / 0.014526 (0.063558) | 0.087287 / 0.176557 (-0.089269) | 0.144179 / 0.737135 (-0.592957) | 0.088286 / 0.296338 (-0.208053) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450436 / 0.215209 (0.235227) | 4.488801 / 2.077655 (2.411146) | 2.479303 / 1.504120 (0.975184) | 2.305396 / 1.541195 (0.764201) | 2.370370 / 1.468490 (0.901879) | 0.502355 / 4.584777 (-4.082422) | 3.094733 / 3.745712 (-0.650979) | 4.062367 / 5.269862 (-1.207495) | 2.587506 / 4.565676 (-1.978170) | 0.058245 / 0.424275 (-0.366030) | 0.006487 / 0.007607 (-0.001120) | 0.524147 / 0.226044 (0.298102) | 5.236876 / 2.268929 (2.967947) | 2.897134 / 55.444624 (-52.547490) | 2.574631 / 6.876477 (-4.301846) | 2.620307 / 2.142072 (0.478235) | 0.586963 / 4.805227 (-4.218265) | 0.125761 / 6.500664 (-6.374903) | 0.062264 / 0.075469 (-0.013205) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.299668 / 1.841788 (-0.542120) | 19.004441 / 8.074308 (10.930133) | 13.841867 / 10.191392 (3.650475) | 0.159674 / 0.680424 (-0.520750) | 0.016699 / 0.534201 (-0.517502) | 0.331868 / 0.579283 (-0.247415) | 0.344604 / 0.434364 (-0.089760) | 0.379391 / 0.540337 (-0.160947) | 0.514790 / 1.386936 (-0.872146) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#47a006a90e9711b33db70b0ef2d2cefaadfa2179 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005792 / 0.011353 (-0.005561) | 0.003519 / 0.011008 (-0.007489) | 0.079133 / 0.038508 (0.040625) | 0.057858 / 0.023109 (0.034749) | 0.314206 / 0.275898 (0.038308) | 0.346939 / 0.323480 (0.023459) | 0.004583 / 0.007986 (-0.003403) | 0.002824 / 0.004328 (-0.001504) | 0.061652 / 0.004250 (0.057402) | 0.048520 / 0.037052 (0.011467) | 0.318018 / 0.258489 (0.059529) | 0.350350 / 0.293841 (0.056509) | 0.026284 / 0.128546 (-0.102262) | 0.007827 / 0.075646 (-0.067819) | 0.259624 / 0.419271 (-0.159647) | 0.052318 / 0.043533 (0.008786) | 0.317400 / 0.255139 (0.062261) | 0.340530 / 0.283200 (0.057331) | 0.025181 / 0.141683 (-0.116501) | 1.459208 / 1.452155 (0.007053) | 1.529158 / 1.492716 (0.036442) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.169692 / 0.018006 (0.151686) | 0.432638 / 0.000490 (0.432148) | 0.003675 / 0.000200 (0.003475) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022956 / 0.037411 (-0.014456) | 0.071860 / 0.014526 (0.057334) | 0.082159 / 0.176557 (-0.094398) | 0.142560 / 0.737135 (-0.594576) | 0.082333 / 0.296338 (-0.214006) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397923 / 0.215209 (0.182714) | 3.958757 / 2.077655 (1.881102) | 1.925837 / 1.504120 (0.421717) | 1.758114 / 1.541195 (0.216919) | 1.808845 / 1.468490 (0.340354) | 0.501116 / 4.584777 (-4.083661) | 3.007739 / 3.745712 (-0.737973) | 3.295755 / 5.269862 (-1.974106) | 2.123843 / 4.565676 (-2.441833) | 0.057174 / 0.424275 (-0.367101) | 0.006426 / 0.007607 (-0.001182) | 0.468196 / 0.226044 (0.242152) | 4.677392 / 2.268929 (2.408464) | 2.334179 / 55.444624 (-53.110446) | 1.989283 / 6.876477 (-4.887194) | 2.140091 / 2.142072 (-0.001981) | 0.590700 / 4.805227 (-4.214527) | 0.124066 / 6.500664 (-6.376598) | 0.059931 / 0.075469 (-0.015538) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224547 / 1.841788 (-0.617240) | 17.866979 / 8.074308 (9.792671) | 13.142009 / 10.191392 (2.950617) | 0.147081 / 0.680424 (-0.533343) | 0.016777 / 0.534201 (-0.517424) | 0.327766 / 0.579283 (-0.251517) | 0.343988 / 0.434364 (-0.090376) | 0.383268 / 0.540337 
(-0.157070) | 0.528109 / 1.386936 (-0.858827) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006145 / 0.011353 (-0.005208) | 0.003634 / 0.011008 (-0.007374) | 0.062887 / 0.038508 (0.024379) | 0.062659 / 0.023109 (0.039550) | 0.362962 / 0.275898 (0.087064) | 0.405149 / 0.323480 (0.081669) | 0.004821 / 0.007986 (-0.003164) | 0.002888 / 0.004328 (-0.001441) | 0.062982 / 0.004250 (0.058732) | 0.051929 / 0.037052 (0.014877) | 0.366825 / 0.258489 (0.108336) | 0.409830 / 0.293841 (0.115989) | 0.027263 / 0.128546 (-0.101283) | 0.007972 / 0.075646 (-0.067674) | 0.067413 / 0.419271 (-0.351858) | 0.044233 / 0.043533 (0.000700) | 0.365087 / 0.255139 (0.109948) | 0.393845 / 0.283200 (0.110646) | 0.027740 / 0.141683 (-0.113943) | 1.497896 / 1.452155 (0.045741) | 1.549419 / 1.492716 (0.056703) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225510 / 0.018006 (0.207503) | 0.417054 / 0.000490 (0.416564) | 0.002184 / 0.000200 (0.001984) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025503 / 0.037411 (-0.011908) | 0.076164 / 0.014526 (0.061638) | 0.086110 / 0.176557 (-0.090446) | 0.140387 / 0.737135 (-0.596748) | 0.086956 / 0.296338 (-0.209382) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.469667 / 0.215209 (0.254458) | 4.689915 / 2.077655 (2.612261) | 2.685000 / 1.504120 (1.180880) | 2.516160 / 1.541195 (0.974965) | 2.531733 
/ 1.468490 (1.063243) | 0.501675 / 4.584777 (-4.083102) | 3.000579 / 3.745712 (-0.745133) | 2.853376 / 5.269862 (-2.416486) | 1.810677 / 4.565676 (-2.754999) | 0.057632 / 0.424275 (-0.366643) | 0.006390 / 0.007607 (-0.001217) | 0.543986 / 0.226044 (0.317941) | 5.432837 / 2.268929 (3.163908) | 3.138797 / 55.444624 (-52.305827) | 2.813141 / 6.876477 (-4.063336) | 2.803681 / 2.142072 (0.661609) | 0.588736 / 4.805227 (-4.216491) | 0.125696 / 6.500664 (-6.374968) | 0.062492 / 0.075469 (-0.012977) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.337163 / 1.841788 (-0.504624) | 18.611715 / 8.074308 (10.537407) | 13.953016 / 10.191392 (3.761624) | 0.154670 / 0.680424 (-0.525754) | 0.016523 / 0.534201 (-0.517678) | 0.333898 / 0.579283 (-0.245385) | 0.336520 / 0.434364 (-0.097844) | 0.389032 / 0.540337 (-0.151305) | 0.529202 / 1.386936 (-0.857734) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#01d4b3330f2cc243a3f3b0cd61ec5558466c40fd \"CML watermark\")\n" ]
2023-07-17T09:56:21
2023-07-17T10:24:34
2023-07-17T10:16:03
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6040", "html_url": "https://github.com/huggingface/datasets/pull/6040", "diff_url": "https://github.com/huggingface/datasets/pull/6040.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6040.patch", "merged_at": "2023-07-17T10:16:03" }
This was causing the transformers CI to fail: https://circleci.com/gh/huggingface/transformers/855105
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6040/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6040/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6039
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6039/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6039/comments
https://api.github.com/repos/huggingface/datasets/issues/6039/events
https://github.com/huggingface/datasets/issues/6039
1,806,508,451
I_kwDODunzps5rrSGj
6,039
Loading a column subset from a parquet file produces an error since version 2.13
{ "login": "kklemon", "id": 1430243, "node_id": "MDQ6VXNlcjE0MzAyNDM=", "avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kklemon", "html_url": "https://github.com/kklemon", "followers_url": "https://api.github.com/users/kklemon/followers", "following_url": "https://api.github.com/users/kklemon/following{/other_user}", "gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}", "starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kklemon/subscriptions", "organizations_url": "https://api.github.com/users/kklemon/orgs", "repos_url": "https://api.github.com/users/kklemon/repos", "events_url": "https://api.github.com/users/kklemon/events{/privacy}", "received_events_url": "https://api.github.com/users/kklemon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2023-07-16T09:13:07
2023-07-24T14:35:04
2023-07-24T14:35:04
NONE
null
null
null
### Describe the bug `load_dataset` allows loading a subset of columns from a parquet file with the `columns` argument. Since version 2.13, this produces the following error: ``` Traceback (most recent call last): File "/usr/lib/python3.10/site-packages/datasets/builder.py", line 1879, in _prepare_split_single for _, table in generator: File "/usr/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 68, in _generate_tables raise ValueError( ValueError: Tried to load parquet data with columns '['sepal_length']' with mismatching features '{'sepal_length': Value(dtype='float64', id=None), 'sepal_width': Value(dtype='float64', id=None), 'petal_length': Value(dtype='float64', id=None), 'petal_width': Value(dtype='float64', id=None), 'species': Value(dtype='string', id=None)}' ``` This seems to occur because `datasets` checks whether the columns in the schema exactly match the provided list of columns, instead of checking whether the requested columns are a subset of the schema. ### Steps to reproduce the bug ```python # Prepare some sample data import pandas as pd iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv') iris.to_parquet('iris.parquet') # ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'] print(iris.columns) # Load data with datasets from datasets import load_dataset # Load full parquet file dataset = load_dataset('parquet', data_files='iris.parquet') # Load column subset; throws an error for datasets>=2.13 dataset = load_dataset('parquet', data_files='iris.parquet', columns=['sepal_length']) ``` ### Expected behavior No error should be thrown and the given column subset should be loaded. ### Environment info - `datasets` version: 2.13.0 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.9 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
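For reference, below is a minimal sketch of the subset check that the bug report above implies. It is illustrative only, not the actual `datasets` source, and the variable names (`requested`, `schema_columns`) are hypothetical:

```python
# Illustrative sketch (assumed behavior, not the real datasets implementation):
# raise only when requested columns are missing from the parquet schema,
# rather than requiring an exact match against all schema columns.
requested = ["sepal_length"]
schema_columns = ["sepal_length", "sepal_width", "petal_length", "petal_width", "species"]

missing = set(requested) - set(schema_columns)
if missing:
    raise ValueError(f"Columns {sorted(missing)} were not found in the parquet schema")
# With a subset check like this, loading columns=['sepal_length'] would succeed.
```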
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6039/timeline
null
completed
false