| Column | Type | Lengths / values |
|---|---|---|
| url | string | lengths 58–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 600M–2.05B |
| node_id | string | lengths 18–32 |
| number | int64 | 2–6.51k |
| title | string | lengths 1–290 |
| user | dict | |
| labels | list | lengths 0–4 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–4 |
| milestone | dict | |
| comments | sequence | lengths 0–30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | string | 3 values |
| active_lock_reason | float64 | |
| draft | float64 | 0–1 |
| pull_request | dict | |
| body | string | lengths 0–228k |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | float64 | |
| state_reason | string | 3 values |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/5416
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5416/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5416/comments
https://api.github.com/repos/huggingface/datasets/issues/5416/events
https://github.com/huggingface/datasets/pull/5416
1,526,988,113
PR_kwDODunzps5HDLmR
5,416
Fix RuntimeError: Sharding is ambiguous for this dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "By the way, do we know how many datasets are impacted by this issue?\r\n\r\nMaybe we should do a patch release with this fix.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009256 / 0.011353 (-0.002097) | 0.005033 / 0.011008 (-0.005975) | 0.099346 / 0.038508 (0.060838) | 0.035204 / 0.023109 (0.012095) | 0.303017 / 0.275898 (0.027119) | 0.335632 / 0.323480 (0.012152) | 0.007953 / 0.007986 (-0.000033) | 0.005806 / 0.004328 (0.001477) | 0.076121 / 0.004250 (0.071871) | 0.041164 / 0.037052 (0.004112) | 0.305536 / 0.258489 (0.047047) | 0.348452 / 0.293841 (0.054611) | 0.037704 / 0.128546 (-0.090842) | 0.011982 / 0.075646 (-0.063664) | 0.333264 / 0.419271 (-0.086008) | 0.047738 / 0.043533 (0.004205) | 0.310126 / 0.255139 (0.054987) | 0.318719 / 0.283200 (0.035519) | 0.098933 / 0.141683 (-0.042750) | 1.421058 / 1.452155 (-0.031096) | 1.554771 / 1.492716 (0.062054) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.258627 / 0.018006 (0.240620) | 0.450814 / 0.000490 (0.450324) | 0.011288 / 0.000200 (0.011088) | 0.000136 / 0.000054 (0.000081) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027004 / 0.037411 (-0.010407) | 0.109067 / 0.014526 (0.094541) | 0.120401 / 0.176557 (-0.056155) | 0.158336 / 0.737135 (-0.578799) | 0.126244 / 0.296338 (-0.170094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401847 / 0.215209 (0.186638) | 4.006003 / 2.077655 (1.928348) | 1.806342 / 1.504120 (0.302223) | 1.619751 / 1.541195 (0.078556) | 1.709660 / 1.468490 (0.241170) | 0.692444 / 4.584777 (-3.892333) | 3.853691 / 3.745712 (0.107979) | 2.143910 / 5.269862 (-3.125951) | 1.471600 / 4.565676 (-3.094076) | 0.084589 / 0.424275 (-0.339686) | 0.012276 / 0.007607 (0.004669) | 0.506529 / 0.226044 (0.280485) | 5.028361 / 2.268929 (2.759432) | 2.277660 / 55.444624 (-53.166964) | 1.930365 / 6.876477 (-4.946112) | 1.965494 / 2.142072 (-0.176579) | 0.826752 / 4.805227 (-3.978475) | 0.165050 / 6.500664 (-6.335614) | 0.062702 / 0.075469 (-0.012767) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.234539 / 1.841788 (-0.607249) | 15.067401 / 8.074308 (6.993093) | 14.041920 / 10.191392 (3.850528) | 0.162590 / 0.680424 (-0.517834) | 0.028941 / 0.534201 (-0.505260) | 0.438518 / 0.579283 (-0.140765) | 0.443787 / 0.434364 (0.009423) | 0.516671 / 0.540337 (-0.023666) | 0.609036 / 1.386936 (-0.777900) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007535 / 0.011353 (-0.003818) | 0.005283 / 0.011008 (-0.005725) | 0.097116 / 0.038508 (0.058608) | 0.033357 / 0.023109 (0.010247) | 0.383398 / 0.275898 (0.107500) | 0.425516 / 0.323480 (0.102037) | 0.006039 / 0.007986 (-0.001947) | 0.004074 / 0.004328 (-0.000255) | 0.073207 / 0.004250 (0.068956) | 0.052153 / 0.037052 (0.015101) | 0.386385 / 0.258489 (0.127896) | 0.429900 / 0.293841 (0.136059) | 0.038341 / 0.128546 (-0.090205) | 0.012417 / 0.075646 (-0.063230) | 0.333859 / 0.419271 (-0.085413) | 0.051157 / 0.043533 (0.007625) | 0.395022 / 0.255139 (0.139883) | 0.402462 / 0.283200 (0.119262) | 0.105207 / 0.141683 (-0.036475) | 1.510679 / 1.452155 (0.058524) | 1.584205 / 1.492716 (0.091489) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.225805 / 0.018006 (0.207799) | 0.452109 / 0.000490 (0.451619) | 0.000429 / 0.000200 (0.000229) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029653 / 0.037411 (-0.007759) | 0.112609 / 0.014526 (0.098083) | 0.121828 / 0.176557 (-0.054728) | 0.159003 / 0.737135 (-0.578133) | 0.129306 / 0.296338 (-0.167033) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453001 / 0.215209 (0.237792) | 4.514882 / 2.077655 (2.437228) | 2.277494 / 1.504120 (0.773374) | 2.073870 / 1.541195 (0.532675) | 2.153346 / 1.468490 (0.684856) | 0.698363 / 4.584777 (-3.886414) | 3.921763 / 3.745712 (0.176051) | 2.123133 / 5.269862 (-3.146729) | 1.347618 / 4.565676 (-3.218058) | 0.085654 / 0.424275 (-0.338621) | 0.012059 / 0.007607 (0.004452) | 0.568183 / 0.226044 (0.342139) | 5.720047 / 2.268929 (3.451119) | 2.777973 / 55.444624 (-52.666651) | 2.453426 / 6.876477 (-4.423051) | 2.523977 / 2.142072 (0.381905) | 0.841979 / 4.805227 (-3.963248) | 0.167958 / 6.500664 (-6.332706) | 0.064929 / 0.075469 (-0.010540) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.235297 / 1.841788 (-0.606491) | 15.883598 / 8.074308 (7.809290) | 14.395328 / 10.191392 (4.203936) | 0.162401 / 0.680424 (-0.518022) | 0.017806 / 0.534201 (-0.516394) | 0.423853 / 0.579283 (-0.155430) | 0.423266 / 0.434364 (-0.011098) | 0.490351 / 0.540337 (-0.049986) | 0.588116 / 1.386936 (-0.798820) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bb3fbfa162bb4700e23d084826b4b7f6d97284be \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | 
read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010759 / 0.011353 (-0.000594) | 0.005748 / 0.011008 (-0.005260) | 0.119195 / 0.038508 (0.080687) | 0.033476 / 0.023109 (0.010367) | 0.364081 / 0.275898 (0.088183) | 0.422456 / 0.323480 (0.098976) | 0.009780 / 0.007986 (0.001795) | 0.006170 / 0.004328 (0.001841) | 0.093242 / 0.004250 (0.088991) | 0.041049 / 0.037052 (0.003997) | 0.372132 / 0.258489 (0.113643) | 0.442501 / 0.293841 (0.148660) | 0.054889 / 0.128546 (-0.073657) | 0.018302 / 0.075646 (-0.057345) | 0.378899 / 0.419271 (-0.040373) | 0.058455 / 0.043533 (0.014922) | 0.356141 / 0.255139 (0.101002) | 0.400866 / 0.283200 (0.117666) | 0.103384 / 0.141683 (-0.038299) | 1.629867 / 1.452155 (0.177713) | 1.693939 / 1.492716 (0.201222) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.240484 / 0.018006 (0.222478) | 0.509137 / 0.000490 (0.508648) | 0.000450 / 0.000200 (0.000250) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025856 / 0.037411 (-0.011555) | 0.113214 / 0.014526 (0.098689) | 0.119420 / 0.176557 (-0.057136) | 0.158663 / 0.737135 (-0.578473) | 0.123542 / 0.296338 (-0.172797) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.555900 / 0.215209 (0.340691) | 5.580295 / 2.077655 (3.502640) | 2.216640 / 1.504120 (0.712520) | 1.904944 / 1.541195 (0.363749) | 1.865839 / 1.468490 (0.397349) | 1.158325 / 4.584777 (-3.426452) | 5.097420 / 3.745712 (1.351708) | 2.881775 / 5.269862 (-2.388087) | 2.068896 / 4.565676 (-2.496780) | 0.129028 / 0.424275 (-0.295247) | 0.014075 / 0.007607 (0.006468) | 0.698874 / 0.226044 (0.472830) | 7.131225 / 2.268929 (4.862296) | 2.901686 / 55.444624 (-52.542939) | 2.186146 / 6.876477 (-4.690330) | 2.251172 / 2.142072 (0.109100) | 1.342264 / 4.805227 (-3.462963) | 0.232045 / 6.500664 (-6.268619) | 0.073520 / 0.075469 (-0.001949) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.431314 / 1.841788 (-0.410474) | 16.313055 / 8.074308 (8.238747) | 18.451552 / 10.191392 (8.260160) | 
0.232875 / 0.680424 (-0.447549) | 0.042170 / 0.534201 (-0.492031) | 0.495261 / 0.579283 (-0.084022) | 0.582901 / 0.434364 (0.148537) | 0.582049 / 0.540337 (0.041712) | 0.681122 / 1.386936 (-0.705814) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008131 / 0.011353 (-0.003222) | 0.006162 / 0.011008 (-0.004847) | 0.113721 / 0.038508 (0.075213) | 0.030797 / 0.023109 (0.007688) | 0.413108 / 0.275898 (0.137210) | 0.449968 / 0.323480 (0.126488) | 0.006126 / 0.007986 (-0.001860) | 0.004848 / 0.004328 (0.000519) | 0.085465 / 0.004250 (0.081214) | 0.045817 / 0.037052 (0.008764) | 0.419360 / 0.258489 (0.160871) | 0.489077 / 0.293841 (0.195236) | 0.050841 / 0.128546 (-0.077705) | 0.020646 / 0.075646 (-0.055000) | 0.379838 / 0.419271 (-0.039434) | 0.068897 / 0.043533 (0.025365) | 0.422182 / 0.255139 (0.167043) | 0.435529 / 0.283200 (0.152330) | 0.115299 / 0.141683 (-0.026384) | 1.655134 / 1.452155 (0.202979) | 1.835198 / 1.492716 (0.342481) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207041 / 0.018006 (0.189034) | 0.491263 / 0.000490 (0.490773) | 0.003554 / 0.000200 (0.003354) | 0.000104 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030830 / 0.037411 (-0.006582) | 0.127003 / 0.014526 (0.112477) | 0.142901 / 0.176557 (-0.033656) | 0.177570 / 0.737135 (-0.559565) | 0.137758 / 0.296338 (-0.158580) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.632820 / 0.215209 (0.417611) | 6.215535 / 2.077655 (4.137880) | 2.615310 / 1.504120 (1.111190) | 2.261431 / 1.541195 (0.720236) | 2.220570 / 1.468490 (0.752080) | 1.215820 / 4.584777 (-3.368957) | 5.247680 / 3.745712 (1.501968) | 3.120054 / 5.269862 (-2.149807) | 1.950947 / 4.565676 (-2.614730) | 0.149980 / 0.424275 (-0.274295) | 0.015241 / 0.007607 (0.007634) | 0.879714 / 0.226044 (0.653670) | 7.941913 / 2.268929 (5.672984) | 3.512456 / 55.444624 (-51.932168) | 2.693833 / 6.876477 (-4.182644) | 2.772780 / 2.142072 (0.630708) | 1.459581 / 4.805227 (-3.345646) | 0.264820 / 6.500664 (-6.235844) | 0.076698 / 0.075469 (0.001228) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.437719 / 1.841788 (-0.404068) | 16.750309 / 8.074308 (8.676001) | 18.646776 / 10.191392 (8.455384) | 0.227858 / 0.680424 (-0.452566) | 0.024239 / 0.534201 (-0.509962) | 0.486172 / 0.579283 (-0.093111) | 0.574731 / 0.434364 (0.140367) | 0.557776 / 0.540337 (0.017439) | 0.672921 / 1.386936 (-0.714015) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bb3fbfa162bb4700e23d084826b4b7f6d97284be \"CML watermark\")\n" ]
"2023-01-10T08:43:19Z"
"2023-01-18T17:12:17Z"
"2023-01-18T14:09:02Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5416.diff", "html_url": "https://github.com/huggingface/datasets/pull/5416", "merged_at": "2023-01-18T14:09:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/5416.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5416" }
This PR fixes `RuntimeError: Sharding is ambiguous for this dataset`. With this change, the error for ambiguous sharding is raised only if `num_proc > 1`. Fix #5415, fix #5414. Fix https://huggingface.co/datasets/ami/discussions/3.
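A hedged illustration of the behavior change (the dataset and config names are examples of affected datasets, not taken from the PR text):

```python
from datasets import load_dataset

# After this fix, an "ambiguously shardable" dataset such as ami loads
# fine without multiprocessing, since no sharding is needed:
ds = load_dataset("ami", "headset-single")

# The RuntimeError is still raised when sharding is actually required,
# i.e. when multiprocessing is requested:
# ds = load_dataset("ami", "headset-single", num_proc=2)
```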
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5416/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5416/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6213
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6213/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6213/comments
https://api.github.com/repos/huggingface/datasets/issues/6213/events
https://github.com/huggingface/datasets/pull/6213
1,880,592,987
PR_kwDODunzps5ZgHLO
6,213
Better list array values handling in cast/embed storage
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008451 / 0.011353 (-0.002902) | 0.005056 / 0.011008 (-0.005952) | 0.086367 / 0.038508 (0.047859) | 0.068030 / 0.023109 (0.044920) | 0.358812 / 0.275898 (0.082914) | 0.385790 / 0.323480 (0.062310) | 0.005608 / 0.007986 (-0.002378) | 0.004262 / 0.004328 (-0.000067) | 0.066618 / 0.004250 (0.062368) | 0.053901 / 0.037052 (0.016849) | 0.398456 / 0.258489 (0.139967) | 0.391681 / 0.293841 (0.097840) | 0.046743 / 0.128546 (-0.081804) | 0.014118 / 0.075646 (-0.061528) | 0.308479 / 0.419271 (-0.110793) | 0.064214 / 0.043533 (0.020681) | 0.367940 / 0.255139 (0.112801) | 0.387204 / 0.283200 (0.104004) | 0.036093 / 0.141683 (-0.105590) | 1.534182 / 1.452155 (0.082027) | 1.598357 / 1.492716 (0.105641) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265910 / 0.018006 (0.247904) | 0.589453 / 0.000490 (0.588963) | 0.004881 / 0.000200 (0.004681) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032540 / 0.037411 (-0.004872) | 0.083153 / 0.014526 (0.068627) | 0.098960 / 0.176557 (-0.077597) | 0.162044 / 0.737135 (-0.575091) | 0.093602 / 0.296338 (-0.202736) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.517056 / 0.215209 (0.301847) | 5.167908 / 2.077655 (3.090253) | 2.359856 / 1.504120 (0.855736) | 2.092448 / 1.541195 (0.551253) | 2.100270 / 1.468490 
(0.631780) | 0.742321 / 4.584777 (-3.842456) | 4.845010 / 3.745712 (1.099298) | 4.361808 / 5.269862 (-0.908054) | 2.621941 / 4.565676 (-1.943736) | 0.094907 / 0.424275 (-0.329369) | 0.009357 / 0.007607 (0.001750) | 0.719859 / 0.226044 (0.493814) | 6.929731 / 2.268929 (4.660802) | 3.240862 / 55.444624 (-52.203763) | 2.700817 / 6.876477 (-4.175659) | 2.904600 / 2.142072 (0.762527) | 0.924930 / 4.805227 (-3.880298) | 0.194390 / 6.500664 (-6.306274) | 0.078331 / 0.075469 (0.002862) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.539347 / 1.841788 (-0.302441) | 22.696358 / 8.074308 (14.622050) | 18.791692 / 10.191392 (8.600300) | 0.221376 / 0.680424 (-0.459048) | 0.029824 / 0.534201 (-0.504377) | 0.455604 / 0.579283 (-0.123679) | 0.573169 / 0.434364 (0.138805) | 0.507109 / 0.540337 (-0.033228) | 0.730986 / 1.386936 (-0.655950) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009308 / 0.011353 (-0.002045) | 0.005027 / 0.011008 (-0.005982) | 0.074094 / 0.038508 (0.035586) | 0.068277 / 0.023109 (0.045168) | 0.412716 / 0.275898 (0.136818) | 0.446883 / 0.323480 (0.123403) | 0.005864 / 0.007986 (-0.002122) | 0.003753 / 0.004328 (-0.000575) | 0.072575 / 0.004250 (0.068325) | 0.064434 / 0.037052 (0.027382) | 0.445395 / 0.258489 (0.186906) | 0.464520 / 0.293841 (0.170679) | 0.045303 / 0.128546 (-0.083243) | 0.013120 / 0.075646 (-0.062527) | 0.077830 / 0.419271 (-0.341441) | 0.057303 / 0.043533 (0.013770) | 0.420845 / 0.255139 (0.165706) | 0.431308 / 0.283200 (0.148109) | 0.033908 / 0.141683 (-0.107775) | 1.577667 / 1.452155 (0.125512) | 1.677321 / 1.492716 (0.184604) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.305855 / 0.018006 (0.287849) | 0.601442 / 0.000490 (0.600953) | 0.010722 / 0.000200 (0.010522) | 0.000158 / 0.000054 (0.000104) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029202 / 0.037411 (-0.008209) | 0.094576 / 0.014526 (0.080050) | 0.106734 / 0.176557 (-0.069822) | 0.168114 / 0.737135 (-0.569021) | 0.107241 / 0.296338 (-0.189098) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.643634 / 0.215209 (0.428425) | 6.391757 / 2.077655 (4.314103) | 3.011679 / 1.504120 (1.507559) | 2.379711 / 1.541195 (0.838517) | 2.387444 / 1.468490 (0.918954) | 0.823460 / 4.584777 (-3.761317) | 4.882240 / 3.745712 (1.136528) | 4.091170 / 5.269862 (-1.178691) | 2.688761 / 4.565676 (-1.876915) | 0.094555 / 0.424275 (-0.329720) | 0.008464 / 0.007607 (0.000857) | 0.665949 / 0.226044 (0.439905) | 6.948237 / 2.268929 (4.679309) | 3.384894 / 55.444624 (-52.059730) | 2.675570 / 6.876477 (-4.200907) | 3.073045 / 2.142072 (0.930973) | 0.969780 / 4.805227 (-3.835447) | 0.205859 / 6.500664 (-6.294805) | 0.072548 / 0.075469 (-0.002922) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.563869 / 1.841788 (-0.277919) | 22.431392 / 8.074308 (14.357084) | 19.434811 / 10.191392 (9.243419) | 0.255135 / 0.680424 (-0.425289) | 0.027799 / 0.534201 (-0.506402) | 0.427713 / 0.579283 (-0.151570) | 0.527030 / 0.434364 (0.092666) | 0.503660 / 0.540337 (-0.036678) | 0.730996 / 1.386936 (-0.655940) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#06c1940953807dbde4bc18af64bd3d87234edf00 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007597 / 0.011353 (-0.003756) | 0.004492 / 0.011008 (-0.006516) | 0.103779 / 0.038508 (0.065271) | 0.079287 / 0.023109 (0.056178) | 0.389651 / 0.275898 (0.113753) | 0.421955 / 0.323480 (0.098475) | 0.006023 / 0.007986 (-0.001963) | 0.003727 / 0.004328 (-0.000602) | 0.078604 / 0.004250 (0.074354) | 0.060810 / 0.037052 (0.023758) | 0.412170 / 0.258489 (0.153681) | 0.436218 / 0.293841 (0.142377) | 0.037282 / 0.128546 (-0.091264) | 0.010341 / 0.075646 (-0.065305) | 0.357652 / 0.419271 (-0.061620) | 0.063320 / 0.043533 (0.019788) | 0.389454 / 0.255139 (0.134315) | 0.433073 / 0.283200 (0.149874) | 0.028449 / 0.141683 (-0.113234) | 1.894107 / 1.452155 (0.441952) | 1.954190 / 1.492716 (0.461474) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224477 / 0.018006 (0.206471) | 0.510878 / 0.000490 (0.510388) | 0.005013 / 0.000200 (0.004813) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032976 / 0.037411 (-0.004436) | 0.101073 / 0.014526 (0.086547) | 0.113990 / 0.176557 (-0.062566) | 0.183499 / 0.737135 (-0.553636) | 0.114283 / 0.296338 (-0.182056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.473242 / 0.215209 (0.258033) | 4.719800 / 2.077655 (2.642146) | 2.318732 / 1.504120 (0.814612) | 2.102336 / 1.541195 (0.561141) | 2.143618 / 1.468490 (0.675128) | 0.594122 / 4.584777 (-3.990654) | 4.265961 / 3.745712 (0.520249) | 3.794635 / 5.269862 (-1.475226) | 2.394506 / 4.565676 (-2.171170) | 0.070091 / 0.424275 (-0.354184) | 0.009222 / 0.007607 (0.001614) | 0.564496 / 0.226044 (0.338452) | 5.644348 / 2.268929 (3.375419) | 2.934395 / 55.444624 (-52.510229) | 2.429076 / 6.876477 (-4.447401) | 2.592010 / 2.142072 (0.449937) | 0.713371 / 4.805227 (-4.091856) | 0.165019 / 6.500664 (-6.335646) | 0.075913 / 0.075469 (0.000444) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.570836 / 1.841788 (-0.270951) | 22.569763 / 8.074308 (14.495455) | 17.159658 / 10.191392 (6.968266) | 0.185716 / 0.680424 (-0.494708) | 0.021938 / 0.534201 (-0.512263) | 0.487204 / 0.579283 (-0.092079) | 0.472776 / 0.434364 (0.038412) | 0.565052 / 0.540337 (0.024714) | 0.763322 / 1.386936 (-0.623614) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007810 / 0.011353 (-0.003543) | 0.005140 / 0.011008 (-0.005869) | 0.079018 / 0.038508 (0.040510) | 0.080899 / 0.023109 (0.057790) | 0.489213 / 0.275898 (0.213315) | 0.525334 / 0.323480 (0.201854) | 0.006992 / 0.007986 (-0.000994) | 0.003729 / 0.004328 (-0.000599) | 0.079277 / 0.004250 (0.075026) | 0.064883 / 0.037052 (0.027831) | 0.496718 / 0.258489 (0.238229) | 0.534976 / 0.293841 (0.241135) | 0.038790 / 0.128546 (-0.089756) | 0.010122 / 0.075646 (-0.065524) | 0.087669 / 0.419271 (-0.331603) | 0.057959 / 0.043533 (0.014426) | 0.490611 / 0.255139 (0.235472) | 0.518376 / 0.283200 (0.235176) | 0.026561 / 0.141683 (-0.115122) | 1.843241 / 1.452155 (0.391086) | 1.952367 / 1.492716 (0.459651) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289799 / 0.018006 (0.271792) | 0.486999 / 0.000490 (0.486509) | 0.017481 / 0.000200 (0.017281) | 0.000122 / 0.000054 (0.000068) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037662 / 0.037411 (0.000250) | 0.113238 / 0.014526 (0.098712) | 0.123918 / 0.176557 (-0.052638) | 0.190484 / 0.737135 (-0.546652) | 0.126473 / 0.296338 (-0.169865) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.530622 / 0.215209 (0.315413) | 5.292093 / 2.077655 (3.214438) | 2.819354 / 1.504120 (1.315234) | 2.609821 / 1.541195 (1.068626) | 2.680090 / 1.468490 (1.211600) | 0.603490 / 4.584777 
(-3.981287) | 4.344541 / 3.745712 (0.598828) | 3.874001 / 5.269862 (-1.395861) | 2.445302 / 4.565676 (-2.120375) | 0.071173 / 0.424275 (-0.353102) | 0.009131 / 0.007607 (0.001524) | 0.627273 / 0.226044 (0.401229) | 6.278637 / 2.268929 (4.009709) | 3.433762 / 55.444624 (-52.010862) | 2.973400 / 6.876477 (-3.903077) | 3.188165 / 2.142072 (1.046093) | 0.722824 / 4.805227 (-4.082404) | 0.165154 / 6.500664 (-6.335510) | 0.075268 / 0.075469 (-0.000202) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.652994 / 1.841788 (-0.188794) | 23.309030 / 8.074308 (15.234722) | 18.135649 / 10.191392 (7.944257) | 0.177543 / 0.680424 (-0.502881) | 0.024784 / 0.534201 (-0.509417) | 0.489952 / 0.579283 (-0.089331) | 0.485368 / 0.434364 (0.051004) | 0.580583 / 0.540337 (0.040246) | 0.787843 / 1.386936 (-0.599093) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5982039f7814a204fe532240ca6aabe72430d834 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "A bug in `FixedSizeArray.flatten` in `PyArrow<10.0.0` makes CI fail. Colab installs 9.0.0 by default, so we should be able to set the minimal version to `10.0.0` soon. Keeping this PR as a draft in the meantime.", "Closing this PR in favor of https://github.com/huggingface/datasets/pull/6283" ]
"2023-09-04T16:21:23Z"
"2023-10-05T15:25:05Z"
"2023-10-05T15:24:34Z"
CONTRIBUTOR
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/6213.diff", "html_url": "https://github.com/huggingface/datasets/pull/6213", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6213.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6213" }
Use [`array.flatten`](https://arrow.apache.org/docs/python/generated/pyarrow.ListArray.html#pyarrow.ListArray.flatten), which takes `.offset` into account, instead of `array.values` in array cast/embed.
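To illustrate the difference this PR relies on (a minimal sketch, not code from the PR): `ListArray.values` returns the child array of the whole underlying buffer and ignores a slice's offset, while `flatten()` accounts for it.

```python
import pyarrow as pa

arr = pa.array([[1, 2], [3, 4], [5, 6]])
sliced = arr.slice(1)  # logically [[3, 4], [5, 6]], but offset = 1

print(sliced.values)     # [1, 2, 3, 4, 5, 6] -- ignores the offset
print(sliced.flatten())  # [3, 4, 5, 6]       -- honors the offset
```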
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6213/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6213/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4962
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4962/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4962/comments
https://api.github.com/repos/huggingface/datasets/issues/4962/events
https://github.com/huggingface/datasets/pull/4962
1,368,155,365
PR_kwDODunzps4-sh-o
4,962
Update setup.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4", "events_url": "https://api.github.com/users/DCNemesis/events{/privacy}", "followers_url": "https://api.github.com/users/DCNemesis/followers", "following_url": "https://api.github.com/users/DCNemesis/following{/other_user}", "gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DCNemesis", "id": 3616964, "login": "DCNemesis", "node_id": "MDQ6VXNlcjM2MTY5NjQ=", "organizations_url": "https://api.github.com/users/DCNemesis/orgs", "received_events_url": "https://api.github.com/users/DCNemesis/received_events", "repos_url": "https://api.github.com/users/DCNemesis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions", "type": "User", "url": "https://api.github.com/users/DCNemesis" }
[]
closed
false
null
[]
null
[ "Before addressing this PR, we should be sure about the issue. See my comment in:\r\n- https://github.com/huggingface/datasets/issues/4961#issuecomment-1243376247", "Once we know 2022.8.2 works, I'm closing this PR, as the corresponding issue." ]
"2022-09-09T17:57:56Z"
"2022-09-12T14:33:04Z"
"2022-09-12T14:33:04Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4962.diff", "html_url": "https://github.com/huggingface/datasets/pull/4962", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4962.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4962" }
Exclude the broken version of fsspec. See the [related issue](https://github.com/huggingface/datasets/issues/4961).
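A sketch of the kind of pin this PR proposes in `setup.py` (the exact requirement string is illustrative; per the linked issue and the comments above, the broken release was fsspec 2022.8.1 and 2022.8.2 shipped the fix):

```python
# setup.py (excerpt, illustrative): pin out the broken fsspec release.
REQUIRED_PKGS = [
    "fsspec[http]>=2021.05.0,!=2022.8.1",  # exclude the release from issue #4961
]
```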
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4962/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4962/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5984/comments
https://api.github.com/repos/huggingface/datasets/issues/5984/events
https://github.com/huggingface/datasets/issues/5984
1,771,571,458
I_kwDODunzps5pmAkC
5,984
AutoSharding IterableDatasets when num_workers > 1
{ "avatar_url": "https://avatars.githubusercontent.com/u/25594384?v=4", "events_url": "https://api.github.com/users/mathephysicist/events{/privacy}", "followers_url": "https://api.github.com/users/mathephysicist/followers", "following_url": "https://api.github.com/users/mathephysicist/following{/other_user}", "gists_url": "https://api.github.com/users/mathephysicist/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mathephysicist", "id": 25594384, "login": "mathephysicist", "node_id": "MDQ6VXNlcjI1NTk0Mzg0", "organizations_url": "https://api.github.com/users/mathephysicist/orgs", "received_events_url": "https://api.github.com/users/mathephysicist/received_events", "repos_url": "https://api.github.com/users/mathephysicist/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mathephysicist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mathephysicist/subscriptions", "type": "User", "url": "https://api.github.com/users/mathephysicist" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "For this to be possible, we would have to switch from the \"Streaming\" Arrow format to the \"Random Access\" (IPC/Feather) format, which allows reading arbitrary record batches (explained [here](https://arrow.apache.org/docs/python/ipc.html)). We could then use these batches to construct shards.\r\n\r\n@lhoestq @albertvillanova Do you think this use case is worth the switch? Also, we currently shard files, not inner row groups/chunks. Should we also support sharding row groups (e.g. if the number of input files is 1)?\r\n\r\nPS: I don't expect significant speed-up for local, uncompressed Arrow files.", "Alternatively we could support multiprocessing map for iterable datasets and let the user do the CPU intensive task there ?\r\n\r\nThis way it would work on arrow data but also on any iterable dataset", "> For this to be possible, we would have to switch from the \"Streaming\" Arrow format to the \"Random Access\" (IPC/Feather) format, which allows reading arbitrary record batches (explained [here](https://arrow.apache.org/docs/python/ipc.html)). We could then use these batches to construct shards.\r\n> \r\n> @lhoestq @albertvillanova Do you think this use case is worth the switch? Also, we currently shard files, not inner row groups/chunks. Should we also support sharding row groups (e.g. if the number of input files is 1)?\r\n> \r\n> PS: I don't expect significant speed-up for local, uncompressed Arrow files.\r\n\r\nCould you explain why you'd need to change the arrow format?\r\n\r\nWhen we use streaming datasets we simply determine the number of worker shards and then add some modulo logic at the appropriate place. Worst case scenario, you'd skip streaming entries according to the number of shards.\r\n\r\nFor PyTorch, I'd be happy to provide an implementation or a sketch thereof, if you point me toward what the testing requirements would be for such a PR.", "> Could you explain why you'd need to change the arrow format?\r\n\r\nThis way workers have random access to the location of the file where its dataset subset starts. Currently we're using the Arrow streaming format which doesn't include the metadata of the record batches offsets. This is needed here to efficiently split a dataset made of one single file.", "> > Could you explain why you'd need to change the arrow format?\r\n> \r\n> This way workers have random access to the location of the file where its dataset subset starts. Currently we're using the Arrow streaming format which doesn't include the metadata of the record batches offsets. This is needed here to efficiently split a dataset made of one single file.\r\n\r\nI guess I don't understand why you'd need to subset the dataset in the first place. \r\nIt seems sufficient to figure out how to offset or skip rows.\r\n\r\nFor instance, using pyArrow, you could use RecordBatchStreamReader to zero-copy iterate over records with read_next_batch and then only initiate the next step for records modulo worker shard.\r\nThat's one way to do it, where of course you'd need to account for gpu sharding as well.\r\n\r\n\r\nOtherwise, how did you implement worker/node/GPU sharding for iterable/streaming data where you do not have index information or prior splits (e.g. files)?", "> For instance, using pyArrow, you could use RecordBatchStreamReader to zero-copy iterate over records with read_next_batch and then only initiate the next step for records modulo worker shard.\r\n\r\nThat works indeed ! And what we meant is that you can make it even faster to instantiate. 
Indeed using RecordBatchStreamReader you need to get the list of all the record batches in each worker, whereas you could just get the list of record batches per worker if you use the record batches locations in the Arrow IPC file footer. This would be especially appreciated to have a fast instantiation in case you have tens of thousands of Arrow files for example.", "Any recent updates on this ? " ]
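A minimal sketch of the modulo approach discussed in these comments (the file path and shard parameters are hypothetical). With the streaming IPC format, every worker still scans all record batches; avoiding that scan via the record-batch locations in the IPC file footer is exactly what the proposed format switch would buy:

```python
import pyarrow as pa

def iter_batches_for_worker(path: str, worker_id: int, num_workers: int):
    # Each worker streams the whole file but yields only every
    # num_workers-th record batch, offset by its worker_id.
    with pa.OSFile(path, "rb") as source:
        reader = pa.ipc.open_stream(source)
        for i, batch in enumerate(reader):
            if i % num_workers == worker_id:
                yield batch
```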
"2023-06-23T14:34:20Z"
"2023-12-08T09:04:04Z"
null
NONE
null
null
null
### Feature request

Minimal example:

```python
import torch
from datasets import IterableDataset

d = IterableDataset.from_file(<file_name>)
dl = torch.utils.data.dataloader.DataLoader(d, num_workers=3)
for sample in dl:
    print(sample)
```

This prints the warning:

> Too many dataloader workers: 2 (max is dataset.n_shards=1). Stopping 1 dataloader workers. To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=1. To enable more parallelism, please split the dataset in more files than 1.

Expected behavior: the dataset is sharded and each CPU worker uses a subset (contiguously, so you can do checkpoint loading/saving).

### Motivation

I have a lot of unused CPUs and would like to be able to shard iterable datasets with PyTorch's dataloader when num_workers > 1. This is for a very large single file. I am aware that we can use `split_dataset_by_node` to ensure that each node (for distributed training) gets different shards, but we should extend this so it also applies to multiple workers (see the sketch below).

### Your contribution

If someone points me to what needs to change, I can create a PR.
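The `split_dataset_by_node` helper mentioned above already handles the distributed-node case; a minimal usage sketch (the repo id is hypothetical):

```python
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

ds = load_dataset("my-org/my-dataset", split="train", streaming=True)  # hypothetical repo
# Each node keeps a distinct subset of the shards; the request in this
# issue is to extend the same behavior to DataLoader workers within a node.
ds_rank = split_dataset_by_node(ds, rank=0, world_size=8)
```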
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5984/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5984/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/919/comments
https://api.github.com/repos/huggingface/datasets/issues/919/events
https://github.com/huggingface/datasets/issues/919
753,434,472
MDU6SXNzdWU3NTM0MzQ0NzI=
919
wrong length with datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[]
closed
false
null
[]
null
[ "Also, I cannot first convert it to torch format, since huggingface seq2seq_trainer codes process the datasets afterwards during datacollector function to make it optimize for TPUs. ", "sorry I misunderstood length of dataset with dataloader, closed. thanks " ]
"2020-11-30T12:23:39Z"
"2020-11-30T12:37:27Z"
"2020-11-30T12:37:26Z"
CONTRIBUTOR
null
null
null
Hi,
I have an MRPC dataset which I convert to seq2seq format; it then looks like this:

`Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10)`

I feed it to a dataloader:

```python
dataloader = DataLoader(
    train_dataset,
    batch_size=self.args.train_batch_size,
    sampler=train_sampler,
    collate_fn=self.data_collator,
    drop_last=self.args.dataloader_drop_last,
    num_workers=self.args.dataloader_num_workers,
)
```

Now if I check `len(dataloader)`, it is 1, which is wrong; it needs to be 10. Could you assist me please? Thanks.
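As the resolution in the comments above notes, `len(dataloader)` counts batches, not rows, so with 10 rows and a batch size of at least 10 a length of 1 is expected. A minimal sketch (the batch size is a hypothetical value consistent with the reported length):

```python
import math

num_rows = 10
batch_size = 10  # hypothetical value consistent with len(dataloader) == 1
# With drop_last=False, len(dataloader) is the number of batches:
num_batches = math.ceil(num_rows / batch_size)
assert num_batches == 1
```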
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/919/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/919/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3679/comments
https://api.github.com/repos/huggingface/datasets/issues/3679/events
https://github.com/huggingface/datasets/issues/3679
1,124,062,133
I_kwDODunzps5C_9O1
3,679
Download datasets from a private hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/3436143?v=4", "events_url": "https://api.github.com/users/juliensimon/events{/privacy}", "followers_url": "https://api.github.com/users/juliensimon/followers", "following_url": "https://api.github.com/users/juliensimon/following{/other_user}", "gists_url": "https://api.github.com/users/juliensimon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/juliensimon", "id": 3436143, "login": "juliensimon", "node_id": "MDQ6VXNlcjM0MzYxNDM=", "organizations_url": "https://api.github.com/users/juliensimon/orgs", "received_events_url": "https://api.github.com/users/juliensimon/received_events", "repos_url": "https://api.github.com/users/juliensimon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/juliensimon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juliensimon/subscriptions", "type": "User", "url": "https://api.github.com/users/juliensimon" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "A929D8", "default": false, "description": "", "id": 3814924348, "name": "private-hub", "node_id": "LA_kwDODunzps7jYyA8", "url": "https://api.github.com/repos/huggingface/datasets/labels/private-hub" } ]
closed
false
null
[]
null
[ "For reference:\r\nhttps://github.com/huggingface/transformers/issues/15514\r\nhttps://github.com/huggingface/huggingface_hub/issues/650", "Hi ! For information one can set the environment variable `HF_ENDPOINT` (default is `https://huggingface.co`) if they want to use a private hub.\r\n\r\nWe may need to coordinate with the other libraries to have a consistent way of changing the hub endpoint", "Yes, I tested it successfully this morning. Thanks." ]
"2022-02-04T10:49:06Z"
"2022-02-22T11:08:07Z"
"2022-02-22T11:08:07Z"
NONE
null
null
null
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted. The same issue exists with the transformers library and the CLI. I'm going to create issues there as well, and I'll reference them below.
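A hedged sketch of the `HF_ENDPOINT` workaround mentioned in the comments above; the endpoint URL and repo id are placeholders, and the variable must be set before `datasets` is imported because the endpoint is read at import time:

```python
import os

# Point the library at a private hub deployment (placeholder URL);
# set this before importing datasets.
os.environ["HF_ENDPOINT"] = "https://hub.mycompany.internal"

from datasets import load_dataset

# "my-org/my-dataset" is a hypothetical private repo on that hub.
dataset = load_dataset("my-org/my-dataset", use_auth_token=True)
```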
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3679/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3679/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1078
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1078/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1078/comments
https://api.github.com/repos/huggingface/datasets/issues/1078/events
https://github.com/huggingface/datasets/pull/1078
756,633,215
MDExOlB1bGxSZXF1ZXN0NTMyMTUyMzgx
1,078
add AJGT dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zaidalyafeai", "id": 15667714, "login": "zaidalyafeai", "node_id": "MDQ6VXNlcjE1NjY3NzE0", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "type": "User", "url": "https://api.github.com/users/zaidalyafeai" }
[]
closed
false
null
[]
null
[]
"2020-12-03T22:16:31Z"
"2020-12-04T09:55:15Z"
"2020-12-04T09:55:15Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1078.diff", "html_url": "https://github.com/huggingface/datasets/pull/1078", "merged_at": "2020-12-04T09:55:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/1078.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1078" }
Arabic Jordanian General Tweets.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1078/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1078/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2549
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2549/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2549/comments
https://api.github.com/repos/huggingface/datasets/issues/2549/events
https://github.com/huggingface/datasets/issues/2549
929,819,093
MDU6SXNzdWU5Mjk4MTkwOTM=
2,549
Handling unlabeled datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4", "events_url": "https://api.github.com/users/nelson-liu/events{/privacy}", "followers_url": "https://api.github.com/users/nelson-liu/followers", "following_url": "https://api.github.com/users/nelson-liu/following{/other_user}", "gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nelson-liu", "id": 7272031, "login": "nelson-liu", "node_id": "MDQ6VXNlcjcyNzIwMzE=", "organizations_url": "https://api.github.com/users/nelson-liu/orgs", "received_events_url": "https://api.github.com/users/nelson-liu/received_events", "repos_url": "https://api.github.com/users/nelson-liu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions", "type": "User", "url": "https://api.github.com/users/nelson-liu" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi @nelson-liu,\r\n\r\nYou can pass the parameter `features` to `load_dataset`: https://huggingface.co/docs/datasets/_modules/datasets/load.html#load_dataset\r\n\r\nIf you look at the code of the MNLI script you referred in your question (https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py#L62-L77), you can see how the Features were originally specified. \r\n\r\nFeel free to use it as a template, customize it and pass it to `load_dataset` using the parameter `features`.", "ah got it, thanks!" ]
"2021-06-25T04:32:23Z"
"2021-06-25T21:07:57Z"
"2021-06-25T21:07:56Z"
NONE
null
null
null
Hi! Is there a way for datasets to produce unlabeled instances (i.e., can the `ClassLabel` be nullable)? For example, I want to use the MNLI dataset reader ( https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py ) on a file that doesn't have the `gold_label` field. I tried setting `"label": data.get("gold_label")`, but got the following error: ``` File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 989, in _prepare_split example = self.info.features.encode_example(record) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 953, in encode_example return encode_nested_example(self, example) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in encode_nested_example k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in <dictcomp> k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 875, in encode_nested_example return schema.encode_example(obj) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 653, in encode_example if not -1 <= example_data < self.num_classes: TypeError: '<=' not supported between instances of 'int' and 'NoneType' ``` What's the proper way to handle reading unlabeled datasets, especially for downstream usage with Transformers?
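A sketch of the workaround suggested in the comments above: pass your own `features` to `load_dataset`, using a plain nullable value type instead of a `ClassLabel`. The column names mirror the MNLI script and the file path is a placeholder:

```python
from datasets import Features, Value, load_dataset

# Arrow columns are nullable, so a missing gold_label becomes None instead
# of failing ClassLabel's integer range check.
features = Features(
    {
        "premise": Value("string"),
        "hypothesis": Value("string"),
        "label": Value("string"),  # plain string / None rather than ClassLabel
    }
)

dataset = load_dataset("json", data_files="unlabeled.jsonl", features=features)
```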
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2549/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2549/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5298/comments
https://api.github.com/repos/huggingface/datasets/issues/5298/events
https://github.com/huggingface/datasets/issues/5298
1,464,681,871
I_kwDODunzps5XTUWP
5,298
Bug in xopen with Windows pathnames
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2022-11-25T15:21:32Z"
"2022-11-29T08:21:25Z"
"2022-11-29T08:21:25Z"
MEMBER
null
null
null
Currently, the `xopen` function has a bug with local Windows pathnames, as can be seen from its implementation: ```python def xopen(file: str, mode="r", *args, **kwargs): file = _as_posix(PurePath(file)) main_hop, *rest_hops = file.split("::") if is_local_path(main_hop): return open(file, mode, *args, **kwargs) ``` On a Windows machine, if we pass the argument: ```python xopen("C:\\Users\\USERNAME\\filename.txt") ``` it returns ```python open("C:/Users/USERNAME/filename.txt") ```
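A hedged sketch of one possible fix (not necessarily the change that was merged): decide locality on the raw path before any POSIX conversion, so local Windows pathnames reach `open()` untouched. `is_local_path` and `_as_posix` are assumed to be the module's existing helpers:

```python
from pathlib import PurePath

def xopen(file: str, mode="r", *args, **kwargs):
    # Inspect the raw string first: "C:\\Users\\..." must not be rewritten
    # before it reaches open().
    main_hop, *rest_hops = str(file).split("::")
    if is_local_path(main_hop):
        return open(file, mode, *args, **kwargs)
    # Only remote / chained URLs go through the POSIX normalization.
    file = _as_posix(PurePath(file))
    ...
```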
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5298/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5298/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2270/comments
https://api.github.com/repos/huggingface/datasets/issues/2270/events
https://github.com/huggingface/datasets/pull/2270
868,913,660
MDExOlB1bGxSZXF1ZXN0NjI0MzU5Njky
2,270
Fix iterable interface expected by numpy
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "It's been fixed in this commit: https://github.com/huggingface/datasets/commit/549110e08238b3716a5904667095fb003acda54e\r\n\r\nBasically #2246 broke querying an index with a simple iterable.\r\nWith the fix, it's again possible to use iterables and we can keep RandIter as it is.\r\n\r\nClosing since the fix is already on master" ]
"2021-04-27T14:35:56Z"
"2021-04-28T17:39:27Z"
"2021-04-28T17:39:27Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2270.diff", "html_url": "https://github.com/huggingface/datasets/pull/2270", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2270.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2270" }
Numpy expects the old iterable interface with `__getitem__` instead of `__iter__`.
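For illustration, a tiny self-contained example of that older sequence protocol: an object exposing only `__len__` and `__getitem__` (no `__iter__`) that numpy can still convert. `RandIterLike` is a hypothetical stand-in for the class touched by the fix:

```python
import numpy as np

class RandIterLike:
    """Old-style sequence: __len__ plus __getitem__, deliberately no __iter__."""

    def __init__(self, values):
        self._values = values

    def __len__(self):
        return len(self._values)

    def __getitem__(self, i):
        return self._values[i]

print(np.asarray(RandIterLike([1, 2, 3])))  # -> array([1, 2, 3])
```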
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2270/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2270/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1598/comments
https://api.github.com/repos/huggingface/datasets/issues/1598/events
https://github.com/huggingface/datasets/pull/1598
770,332,440
MDExOlB1bGxSZXF1ZXN0NTQyMDk2NTM4
1,598
made suggested changes in fake-news-english
{ "avatar_url": "https://avatars.githubusercontent.com/u/15351802?v=4", "events_url": "https://api.github.com/users/MisbahKhan789/events{/privacy}", "followers_url": "https://api.github.com/users/MisbahKhan789/followers", "following_url": "https://api.github.com/users/MisbahKhan789/following{/other_user}", "gists_url": "https://api.github.com/users/MisbahKhan789/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MisbahKhan789", "id": 15351802, "login": "MisbahKhan789", "node_id": "MDQ6VXNlcjE1MzUxODAy", "organizations_url": "https://api.github.com/users/MisbahKhan789/orgs", "received_events_url": "https://api.github.com/users/MisbahKhan789/received_events", "repos_url": "https://api.github.com/users/MisbahKhan789/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MisbahKhan789/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MisbahKhan789/subscriptions", "type": "User", "url": "https://api.github.com/users/MisbahKhan789" }
[]
closed
false
null
[]
null
[]
"2020-12-17T20:06:29Z"
"2020-12-18T09:43:58Z"
"2020-12-18T09:43:57Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1598.diff", "html_url": "https://github.com/huggingface/datasets/pull/1598", "merged_at": "2020-12-18T09:43:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/1598.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1598" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1598/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1598/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2805
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2805/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2805/comments
https://api.github.com/repos/huggingface/datasets/issues/2805/events
https://github.com/huggingface/datasets/pull/2805
971,436,456
MDExOlB1bGxSZXF1ZXN0NzEzMTc3MTI4
2,805
Fix streaming zip files from canonical datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-08-16T07:11:40Z"
"2021-08-16T10:34:00Z"
"2021-08-16T10:34:00Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2805.diff", "html_url": "https://github.com/huggingface/datasets/pull/2805", "merged_at": "2021-08-16T10:34:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/2805.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2805" }
Previous PR #2798 fixed streaming remote zip files when passing the parameter `data_files`. However, that broke streaming zip files used in canonical `datasets` scripts, which normally have a subsequent `join()` (patched with `xjoin()`) after `StreamingDownloadManager.download_and_extract()` is called. This PR fixes the issue and allows streaming zip files both from: - canonical datasets scripts and - data files.
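For reference, a rough sketch of the canonical-script pattern this re-enables; `_URL`, the archive member name, and the builder context are placeholders:

```python
import os
import datasets

_URL = "https://example.com/data.zip"  # placeholder zip URL

def _split_generators(self, dl_manager):
    # In streaming mode download_and_extract() returns a chained
    # "zip://::https://..." URL rather than a local directory.
    archive = dl_manager.download_and_extract(_URL)
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            # os.path.join is patched to xjoin(), which appends the member
            # path to the zip URL instead of building a local filesystem path.
            gen_kwargs={"filepath": os.path.join(archive, "train.txt")},
        )
    ]
```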
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2805/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2805/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1398
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1398/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1398/comments
https://api.github.com/repos/huggingface/datasets/issues/1398/events
https://github.com/huggingface/datasets/pull/1398
760,497,024
MDExOlB1bGxSZXF1ZXN0NTM1MzE4NTg5
1,398
Add Neural Code Search Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/34424769?v=4", "events_url": "https://api.github.com/users/vinaykudari/events{/privacy}", "followers_url": "https://api.github.com/users/vinaykudari/followers", "following_url": "https://api.github.com/users/vinaykudari/following{/other_user}", "gists_url": "https://api.github.com/users/vinaykudari/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vinaykudari", "id": 34424769, "login": "vinaykudari", "node_id": "MDQ6VXNlcjM0NDI0NzY5", "organizations_url": "https://api.github.com/users/vinaykudari/orgs", "received_events_url": "https://api.github.com/users/vinaykudari/received_events", "repos_url": "https://api.github.com/users/vinaykudari/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vinaykudari/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vinaykudari/subscriptions", "type": "User", "url": "https://api.github.com/users/vinaykudari" }
[]
closed
false
null
[]
null
[ "@lhoestq Refactored into new branch, please review :) ", "The `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine", "merging since the CI is fixed on master" ]
"2020-12-09T16:52:16Z"
"2020-12-09T18:02:27Z"
"2020-12-09T18:02:27Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1398.diff", "html_url": "https://github.com/huggingface/datasets/pull/1398", "merged_at": "2020-12-09T18:02:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/1398.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1398" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1398/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1398/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5903/comments
https://api.github.com/repos/huggingface/datasets/issues/5903/events
https://github.com/huggingface/datasets/pull/5903
1,727,372,549
PR_kwDODunzps5RbV82
5,903
Relax `ci.yml` trigger for `pull_request` based on modified paths
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[]
open
false
null
[]
null
[ "Also this could be extended to the rest of the GitHub Action `yml` files, so let me know whether you want me to have a look into it! 🤗", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5903). All of your documentation changes will be reflected on that endpoint.", "Maybe we can add\r\n```python\r\npaths-ignore:\r\n - \"docs/**\"\r\n```\r\nto `ci.yml` and `benchmarks.yml`. The other supporting files are not modified often, so leaving them out is fine." ]
"2023-05-26T10:46:52Z"
"2023-09-07T15:52:36Z"
null
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5903.diff", "html_url": "https://github.com/huggingface/datasets/pull/5903", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5903.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5903" }
## What's in this PR? In a previous PR, #5902, I saw that the CI was automatically triggered on any modified file, in that case a Jupyter Notebook (.ipynb), which IMO could be skipped, since modifying the notebook has no effect/impact on the `ci.yml` outcome. So this PR restricts the paths that trigger `ci.yml`, to avoid wasting resources when they are not needed. ## What's pending in this PR? I would like to confirm whether this should affect both `push` and `pull_request`: modifications to those files alone won't change the `ci.yml` outcome either way, so maybe it's worth skipping them in the `push` trigger too.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5903/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5903/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/324/comments
https://api.github.com/repos/huggingface/datasets/issues/324/events
https://github.com/huggingface/datasets/issues/324
647,525,725
MDU6SXNzdWU2NDc1MjU3MjU=
324
Error when calculating glue score
{ "avatar_url": "https://avatars.githubusercontent.com/u/47185867?v=4", "events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/events{/privacy}", "followers_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/followers", "following_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/following{/other_user}", "gists_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/D-i-l-r-u-k-s-h-i", "id": 47185867, "login": "D-i-l-r-u-k-s-h-i", "node_id": "MDQ6VXNlcjQ3MTg1ODY3", "organizations_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/orgs", "received_events_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/received_events", "repos_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/repos", "site_admin": false, "starred_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i/subscriptions", "type": "User", "url": "https://api.github.com/users/D-i-l-r-u-k-s-h-i" }
[]
closed
false
null
[]
null
[ "The glue metric for cola is a metric for classification. It expects label ids as integers as inputs.", "I want to evaluate a sentence pair whether they are semantically equivalent, so I used MRPC and it gives the same error, does that mean we have to encode the sentences and parse as input?\r\n\r\nusing BertTokenizer;\r\n```\r\nencoded_reference=tokenizer.encode(reference, add_special_tokens=False)\r\nencoded_prediction=tokenizer.encode(prediction, add_special_tokens=False)\r\n```\r\n\r\n`glue_score = glue_metric.compute(encoded_prediction, encoded_reference)`\r\n```\r\n\r\nValueError Traceback (most recent call last)\r\n<ipython-input-9-4c3a3ce7b583> in <module>()\r\n----> 1 glue_score = glue_metric.compute(encoded_prediction, encoded_reference)\r\n\r\n6 frames\r\n/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)\r\n 198 predictions = self.data[\"predictions\"]\r\n 199 references = self.data[\"references\"]\r\n--> 200 output = self._compute(predictions=predictions, references=references, **metrics_kwargs)\r\n 201 return output\r\n 202 \r\n\r\n/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in _compute(self, predictions, references)\r\n 101 return pearson_and_spearman(predictions, references)\r\n 102 elif self.config_name in [\"mrpc\", \"qqp\"]:\r\n--> 103 return acc_and_f1(predictions, references)\r\n 104 elif self.config_name in [\"sst2\", \"mnli\", \"mnli_mismatched\", \"mnli_matched\", \"qnli\", \"rte\", \"wnli\", \"hans\"]:\r\n 105 return {\"accuracy\": simple_accuracy(predictions, references)}\r\n\r\n/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in acc_and_f1(preds, labels)\r\n 60 def acc_and_f1(preds, labels):\r\n 61 acc = simple_accuracy(preds, labels)\r\n---> 62 f1 = f1_score(y_true=labels, y_pred=preds)\r\n 63 return {\r\n 64 \"accuracy\": acc,\r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in f1_score(y_true, y_pred, labels, pos_label, average, sample_weight, zero_division)\r\n 1097 pos_label=pos_label, average=average,\r\n 1098 sample_weight=sample_weight,\r\n-> 1099 zero_division=zero_division)\r\n 1100 \r\n 1101 \r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in fbeta_score(y_true, y_pred, beta, labels, pos_label, average, sample_weight, zero_division)\r\n 1224 warn_for=('f-score',),\r\n 1225 sample_weight=sample_weight,\r\n-> 1226 zero_division=zero_division)\r\n 1227 return f\r\n 1228 \r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in precision_recall_fscore_support(y_true, y_pred, beta, labels, pos_label, average, warn_for, sample_weight, zero_division)\r\n 1482 raise ValueError(\"beta should be >=0 in the F-beta score\")\r\n 1483 labels = _check_set_wise_labels(y_true, y_pred, average, labels,\r\n-> 1484 pos_label)\r\n 1485 \r\n 1486 # Calculate tp_sum, pred_sum, true_sum ###\r\n\r\n/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in _check_set_wise_labels(y_true, y_pred, average, labels, pos_label)\r\n 1314 raise ValueError(\"Target is %s but average='binary'. 
Please \"\r\n 1315 \"choose another average setting, one of %r.\"\r\n-> 1316 % (y_type, average_options))\r\n 1317 elif pos_label not in (None, 1):\r\n 1318 warnings.warn(\"Note that pos_label (set to %r) is ignored when \"\r\n\r\nValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].\r\n\r\n```", "MRPC is also a binary classification task, so its metric is a binary classification metric.\r\n\r\nTo evaluate if pairs of sentences are semantically equivalent, maybe you could take a look at models that compute if one sentence entails the other or not (typically the kinds of model that could work well on the MRPC task).", "Closing this one. Feel free to re-open if you have other questions :)" ]
"2020-06-29T16:53:48Z"
"2020-07-09T09:13:34Z"
"2020-07-09T09:13:34Z"
NONE
null
null
null
I was trying the glue score along with other metrics here, but glue gives me this error: ``` import nlp glue_metric = nlp.load_metric('glue',name="cola") glue_score = glue_metric.compute(predictions, references) ``` ``` --------------------------------------------------------------------------- --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-b9210a524504> in <module>() ----> 1 glue_score = glue_metric.compute(predictions, references) 6 frames /usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs) 191 """ 192 if predictions is not None: --> 193 self.add_batch(predictions=predictions, references=references) 194 self.finalize(timeout=timeout) 195 /usr/local/lib/python3.6/dist-packages/nlp/metric.py in add_batch(self, predictions, references, **kwargs) 207 if self.writer is None: 208 self._init_writer() --> 209 self.writer.write_batch(batch) 210 211 def add(self, prediction=None, reference=None, **kwargs): /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 155 if self.pa_writer is None: 156 self._build_writer(pa_table=pa.Table.from_pydict(batch_examples)) --> 157 pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema) 158 if writer_batch_size is None: 159 writer_batch_size = self.writer_batch_size /usr/local/lib/python3.6/dist-packages/pyarrow/types.pxi in __iter__() /usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray() /usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() TypeError: an integer is required (got type str) ``` I'm not sure whether I'm doing this wrong or whether it's an issue. I would like to know a workaround. Thank you.
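A minimal sketch of the usage described in the comments above, with made-up integer label ids (the metric expects class ids, not strings):

```python
import nlp

glue_metric = nlp.load_metric("glue", name="cola")

predictions = [0, 1, 1, 0]  # hypothetical model outputs as class ids
references = [0, 1, 0, 0]   # hypothetical gold labels as class ids

glue_score = glue_metric.compute(predictions, references)
print(glue_score)  # for cola this reports a matthews_correlation value
```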
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/324/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/324/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3629/comments
https://api.github.com/repos/huggingface/datasets/issues/3629/events
https://github.com/huggingface/datasets/pull/3629
1,113,971,575
PR_kwDODunzps4xkCZA
3,629
Fix Hub repos update when there's a new release
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2022-01-25T14:39:45Z"
"2022-01-25T14:55:46Z"
"2022-01-25T14:55:46Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3629.diff", "html_url": "https://github.com/huggingface/datasets/pull/3629", "merged_at": "2022-01-25T14:55:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/3629.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3629" }
It was not listing the full list of datasets correctly (cc @SBrandeis); this is why it failed for 1.18.0. We should be good now!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3629/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3629/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/947/comments
https://api.github.com/repos/huggingface/datasets/issues/947/events
https://github.com/huggingface/datasets/pull/947
754,286,658
MDExOlB1bGxSZXF1ZXN0NTMwMjEyMjc3
947
Add europeana newspapers
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
[]
"2020-12-01T10:52:18Z"
"2020-12-02T09:42:35Z"
"2020-12-02T09:42:09Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/947.diff", "html_url": "https://github.com/huggingface/datasets/pull/947", "merged_at": "2020-12-02T09:42:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/947.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/947" }
This PR adds the [Europeana newspapers](https://github.com/EuropeanaNewspapers/ner-corpora) dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/947/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/947/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5488/comments
https://api.github.com/repos/huggingface/datasets/issues/5488/events
https://github.com/huggingface/datasets/issues/5488
1,565,025,262
I_kwDODunzps5dSGPu
5,488
Error loading MP3 files from CommonVoice
{ "avatar_url": "https://avatars.githubusercontent.com/u/110259722?v=4", "events_url": "https://api.github.com/users/kradonneoh/events{/privacy}", "followers_url": "https://api.github.com/users/kradonneoh/followers", "following_url": "https://api.github.com/users/kradonneoh/following{/other_user}", "gists_url": "https://api.github.com/users/kradonneoh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kradonneoh", "id": 110259722, "login": "kradonneoh", "node_id": "U_kgDOBpJuCg", "organizations_url": "https://api.github.com/users/kradonneoh/orgs", "received_events_url": "https://api.github.com/users/kradonneoh/received_events", "repos_url": "https://api.github.com/users/kradonneoh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kradonneoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kradonneoh/subscriptions", "type": "User", "url": "https://api.github.com/users/kradonneoh" }
[]
closed
false
null
[]
null
[ "Hi @kradonneoh, thanks for reporting.\r\n\r\nPlease note that to work with audio datasets (and specifically with MP3 files) we have detailed installation instructions in our docs: https://huggingface.co/docs/datasets/installation#audio\r\n- one of the requirements is torchaudio<0.12.0\r\n\r\nLet us know if the problem persists after having followed them.", "I saw that and have followed it (hence the Expected Behavior section of the bug report). \r\n\r\nIs there no intention of updating to the latest version? It does limit the version of `torch` I can use, which isn’t ideal.", "@kradonneoh hey! actually with `ffmpeg4` loading of mp3 files should work, so this is a not expected behavior and we need to investigate it. It works on my side with `torchaudio==0.13` and `ffmpeg==4.2.7`. Which `torchaudio` version do you use?\r\n\r\n`datasets` should support decoding of mp3 files with `torchaudio` when its version is `>0.12` but as you noted it requires `ffmpeg>4`, we need to fix this in the documentation, thank you for pointing to this! \r\n\r\nBut according to your traceback it seems that it tries to use [`libsndfile`](https://github.com/libsndfile/libsndfile) backend for mp3 decoding. And `libsndfile` library supports mp3 decoding starting from version 1.1.0 which on Linux has to be compiled from source for now afaik. \r\n\r\nfyi - we are aiming at getting rid of `torchaudio` dependency at all by the next major library release in favor of `libsndfile` too.", "We now decode MP3 with `soundfile`, so I'm closing this issue" ]
"2023-01-31T21:25:33Z"
"2023-03-02T16:25:14Z"
"2023-03-02T16:25:13Z"
NONE
null
null
null
### Describe the bug When loading a CommonVoice dataset with `datasets==2.9.0` and `torchaudio>=0.12.0`, I get an error reading the audio arrays: ```python --------------------------------------------------------------------------- LibsndfileError Traceback (most recent call last) ~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3(self, path_or_file) 310 try: # try torchaudio anyway because sometimes it works (depending on the os and os packages installed) --> 311 array, sampling_rate = self._decode_mp3_torchaudio(path_or_file) 312 except RuntimeError: ~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3_torchaudio(self, path_or_file) 351 --> 352 array, sampling_rate = torchaudio.load(path_or_file, format="mp3") 353 if self.sampling_rate and self.sampling_rate != sampling_rate: ~/.local/lib/python3.8/site-packages/torchaudio/backend/soundfile_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format) 204 """ --> 205 with soundfile.SoundFile(filepath, "r") as file_: 206 if file_.format != "WAV" or normalize: ~/.local/lib/python3.8/site-packages/soundfile.py in __init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 654 format, subtype, endian) --> 655 self._file = self._open(file, mode_int, closefd) 656 if set(mode).issuperset('r+') and self.seekable(): ~/.local/lib/python3.8/site-packages/soundfile.py in _open(self, file, mode_int, closefd) 1212 err = _snd.sf_error(file_ptr) -> 1213 raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name)) 1214 if mode_int == _snd.SFM_WRITE: LibsndfileError: Error opening <_io.BytesIO object at 0x7fa539462090>: File contains data in an unknown format. ``` I assume this is because there's some issue with the mp3 decoding process. I've verified that I have `ffmpeg>=4` (on a Linux distro), which appears to be the fallback backend for `torchaudio` (at least according to #4889). ### Steps to reproduce the bug ```python dataset = load_dataset("mozilla-foundation/common_voice_11_0", "be", split="train") dataset[0] ``` ### Expected behavior Similar behavior to `torchaudio<0.12.0`, which doesn't result in a `LibsndfileError`. ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 10.0.1 - Pandas version: 1.5.1
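Per the resolution in the comments above, newer releases decode MP3 through `soundfile`; a hedged sketch of the post-fix usage (the install hint and sampling rate are indicative, not exact requirements):

```python
# pip install -U datasets soundfile
# (recent soundfile wheels bundle libsndfile >= 1.1, which decodes MP3
# without needing torchaudio or ffmpeg)
from datasets import Audio, load_dataset

dataset = load_dataset("mozilla-foundation/common_voice_11_0", "be", split="train")
# Optionally resample on the fly while decoding.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
print(dataset[0]["audio"]["array"].shape)
```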
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5488/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5488/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2508
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2508/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2508/comments
https://api.github.com/repos/huggingface/datasets/issues/2508/events
https://github.com/huggingface/datasets/issues/2508
921,863,173
MDU6SXNzdWU5MjE4NjMxNzM=
2,508
Load Image Classification Dataset from Local
{ "avatar_url": "https://avatars.githubusercontent.com/u/8428198?v=4", "events_url": "https://api.github.com/users/Jacobsolawetz/events{/privacy}", "followers_url": "https://api.github.com/users/Jacobsolawetz/followers", "following_url": "https://api.github.com/users/Jacobsolawetz/following{/other_user}", "gists_url": "https://api.github.com/users/Jacobsolawetz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Jacobsolawetz", "id": 8428198, "login": "Jacobsolawetz", "node_id": "MDQ6VXNlcjg0MjgxOTg=", "organizations_url": "https://api.github.com/users/Jacobsolawetz/orgs", "received_events_url": "https://api.github.com/users/Jacobsolawetz/received_events", "repos_url": "https://api.github.com/users/Jacobsolawetz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Jacobsolawetz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jacobsolawetz/subscriptions", "type": "User", "url": "https://api.github.com/users/Jacobsolawetz" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateraw", "id": 32437151, "login": "nateraw", "node_id": "MDQ6VXNlcjMyNDM3MTUx", "organizations_url": "https://api.github.com/users/nateraw/orgs", "received_events_url": "https://api.github.com/users/nateraw/received_events", "repos_url": "https://api.github.com/users/nateraw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "type": "User", "url": "https://api.github.com/users/nateraw" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateraw", "id": 32437151, "login": "nateraw", "node_id": "MDQ6VXNlcjMyNDM3MTUx", "organizations_url": "https://api.github.com/users/nateraw/orgs", "received_events_url": "https://api.github.com/users/nateraw/received_events", "repos_url": "https://api.github.com/users/nateraw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "type": "User", "url": "https://api.github.com/users/nateraw" }, { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
[ "Hi ! Is this folder structure a standard, a bit like imagenet ?\r\nIn this case maybe we can consider having a dataset loader for cifar-like, imagenet-like, squad-like, conll-like etc. datasets ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nmy_custom_cifar = load_dataset(\"cifar_like\", data_dir=\"path/to/data/dir\")\r\n```\r\n\r\nLet me know what you think", "Yep that would be sweet - closing for now as we found a workaround. ", "@lhoestq I think we'll want a generic `image-folder` dataset (same as 'imagenet-like'). This is like `torchvision.datasets.ImageFolder`, and is something vision folks are used to seeing.", "Opening this back up, since I'm planning on tackling this. Already posted a quick version of it on my account on the hub.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('nateraw/image-folder', data_files='PetImages/')\r\n```", "Bumping this one following our recent discussion @mariosasko @nateraw :)" ]
"2021-06-15T22:43:33Z"
"2022-03-01T16:29:44Z"
"2022-03-01T16:29:44Z"
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader. **Describe the solution you'd like** Given a folder structure with images of each class in each folder, the ability to load these folders into a HuggingFace dataset like "cifar10". **Describe alternatives you've considered** Implement ViT training outside of the HuggingFace Trainer and without datasets (we did this but prefer to stay on the main path); write custom data loader logic. **Additional context** We're training ViT on a custom dataset.
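A short sketch of what this looks like with the `imagefolder` loader discussed in the comments above (paths are placeholders); labels are inferred from the class subfolder names:

```python
# Expected layout, one subfolder per class:
#   pets/train/cat/001.png
#   pets/train/dog/001.png
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="pets/")
print(dataset["train"].features)  # includes a ClassLabel built from folder names
```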
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2508/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2508/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1344
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1344/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1344/comments
https://api.github.com/repos/huggingface/datasets/issues/1344/events
https://github.com/huggingface/datasets/pull/1344
759,831,925
MDExOlB1bGxSZXF1ZXN0NTM0NzY2ODIy
1,344
Add hausa ner corpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4", "events_url": "https://api.github.com/users/dadelani/events{/privacy}", "followers_url": "https://api.github.com/users/dadelani/followers", "following_url": "https://api.github.com/users/dadelani/following{/other_user}", "gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dadelani", "id": 23586676, "login": "dadelani", "node_id": "MDQ6VXNlcjIzNTg2Njc2", "organizations_url": "https://api.github.com/users/dadelani/orgs", "received_events_url": "https://api.github.com/users/dadelani/received_events", "repos_url": "https://api.github.com/users/dadelani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dadelani/subscriptions", "type": "User", "url": "https://api.github.com/users/dadelani" }
[]
closed
false
null
[]
null
[]
"2020-12-08T22:25:04Z"
"2020-12-08T23:11:55Z"
"2020-12-08T23:11:55Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1344.diff", "html_url": "https://github.com/huggingface/datasets/pull/1344", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1344.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1344" }
Added Hausa VOA NER data
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1344/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1344/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6347
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6347/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6347/comments
https://api.github.com/repos/huggingface/datasets/issues/6347/events
https://github.com/huggingface/datasets/issues/6347
1,959,004,835
I_kwDODunzps50xAqj
6,347
Incorrect example code in 'Create a dataset' docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/72076688?v=4", "events_url": "https://api.github.com/users/rwood-97/events{/privacy}", "followers_url": "https://api.github.com/users/rwood-97/followers", "following_url": "https://api.github.com/users/rwood-97/following{/other_user}", "gists_url": "https://api.github.com/users/rwood-97/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rwood-97", "id": 72076688, "login": "rwood-97", "node_id": "MDQ6VXNlcjcyMDc2Njg4", "organizations_url": "https://api.github.com/users/rwood-97/orgs", "received_events_url": "https://api.github.com/users/rwood-97/received_events", "repos_url": "https://api.github.com/users/rwood-97/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rwood-97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rwood-97/subscriptions", "type": "User", "url": "https://api.github.com/users/rwood-97" }
[]
closed
false
null
[]
null
[ "This was fixed in https://github.com/huggingface/datasets/pull/6247. You can find the fix in the `main` version of the docs", "Ah great, thanks :)" ]
"2023-10-24T11:01:21Z"
"2023-10-25T13:05:21Z"
"2023-10-25T13:05:21Z"
NONE
null
null
null
### Describe the bug On [this](https://huggingface.co/docs/datasets/create_dataset) page, the example code for loading in images and audio is incorrect. Currently, examples are: ``` python from datasets import ImageFolder dataset = load_dataset("imagefolder", data_dir="/path/to/pokemon") ``` and ``` python from datasets import AudioFolder dataset = load_dataset("audiofolder", data_dir="/path/to/folder") ``` I'm pretty sure the imports are wrong and should be: ``` python from datasets import load_dataset dataset = load_dataset("audiofolder", data_dir="/path/to/folder") ``` I am happy to update this if this is right but just wanted to check before making any changes. ### Steps to reproduce the bug Go to https://huggingface.co/docs/datasets/create_dataset ### Expected behavior N/A ### Environment info N/A
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6347/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6347/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5333/comments
https://api.github.com/repos/huggingface/datasets/issues/5333/events
https://github.com/huggingface/datasets/pull/5333
1,476,890,156
PR_kwDODunzps5EXGQ2
5,333
fix: 🐛 pass the token to get the list of config names
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-12-05T16:06:09Z"
"2022-12-06T08:25:17Z"
"2022-12-06T08:22:49Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5333.diff", "html_url": "https://github.com/huggingface/datasets/pull/5333", "merged_at": "2022-12-06T08:22:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/5333.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5333" }
Otherwise, get_dataset_infos doesn't work on gated or private datasets, even with the correct token.
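A usage sketch of what this PR fixes (the repo id `org/private-dataset` is a hypothetical placeholder; `use_auth_token` was the token parameter name in `datasets` at the time, newer versions accept `token`):

```python
from datasets import get_dataset_config_names, get_dataset_infos

# Both helpers must forward the token to the Hub, otherwise listing the
# config names of a gated or private dataset fails even for authorized users.
configs = get_dataset_config_names("org/private-dataset", use_auth_token=True)
infos = get_dataset_infos("org/private-dataset", use_auth_token=True)
print(configs, list(infos))
```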
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5333/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5333/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/866/comments
https://api.github.com/repos/huggingface/datasets/issues/866/events
https://github.com/huggingface/datasets/issues/866
745,719,222
MDU6SXNzdWU3NDU3MTkyMjI=
866
OSCAR from Inria group
{ "avatar_url": "https://avatars.githubusercontent.com/u/34098722?v=4", "events_url": "https://api.github.com/users/jchwenger/events{/privacy}", "followers_url": "https://api.github.com/users/jchwenger/followers", "following_url": "https://api.github.com/users/jchwenger/following{/other_user}", "gists_url": "https://api.github.com/users/jchwenger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jchwenger", "id": 34098722, "login": "jchwenger", "node_id": "MDQ6VXNlcjM0MDk4NzIy", "organizations_url": "https://api.github.com/users/jchwenger/orgs", "received_events_url": "https://api.github.com/users/jchwenger/received_events", "repos_url": "https://api.github.com/users/jchwenger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jchwenger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jchwenger/subscriptions", "type": "User", "url": "https://api.github.com/users/jchwenger" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[ "PR is already open here : #348 \r\nThe only thing remaining is to compute the metadata of each subdataset (one per language + shuffled/unshuffled).\r\nAs soon as #863 is merged we can start computing them. This will take a bit of time though", "Grand, thanks for this!" ]
"2020-11-18T14:40:54Z"
"2020-11-18T15:01:30Z"
"2020-11-18T15:01:30Z"
NONE
null
null
null
## Adding a Dataset - **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/). - **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.* - **Paper:** *[here](https://hal.inria.fr/hal-02148693)* - **Data:** *[here](https://oscar-corpus.com/)* - **Motivation:** *useful for unsupervised tasks in separate languages. In an ideal world, your team would be able to obtain the unshuffled version, which could be used to train GPT-2-like models (the shuffled version, I suppose, could be used for translation).* I am aware that you do offer the "colossal" Common Crawl dataset already, but this one has the advantage of being available as many subcorpora for different languages.
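A usage sketch, assuming the `oscar` loading script that the comments above refer to and its per-language config naming scheme (`unshuffled_deduplicated_<lang>`); streaming requires a recent `datasets` version:

```python
from datasets import load_dataset

# One config per language, in shuffled/unshuffled and deduplicated variants.
oscar_fr = load_dataset(
    "oscar", "unshuffled_deduplicated_fr", split="train", streaming=True
)
print(next(iter(oscar_fr)))
```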
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/866/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/866/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1292/comments
https://api.github.com/repos/huggingface/datasets/issues/1292/events
https://github.com/huggingface/datasets/pull/1292
759,354,627
MDExOlB1bGxSZXF1ZXN0NTM0Mzc0MzQ3
1,292
arXiv dataset added
{ "avatar_url": "https://avatars.githubusercontent.com/u/33005287?v=4", "events_url": "https://api.github.com/users/tanmoyio/events{/privacy}", "followers_url": "https://api.github.com/users/tanmoyio/followers", "following_url": "https://api.github.com/users/tanmoyio/following{/other_user}", "gists_url": "https://api.github.com/users/tanmoyio/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tanmoyio", "id": 33005287, "login": "tanmoyio", "node_id": "MDQ6VXNlcjMzMDA1Mjg3", "organizations_url": "https://api.github.com/users/tanmoyio/orgs", "received_events_url": "https://api.github.com/users/tanmoyio/received_events", "repos_url": "https://api.github.com/users/tanmoyio/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tanmoyio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmoyio/subscriptions", "type": "User", "url": "https://api.github.com/users/tanmoyio" }
[]
closed
false
null
[]
null
[]
"2020-12-08T11:08:28Z"
"2020-12-08T14:02:13Z"
"2020-12-08T14:02:13Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1292.diff", "html_url": "https://github.com/huggingface/datasets/pull/1292", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1292.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1292" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1292/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1292/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4476
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4476/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4476/comments
https://api.github.com/repos/huggingface/datasets/issues/4476/events
https://github.com/huggingface/datasets/issues/4476
1,267,987,499
I_kwDODunzps5Lk_Qr
4,476
`to_pandas` doesn't take into account format.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Thanks for opening a discussion :)\r\n\r\nNote that you can use `.remove_columns(...)` to keep only the ones you're interested in before calling `.to_pandas()`", "Yes I can do that thank you!\r\n\r\nDo you think that conceptually my example should work? If not, I'm happy to close this issue. \r\n\r\nIf yes, I can start working on it.", "Hi! Instead of `with_format(columns=['a', 'b']).to_pandas()`, use `with_format(\"pandas\", columns=[\"a\", \"b\"])` for easy conversion of the parts of the dataset to pandas via indexing/slicing.\r\n\r\nThe full code:\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]})\r\npandas_df = ds.with_format(\"pandas\", columns=['a', 'b'])[:]\r\n```", "Ahhhh Thank you!\r\n\r\nclosing then :)" ]
"2022-06-10T20:25:31Z"
"2022-06-15T17:41:41Z"
"2022-06-15T17:41:41Z"
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** I have a large dataset that I need to convert part of to pandas to do some further analysis. Calling `to_pandas` directly on it is expensive. So I thought I could simply select the columns that I want and then call `to_pandas`. **Describe the solution you'd like** ```python from datasets import Dataset ds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]}) pandas_df = ds.with_format(columns=['a', 'b']).to_pandas() # I would expect `pandas_df` to only include a and b as columns. ``` **Describe alternatives you've considered** I could remove all the columns that I don't want, but I don't know all of them in advance. **Additional context** I can probably make a PR with some pointers.
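A self-contained sketch of the pattern suggested in the comments above, which materializes only the selected columns:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": [5, 6, 7], "c": [8, 9, 10]})

# "pandas" formatting plus a column subset: slicing returns a DataFrame
# containing only the selected columns, without converting column "c".
pandas_df = ds.with_format("pandas", columns=["a", "b"])[:]
print(pandas_df.columns.tolist())  # ['a', 'b']
```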
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4476/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4476/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6320/comments
https://api.github.com/repos/huggingface/datasets/issues/6320/events
https://github.com/huggingface/datasets/issues/6320
1,952,618,316
I_kwDODunzps50YpdM
6,320
Dataset slice splits can't load training and validation at the same time
{ "avatar_url": "https://avatars.githubusercontent.com/u/32488097?v=4", "events_url": "https://api.github.com/users/timlac/events{/privacy}", "followers_url": "https://api.github.com/users/timlac/followers", "following_url": "https://api.github.com/users/timlac/following{/other_user}", "gists_url": "https://api.github.com/users/timlac/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timlac", "id": 32488097, "login": "timlac", "node_id": "MDQ6VXNlcjMyNDg4MDk3", "organizations_url": "https://api.github.com/users/timlac/orgs", "received_events_url": "https://api.github.com/users/timlac/received_events", "repos_url": "https://api.github.com/users/timlac/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timlac/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timlac/subscriptions", "type": "User", "url": "https://api.github.com/users/timlac" }
[]
closed
false
null
[]
null
[ "The expression \"train+test\" concatenates the splits.\r\n\r\nThe individual splits as separate datasets can be obtained as follows:\r\n```python\r\ntrain_ds, test_ds = load_dataset(\"<dataset_name>\", split=[\"train\", \"test\"])\r\ntrain_10pct_ds, test_10pct_ds = load_dataset(\"<dataset_name>\", split=[\"train[:10%]\", \"test[:%10]\"])\r\n```" ]
"2023-10-19T16:09:22Z"
"2023-11-30T16:21:15Z"
"2023-11-30T16:21:15Z"
NONE
null
null
null
### Describe the bug According to the [documentation](https://huggingface.co/docs/datasets/v2.14.5/loading#slice-splits) it should be possible to run the following command: `train_test_ds = datasets.load_dataset("bookcorpus", split="train+test")` to load the train and test sets from the dataset. However, executing the equivalent code: `speech_commands_v1 = load_dataset("superb", "ks", split="train+test")` only yields the following output: > Dataset({ > features: ['file', 'audio', 'label'], > num_rows: 54175 > }) Where loading the dataset without the split argument yields: > DatasetDict({ > train: Dataset({ > features: ['file', 'audio', 'label'], > num_rows: 51094 > }) > validation: Dataset({ > features: ['file', 'audio', 'label'], > num_rows: 6798 > }) > test: Dataset({ > features: ['file', 'audio', 'label'], > num_rows: 3081 > }) > }) Thus, the API seems to be broken in this regard. This is a bit annoying since I want to be able to use the split argument with `split="train[:10%]+test[:10%]"` to have a smaller dataset to work with when validating that my model is working correctly. ### Steps to reproduce the bug `speech_commands_v1 = load_dataset("superb", "ks", split="train+test")` ### Expected behavior > DatasetDict({ > train: Dataset({ > features: ['file', 'audio', 'label'], > num_rows: 51094 > }) > test: Dataset({ > features: ['file', 'audio', 'label'], > num_rows: 3081 > }) > }) ### Environment info ``` import datasets print(datasets.__version__) ``` > 2.14.5 ``` import sys print(sys.version) ``` > 3.9.17 (main, Jul 5 2023, 20:41:20) > [GCC 11.2.0]
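Following the comment above, a sketch of getting separate 10% subsets of the same dataset (the list-of-strings split syntax returns one dataset per expression instead of concatenating):

```python
from datasets import load_dataset

# A list of split expressions returns a list of datasets, one per expression.
train_10, test_10 = load_dataset(
    "superb", "ks", split=["train[:10%]", "test[:10%]"]
)
print(len(train_10), len(test_10))
```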
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6320/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6320/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2820
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2820/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2820/comments
https://api.github.com/repos/huggingface/datasets/issues/2820/events
https://github.com/huggingface/datasets/issues/2820
975,210,712
MDU6SXNzdWU5NzUyMTA3MTI=
2,820
Downloading “reddit” dataset keeps timing out.
{ "avatar_url": "https://avatars.githubusercontent.com/u/43877130?v=4", "events_url": "https://api.github.com/users/smeyerhot/events{/privacy}", "followers_url": "https://api.github.com/users/smeyerhot/followers", "following_url": "https://api.github.com/users/smeyerhot/following{/other_user}", "gists_url": "https://api.github.com/users/smeyerhot/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/smeyerhot", "id": 43877130, "login": "smeyerhot", "node_id": "MDQ6VXNlcjQzODc3MTMw", "organizations_url": "https://api.github.com/users/smeyerhot/orgs", "received_events_url": "https://api.github.com/users/smeyerhot/received_events", "repos_url": "https://api.github.com/users/smeyerhot/repos", "site_admin": false, "starred_url": "https://api.github.com/users/smeyerhot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/smeyerhot/subscriptions", "type": "User", "url": "https://api.github.com/users/smeyerhot" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset reddit/default (download: 2.93 GiB, generated: 17.64 GiB, post-processed: Unknown size, total: 20.57 GiB) to /Volumes/My Passport for Mac/og-chat-data/reddit/default/1.0.0/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969...\r\nDownloading: 13%\r\n403M/3.14G [44:39<2:27:09, 310kB/s]\r\n---------------------------------------------------------------------------\r\ntimeout Traceback (most recent call last)\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in _error_catcher(self)\r\n 437 try:\r\n--> 438 yield\r\n 439 \r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in read(self, amt, decode_content, cache_content)\r\n 518 cache_content = False\r\n--> 519 data = self._fp.read(amt) if not fp_closed else b\"\"\r\n 520 if (\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/http/client.py in read(self, amt)\r\n 458 b = bytearray(amt)\r\n--> 459 n = self.readinto(b)\r\n 460 return memoryview(b)[:n].tobytes()\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/http/client.py in readinto(self, b)\r\n 502 # (for example, reading in 1k chunks)\r\n--> 503 n = self.fp.readinto(b)\r\n 504 if not n and b:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/socket.py in readinto(self, b)\r\n 703 try:\r\n--> 704 return self._sock.recv_into(b)\r\n 705 except timeout:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/ssl.py in recv_into(self, buffer, nbytes, flags)\r\n 1240 self.__class__)\r\n-> 1241 return self.read(nbytes, buffer)\r\n 1242 else:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/ssl.py in read(self, len, buffer)\r\n 1098 if buffer is not None:\r\n-> 1099 return self._sslobj.read(len, buffer)\r\n 1100 else:\r\n\r\ntimeout: The read operation timed out\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nReadTimeoutError Traceback (most recent call last)\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/requests/models.py in generate()\r\n 757 try:\r\n--> 758 for chunk in self.raw.stream(chunk_size, decode_content=True):\r\n 759 yield chunk\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in stream(self, amt, decode_content)\r\n 575 while not is_fp_closed(self._fp):\r\n--> 576 data = self.read(amt=amt, decode_content=decode_content)\r\n 577 \r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in read(self, amt, decode_content, cache_content)\r\n 540 # Content-Length are caught.\r\n--> 541 raise IncompleteRead(self._fp_bytes_read, self.length_remaining)\r\n 542 \r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/contextlib.py in __exit__(self, type, value, traceback)\r\n 134 try:\r\n--> 135 self.gen.throw(type, value, traceback)\r\n 136 except StopIteration as exc:\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/urllib3/response.py in _error_catcher(self)\r\n 442 # there is yet no clean way to get at it from this context.\r\n--> 443 raise ReadTimeoutError(self._pool, None, \"Read timed out.\")\r\n 444 \r\n\r\nReadTimeoutError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nConnectionError Traceback (most recent call last)\r\n/var/folders/3f/md0t9sgj6rz8xy01fskttqdc0000gn/T/ipykernel_89016/1133441872.py in 
<module>\r\n 1 from datasets import load_dataset\r\n 2 \r\n----> 3 dataset = load_dataset(\"reddit\", ignore_verifications=True, cache_dir=\"/Volumes/My Passport for Mac/og-chat-data\")\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 845 \r\n 846 # Download and prepare data\r\n--> 847 builder_instance.download_and_prepare(\r\n 848 download_config=download_config,\r\n 849 download_mode=download_mode,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 613 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 614 if not downloaded_from_gcs:\r\n--> 615 self._download_and_prepare(\r\n 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 617 )\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 669 split_dict = SplitDict(dataset_name=self.name)\r\n 670 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 671 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 672 \r\n 673 # Checksums verification\r\n\r\n~/.cache/huggingface/modules/datasets_modules/datasets/reddit/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969/reddit.py in _split_generators(self, dl_manager)\r\n 73 def _split_generators(self, dl_manager):\r\n 74 \"\"\"Returns SplitGenerators.\"\"\"\r\n---> 75 dl_path = dl_manager.download_and_extract(_URL)\r\n 76 return [\r\n 77 datasets.SplitGenerator(\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 287 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 288 \"\"\"\r\n--> 289 return self.extract(self.download(url_or_urls))\r\n 290 \r\n 291 def get_recorded_sizes_checksums(self):\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in download(self, url_or_urls)\r\n 195 \r\n 196 start_time = datetime.now()\r\n--> 197 downloaded_path_or_paths = map_nested(\r\n 198 download_func,\r\n 199 url_or_urls,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 194 # Singleton\r\n 195 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 196 return function(data_struct)\r\n 197 \r\n 198 disable_tqdm = bool(logger.getEffectiveLevel() > logging.INFO) or not utils.is_progress_bar_enabled()\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/download_manager.py in _download(self, url_or_filename, download_config)\r\n 218 # append the relative path to the base_path\r\n 219 url_or_filename = url_or_path_join(self._base_path, url_or_filename)\r\n--> 220 return cached_path(url_or_filename, download_config=download_config)\r\n 221 \r\n 222 def iter_archive(self, 
path):\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 286 if is_remote_url(url_or_filename):\r\n 287 # URL, so get it from the cache (downloading if necessary)\r\n--> 288 output_path = get_from_cache(\r\n 289 url_or_filename,\r\n 290 cache_dir=cache_dir,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)\r\n 643 ftp_get(url, temp_file)\r\n 644 else:\r\n--> 645 http_get(\r\n 646 url,\r\n 647 temp_file,\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries)\r\n 451 disable=bool(logging.get_verbosity() == logging.NOTSET),\r\n 452 )\r\n--> 453 for chunk in response.iter_content(chunk_size=1024):\r\n 454 if chunk: # filter out keep-alive new chunks\r\n 455 progress.update(len(chunk))\r\n\r\n/usr/local/anaconda3/envs/og-data-env/lib/python3.9/site-packages/requests/models.py in generate()\r\n 763 raise ContentDecodingError(e)\r\n 764 except ReadTimeoutError as e:\r\n--> 765 raise ConnectionError(e)\r\n 766 else:\r\n 767 # Standard file-like object.\r\n\r\nConnectionError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.\r\n```", "Hey @lhoestq should I try to fix this issue ?", "It also doesn't seem to be \"smart caching\" and I received an error about a file not being found...", "To be clear, the error I get when I try to \"re-instantiate\" the download after failure is: \r\n```\r\nOSError: Cannot find data file. \r\nOriginal error:\r\n[Errno 20] Not a directory: <HOME>/.cache/huggingface/datasets/downloads/1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json'\r\n```", "Here is a new error:\r\n```\r\nConnectionError: Couldn't reach https://zenodo.org/record/1043504/files/corpus-webis-tldr-17.zip?download=1\r\n```", "Hi ! Since https://github.com/huggingface/datasets/pull/2803 we've changed the time out from 10sec to 100sec.\r\nThis should prevent the `ReadTimeoutError`. Feel free to try it out by installing `datasets` from source\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```\r\n\r\nWhen re-running your code you said you get a `OSError`, could you try deleting the file at the path returned by the error ? (the one after `[Errno 20] Not a directory:`). Ideally when a download fails you should be able to re-run it without error; there might be an issue here.\r\n\r\nFinally not sure what we can do about `ConnectionError`, this must be an issue from zenodo. If it happens you simply need to try again\r\n", "@lhoestq thanks for the update. The directory specified by the OSError ie. \r\n```\r\n1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c/corpus-webis-tldr-17.json \r\n```\r\n was not actually in that directory so I can't delete it. ", "Oh, then could you try deleting the parent directory `1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c` instead ?\r\nThis way the download manager will know that it has to uncompress the data again", "It seems to have worked. It only took like 20min! I think the extra timeout length did the trick! One thing is that it downloaded a total of 41gb instead of 20gb but at least it finished. 
", "Great ! The timeout change will be available in the next release of `datasets` :)" ]
"2021-08-20T02:52:36Z"
"2021-09-08T14:52:02Z"
"2021-09-08T14:52:02Z"
NONE
null
null
null
## Describe the bug A clear and concise description of what the bug is. Every time I try to download the reddit dataset, it times out before finishing and I have to try again. There is some timeout error that I will post once it happens again. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data") ``` ## Expected results A clear and concise description of the expected results. I would expect the download to finish, or at least a parameter to be provided to extend the read timeout window. ## Actual results Specify the actual results or traceback. Shown below in the error message. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: macOS - Python version: 3.9.6 (conda env) - PyArrow version: N/A
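While the read timeout itself was raised inside the library (see the comments above), retries can also be configured on the caller side; a sketch assuming `DownloadConfig` and its `max_retries` option:

```python
from datasets import DownloadConfig, load_dataset

# Retry flaky connections to the remote host a few times before giving up.
dl_config = DownloadConfig(max_retries=5)
dataset = load_dataset(
    "reddit",
    ignore_verifications=True,
    cache_dir="/Volumes/My Passport for Mac/og-chat-data",
    download_config=dl_config,
)
```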
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2820/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2820/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1240
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1240/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1240/comments
https://api.github.com/repos/huggingface/datasets/issues/1240/events
https://github.com/huggingface/datasets/pull/1240
758,355,523
MDExOlB1bGxSZXF1ZXN0NTMzNTQxNjk5
1,240
Multi Domain Sentiment Analysis Dataset (MDSA)
{ "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abhishekkrthakur", "id": 1183441, "login": "abhishekkrthakur", "node_id": "MDQ6VXNlcjExODM0NDE=", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "type": "User", "url": "https://api.github.com/users/abhishekkrthakur" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "can you also run `make style` to format the code ?", "I'll come back to this one in sometime :) @lhoestq ", "Also if you would use `xml.etree.ElementTree` to parse the XML it would be awesome, because right now you're using an external dependency `xmltodict `", "> Also if you would use xml.etree.ElementTree to parse the XML it would be awesome, because right now you're using an external dependency xmltodict\r\n\r\nIts pseudo xml so elementtree fails. xmltodict seems to be working quite good for this. do we have examples of pseudo xml datasets?", "for the other pseudo xml the text is parsed manually", "Can you add `xmltodict` to the test dependencies in setup.py please to fix the CI please ?", "Also can you add the dataset card with the tags and run `make style` ?", "Hi :) have you had a chance to fix the test dependency and apply `make style` ?\r\n\r\nFeel fee to ping me when it's ready for a review", "Thanks for your contribution, @abhishekkrthakur. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
"2020-12-07T09:57:15Z"
"2023-09-24T09:40:59Z"
"2022-10-03T09:39:43Z"
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/1240.diff", "html_url": "https://github.com/huggingface/datasets/pull/1240", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1240.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1240" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1240/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1240/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/418
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/418/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/418/comments
https://api.github.com/repos/huggingface/datasets/issues/418/events
https://github.com/huggingface/datasets/issues/418
661,914,873
MDU6SXNzdWU2NjE5MTQ4NzM=
418
Addition of google drive links to dl_manager
{ "avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4", "events_url": "https://api.github.com/users/lordtt13/events{/privacy}", "followers_url": "https://api.github.com/users/lordtt13/followers", "following_url": "https://api.github.com/users/lordtt13/following{/other_user}", "gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lordtt13", "id": 35500534, "login": "lordtt13", "node_id": "MDQ6VXNlcjM1NTAwNTM0", "organizations_url": "https://api.github.com/users/lordtt13/orgs", "received_events_url": "https://api.github.com/users/lordtt13/received_events", "repos_url": "https://api.github.com/users/lordtt13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions", "type": "User", "url": "https://api.github.com/users/lordtt13" }
[]
closed
false
null
[]
null
[ "I think the problem is the way you wrote your urls. Try the following structure to see `https://drive.google.com/uc?export=download&id=your_file_id` . \r\n\r\n@lhoestq ", "Oh sorry, I think `_get_drive_url` is doing that. \r\n\r\nHave you tried to use `dl_manager.download_and_extract(_get_drive_url(_TRAIN_URL)`? it should work with google drive links.\r\n", "Yes it worked, thank you!" ]
"2020-07-20T14:52:02Z"
"2020-07-20T15:39:32Z"
"2020-07-20T15:39:32Z"
CONTRIBUTOR
null
null
null
Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown. This is the script for me: ```python class EmoConfig(nlp.BuilderConfig): """BuilderConfig for SQUAD.""" def __init__(self, **kwargs): """BuilderConfig for EmoContext. Args: **kwargs: keyword arguments forwarded to super. """ super(EmoConfig, self).__init__(**kwargs) _TEST_URL = "https://drive.google.com/file/d/1Hn5ytHSSoGOC4sjm3wYy0Dh0oY_oXBbb/view?usp=sharing" _TRAIN_URL = "https://drive.google.com/file/d/12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X/view?usp=sharing" class EmoDataset(nlp.GeneratorBasedBuilder): """ SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text. Version 1.0.0 """ VERSION = nlp.Version("1.0.0") force = False def _info(self): return nlp.DatasetInfo( description=_DESCRIPTION, features=nlp.Features( { "text": nlp.Value("string"), "label": nlp.features.ClassLabel(names=["others", "happy", "sad", "angry"]), } ), supervised_keys=None, homepage="https://www.aclweb.org/anthology/S19-2005/", citation=_CITATION, ) def _get_drive_url(self, url): base_url = 'https://drive.google.com/uc?id=' split_url = url.split('/') return base_url + split_url[5] def _split_generators(self, dl_manager): """Returns SplitGenerators.""" if(not os.path.exists("emo-train.json") or self.force): gdown.download(self._get_drive_url(_TRAIN_URL), "emo-train.json", quiet = True) if(not os.path.exists("emo-test.json") or self.force): gdown.download(self._get_drive_url(_TEST_URL), "emo-test.json", quiet = True) return [ nlp.SplitGenerator( name=nlp.Split.TRAIN, gen_kwargs={ "filepath": "emo-train.json", "split": "train", }, ), nlp.SplitGenerator( name=nlp.Split.TEST, gen_kwargs={"filepath": "emo-test.json", "split": "test"}, ), ] def _generate_examples(self, filepath, split): """ Yields examples. """ with open(filepath, 'rb') as f: data = json.load(f) for id_, text, label in zip(data["text"].keys(), data["text"].values(), data["Label"].values()): yield id_, { "text": text, "label": label, } ``` Can someone help me in adding gdrive links to be used with default dl_manager or adding gdown as another dl_manager, because I'd like to add this dataset to nlp's official database.
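The resolution from the comments above, isolated into a runnable sketch: rewrite the share link into a direct-download URL and let the default download manager handle it (`dl_manager` and `_TRAIN_URL` are the names from the snippet above):

```python
def _get_drive_url(url: str) -> str:
    # Turns .../file/d/<file_id>/view?usp=sharing into a direct-download link.
    file_id = url.split("/")[5]
    return "https://drive.google.com/uc?id=" + file_id

# Inside _split_generators, the default download manager then works as-is:
# dl_path = dl_manager.download_and_extract(_get_drive_url(_TRAIN_URL))
print(_get_drive_url(
    "https://drive.google.com/file/d/12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X/view?usp=sharing"
))
```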
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/418/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/418/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5041/comments
https://api.github.com/repos/huggingface/datasets/issues/5041/events
https://github.com/huggingface/datasets/pull/5041
1,390,722,230
PR_kwDODunzps4_2WES
5,041
Support streaming hendrycks_test dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-09-29T11:37:58Z"
"2022-09-30T07:13:38Z"
"2022-09-29T12:07:29Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5041.diff", "html_url": "https://github.com/huggingface/datasets/pull/5041", "merged_at": "2022-09-29T12:07:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/5041.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5041" }
This PR: - supports streaming - fixes the description section of the dataset card
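A sketch of the streaming mode this PR enables, assuming the `hendrycks_test` script with one of its subject configs (`abstract_algebra`):

```python
from datasets import load_dataset

# Streaming avoids downloading the full archive before iterating.
ds = load_dataset(
    "hendrycks_test", "abstract_algebra", split="test", streaming=True
)
print(next(iter(ds)))
```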
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5041/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5041/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4700/comments
https://api.github.com/repos/huggingface/datasets/issues/4700/events
https://github.com/huggingface/datasets/pull/4700
1,307,599,161
PR_kwDODunzps47jKNx
4,700
Support extract lz4 compressed data files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-07-18T08:41:31Z"
"2022-07-18T14:43:59Z"
"2022-07-18T14:31:47Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4700.diff", "html_url": "https://github.com/huggingface/datasets/pull/4700", "merged_at": "2022-07-18T14:31:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/4700.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4700" }
null
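The PR has no description; as a hedged sketch of what lz4 extraction support means in practice (the file name is a hypothetical placeholder, and the `lz4` Python package must be installed):

```python
from datasets import load_dataset

# lz4-compressed data files are decompressed transparently
# by the packaged loaders once extraction support is in place.
ds = load_dataset("json", data_files="my_data.jsonl.lz4", split="train")
```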
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4700/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4700/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3994/comments
https://api.github.com/repos/huggingface/datasets/issues/3994/events
https://github.com/huggingface/datasets/pull/3994
1,178,211,138
PR_kwDODunzps404wWu
3,994
Change audio column from string path to Audio feature in ASR task
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
null
[]
null
[]
"2022-03-23T14:34:52Z"
"2022-03-23T15:43:43Z"
"2022-03-23T15:43:43Z"
CONTRIBUTOR
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/3994.diff", "html_url": "https://github.com/huggingface/datasets/pull/3994", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3994.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3994" }
Will fix #3990
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3994/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3994/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1553
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1553/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1553/comments
https://api.github.com/repos/huggingface/datasets/issues/1553/events
https://github.com/huggingface/datasets/pull/1553
765,670,083
MDExOlB1bGxSZXF1ZXN0NTM5MDI4MzM3
1,553
added air_dialogue
{ "avatar_url": "https://avatars.githubusercontent.com/u/9033954?v=4", "events_url": "https://api.github.com/users/skyprince999/events{/privacy}", "followers_url": "https://api.github.com/users/skyprince999/followers", "following_url": "https://api.github.com/users/skyprince999/following{/other_user}", "gists_url": "https://api.github.com/users/skyprince999/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/skyprince999", "id": 9033954, "login": "skyprince999", "node_id": "MDQ6VXNlcjkwMzM5NTQ=", "organizations_url": "https://api.github.com/users/skyprince999/orgs", "received_events_url": "https://api.github.com/users/skyprince999/received_events", "repos_url": "https://api.github.com/users/skyprince999/repos", "site_admin": false, "starred_url": "https://api.github.com/users/skyprince999/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skyprince999/subscriptions", "type": "User", "url": "https://api.github.com/users/skyprince999" }
[]
closed
false
null
[]
null
[]
"2020-12-13T21:59:02Z"
"2020-12-23T11:20:40Z"
"2020-12-23T11:20:39Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1553.diff", "html_url": "https://github.com/huggingface/datasets/pull/1553", "merged_at": "2020-12-23T11:20:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/1553.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1553" }
UPDATE2 (3797ce5): Updated for multi-configs UPDATE (7018082): manually created the dummy_data. All tests passed locally. Pushed it to origin/master DRAFT VERSION (57fdb20): (_no longer draft_) Uploaded the air_dialogue database. dummy_data creation was failing locally, since the original downloaded file has some nested folders. Pushing it since the tests with real data passed. Will re-check & update by manually creating some dummy_data
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1553/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1553/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3152
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3152/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3152/comments
https://api.github.com/repos/huggingface/datasets/issues/3152/events
https://github.com/huggingface/datasets/pull/3152
1,034,039,379
PR_kwDODunzps4tkqi-
3,152
Fix some typos in the documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/3812788?v=4", "events_url": "https://api.github.com/users/h4iku/events{/privacy}", "followers_url": "https://api.github.com/users/h4iku/followers", "following_url": "https://api.github.com/users/h4iku/following{/other_user}", "gists_url": "https://api.github.com/users/h4iku/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/h4iku", "id": 3812788, "login": "h4iku", "node_id": "MDQ6VXNlcjM4MTI3ODg=", "organizations_url": "https://api.github.com/users/h4iku/orgs", "received_events_url": "https://api.github.com/users/h4iku/received_events", "repos_url": "https://api.github.com/users/h4iku/repos", "site_admin": false, "starred_url": "https://api.github.com/users/h4iku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h4iku/subscriptions", "type": "User", "url": "https://api.github.com/users/h4iku" }
[]
closed
false
null
[]
null
[]
"2021-10-23T01:38:35Z"
"2021-10-25T14:27:36Z"
"2021-10-25T14:03:48Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3152.diff", "html_url": "https://github.com/huggingface/datasets/pull/3152", "merged_at": "2021-10-25T14:03:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/3152.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3152" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3152/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3152/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6487/comments
https://api.github.com/repos/huggingface/datasets/issues/6487/events
https://github.com/huggingface/datasets/pull/6487
2,035,424,254
PR_kwDODunzps5hqyfV
6,487
Update builder hash with info
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6487). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Closing this one in favor of https://github.com/huggingface/datasets/pull/6458/commits/565c294fc12bc547730a023a610ed4f92313d8fb in https://github.com/huggingface/datasets/pull/6458" ]
"2023-12-11T11:09:16Z"
"2023-12-11T11:41:34Z"
"2023-12-11T11:41:34Z"
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/6487.diff", "html_url": "https://github.com/huggingface/datasets/pull/6487", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6487.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6487" }
Currently, if you change the `dataset_info` of a dataset (e.g. in the YAML part of the README.md), the cache ignores this change. This is problematic because you want to regenerate a dataset if you change the features or the split sizes, for example (e.g. after push_to_hub). Ideally we should take the resolved files into account as well, but this will be for another PR.
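A minimal sketch of the idea, assuming the `Hasher` helper from `datasets.fingerprint`; the `builder_hash` function here is hypothetical, not the library's actual implementation (which was superseded per the comments above):

```python
from datasets.fingerprint import Hasher

def builder_hash(script_hash: str, dataset_info: dict) -> str:
    # Mixing the info dict into the hash means a changed feature schema
    # or split size yields a new cache entry instead of a stale hit.
    return Hasher.hash((script_hash, dataset_info))

print(builder_hash("abc123", {"features": {"text": "string"}, "splits": {"train": 100}}))
```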
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6487/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6487/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5794
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5794/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5794/comments
https://api.github.com/repos/huggingface/datasets/issues/5794/events
https://github.com/huggingface/datasets/issues/5794
1,685,196,061
I_kwDODunzps5kcg0d
5,794
CI ZeroDivisionError
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[]
"2023-04-26T14:55:23Z"
"2023-04-26T14:55:23Z"
null
MEMBER
null
null
null
Sometimes when running our CI on Windows, we get a ZeroDivisionError: ``` FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - ZeroDivisionError: float division by zero ``` See for example: - https://github.com/huggingface/datasets/actions/runs/4809358266/jobs/8560513110 - https://github.com/huggingface/datasets/actions/runs/4798359836/jobs/8536573688 ``` _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ split = 'test', start_time = 1682516718.8236516, num_samples = 2, num_steps = 1 def speed_metrics(split, start_time, num_samples=None, num_steps=None): """ Measure and return speed performance metrics. This function requires a time snapshot `start_time` before the operation to be measured starts and this function should be run immediately after the operation to be measured has completed. Args: - split: name to prefix metric (like train, eval, test...) - start_time: operation start time - num_samples: number of samples processed """ runtime = time.time() - start_time result = {f"{split}_runtime": round(runtime, 4)} if num_samples is not None: > samples_per_second = num_samples / runtime E ZeroDivisionError: float division by zero C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\transformers\trainer_utils.py:354: ZeroDivisionError ```
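The quoted `speed_metrics` comes from `transformers`; a hypothetical guard (not the actual upstream fix) would skip the throughput metrics when the measured runtime rounds to zero, as can happen on fast Windows runners:

```python
import time

def speed_metrics(split, start_time, num_samples=None, num_steps=None):
    runtime = time.time() - start_time
    result = {f"{split}_runtime": round(runtime, 4)}
    # Guard: on very fast runs time.time() can return identical values,
    # so skip the throughput metrics instead of dividing by zero.
    if runtime == 0.0:
        return result
    if num_samples is not None:
        result[f"{split}_samples_per_second"] = round(num_samples / runtime, 3)
    if num_steps is not None:
        result[f"{split}_steps_per_second"] = round(num_steps / runtime, 3)
    return result
```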
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5794/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5794/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/200/comments
https://api.github.com/repos/huggingface/datasets/issues/200/events
https://github.com/huggingface/datasets/pull/200
625,226,638
MDExOlB1bGxSZXF1ZXN0NDIzNDg2NTM0
200
[ArrowWriter] Set schema at first write example
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Good point!\r\n\r\nI guess we could add this to `write_batch` as well (before using `self._schema` in the first line of this method)?" ]
"2020-05-26T21:59:48Z"
"2020-05-27T09:07:54Z"
"2020-05-27T09:07:53Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/200.diff", "html_url": "https://github.com/huggingface/datasets/pull/200", "merged_at": "2020-05-27T09:07:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/200.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/200" }
Right now, if the schema is not specified when instantiating `ArrowWriter`, it can be set by the first `write_table` call, for example (it calls `self._build_writer()` to do so). I noticed that this was not done if the first example is added via `.write`, so I added it for consistency.
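A toy illustration of the pattern (hypothetical `LazySchemaWriter`, not the real `ArrowWriter`): infer and pin the schema on the first record, whichever write path runs first:

```python
from typing import Optional

import pyarrow as pa

class LazySchemaWriter:
    """Toy writer that pins its schema on the first example."""

    def __init__(self, schema: Optional[pa.Schema] = None):
        self._schema = schema

    def _build_writer(self, inferred_schema: pa.Schema) -> None:
        if self._schema is None:
            self._schema = inferred_schema  # set once, on first use

    def write(self, example: dict) -> None:
        # Previously only write_table triggered this; .write now does too.
        self._build_writer(pa.Table.from_pylist([example]).schema)

writer = LazySchemaWriter()
writer.write({"text": "hello", "label": 0})
print(writer._schema)
```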
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/200/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/200/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2927/comments
https://api.github.com/repos/huggingface/datasets/issues/2927/events
https://github.com/huggingface/datasets/issues/2927
997,654,680
I_kwDODunzps47dwCY
2,927
Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword argument
{ "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timothyjlaurent", "id": 2000204, "login": "timothyjlaurent", "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "type": "User", "url": "https://api.github.com/users/timothyjlaurent" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "Thanks for reporting, I'm looking into it :)", "Fixed by #2950." ]
"2021-09-16T01:14:02Z"
"2021-09-20T16:23:22Z"
"2021-09-20T16:23:21Z"
NONE
null
null
null
## Describe the bug Upgrading to 1.12 caused the `dataset.filter` call to fail with > get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels ## Steps to reproduce the bug ```python def filter_good_rows( ex: Dict, valid_rel_labels: Set[str], valid_ner_labels: Set[str], tokenizer: PreTrainedTokenizerFast, ) -> bool: """Get the good rows""" encoding = get_encoding_for_text(text=ex["text"], tokenizer=tokenizer) ex["encoding"] = encoding for relation in ex["relations"]: if not is_valid_relation(relation, valid_rel_labels): return False for span in ex["spans"]: if not is_valid_span(span, valid_ner_labels, encoding): return False return True def get_dataset(): loader_path = str(Path(__file__).parent / "prodigy_dataset_builder.py") ds = load_dataset( loader_path, name="prodigy-dataset", data_files=sorted(file_paths), cache_dir=cache_dir, )["train"] valid_ner_labels = set(vocab.ner_category) valid_relations = set(vocab.relation_types.keys()) ds = ds.filter( filter_good_rows, fn_kwargs=dict( valid_rel_labels=valid_relations, valid_ner_labels=valid_ner_labels, tokenizer=vocab.tokenizer, ), keep_in_memory=True, num_proc=num_proc, ) ``` `ds` is a `DatasetDict` produced by a jsonl dataset. This runs fine on 1.11 but fails on 1.12. **Stack Trace** ## Expected results I expect `filter` in datasets 1.12 to filter the dataset without raising, as it does on 1.11. ## Actual results ``` tf_ner_rel_lib/dataset.py:695: in load_prodigy_arrow_datasets_from_jsonl ds = ds.filter( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper out = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2169: in filter indices = self.map( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1686: in map return self._map_single( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper out = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2048: in _map_single batch = apply_function_on_filtered_inputs( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ inputs = {'_input_hash': [2108817714, 1477695082, -1021597032, 2130671338, -1260483858, -1203431639, ...], '_task_hash': [18070...ons', 'relations', 'relations', ...], 'answer': ['accept', 'accept', 'accept', 'accept', 'accept', 'accept', ...], ...} indices = [0, 1, 2, 3, 4, 5, ...], check_same_num_examples = False, offset = 0 def apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples=False, offset=0): """Utility to apply the function on a selection of columns.""" nonlocal update_data fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns] if offset == 0: effective_indices = indices else: effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset processed_inputs = ( > function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) ) E TypeError:
get_indices_from_mask_function() got an unexpected keyword argument 'valid_rel_labels' ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1939: TypeError ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Mac - Python version: 3.8.9 - PyArrow version: pyarrow==5.0.0
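Until the fix landed (#2950), one possible workaround on the affected 1.12.x releases — an untested sketch reusing the names from the snippet above — is to bind the extra arguments with `functools.partial` so that no `fn_kwargs` have to be forwarded internally:

```python
from functools import partial

# Bind the extra arguments up front instead of passing fn_kwargs, so
# datasets never has to forward them to its internal mask function
filter_fn = partial(
    filter_good_rows,
    valid_rel_labels=valid_relations,
    valid_ner_labels=valid_ner_labels,
    tokenizer=vocab.tokenizer,
)
ds = ds.filter(filter_fn, keep_in_memory=True, num_proc=num_proc)
```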
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2927/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2927/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/808
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/808/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/808/comments
https://api.github.com/repos/huggingface/datasets/issues/808/events
https://github.com/huggingface/datasets/pull/808
737,638,942
MDExOlB1bGxSZXF1ZXN0NTE2NjQ0NDc0
808
dataset(dgs): initial dataset loading script
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY" }
[]
closed
false
null
[]
null
[ "Hi @AmitMY, \r\n\r\nWere you able to figure this out?", "I did not.\r\nWith all the limitations this repo currently has, I had to create a repo of my own using tfds to mitigate them. \r\nhttps://github.com/sign-language-processing/datasets/tree/master/sign_language_datasets/datasets/dgs_corpus\r\n\r\nClosing as I don't know how to support this PR further" ]
"2020-11-06T10:14:43Z"
"2021-03-23T06:18:55Z"
"2021-03-23T06:18:55Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/808.diff", "html_url": "https://github.com/huggingface/datasets/pull/808", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/808.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/808" }
When trying to create dummy data I get: > Dataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data. I am not sure how to manually create the dummy_data (what exactly it should contain). Also note that this library says: > ImportError: To be able to use this dataset, you need to install the following dependencies['pympi'] using 'pip install pympi' for instance' when in fact you need to `pip install pympi-ling`.
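For reference, a rough sketch of the restructuring the error message asks for: open files only in `_generate_examples` and pass plain paths out of `_split_generators`. The URL, feature schema, and the pympi logic below are all illustrative placeholders, not the actual DGS builder:

```python
from pathlib import Path

import datasets

_URL = "https://example.org/dgs_corpus.tar.gz"  # placeholder URL

class DgsCorpus(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract(_URL)
        # Only forward paths here; opening files in this method is what
        # breaks the dummy-data generator
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"data_dir": data_dir},
            )
        ]

    def _generate_examples(self, data_dir):
        from pympi import Eaf  # installed via `pip install pympi-ling`
        for idx, path in enumerate(sorted(Path(data_dir).glob("*.eaf"))):
            eaf = Eaf(str(path))  # files are opened lazily, per example
            yield idx, {"text": " ".join(sorted(eaf.get_tier_names()))}
```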
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/808/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/808/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5439
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5439/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5439/comments
https://api.github.com/repos/huggingface/datasets/issues/5439/events
https://github.com/huggingface/datasets/issues/5439
1,537,973,564
I_kwDODunzps5bq508
5,439
[dataset request] Add Common Voice 12.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/31034499?v=4", "events_url": "https://api.github.com/users/MohammedRakib/events{/privacy}", "followers_url": "https://api.github.com/users/MohammedRakib/followers", "following_url": "https://api.github.com/users/MohammedRakib/following{/other_user}", "gists_url": "https://api.github.com/users/MohammedRakib/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MohammedRakib", "id": 31034499, "login": "MohammedRakib", "node_id": "MDQ6VXNlcjMxMDM0NDk5", "organizations_url": "https://api.github.com/users/MohammedRakib/orgs", "received_events_url": "https://api.github.com/users/MohammedRakib/received_events", "repos_url": "https://api.github.com/users/MohammedRakib/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MohammedRakib/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MohammedRakib/subscriptions", "type": "User", "url": "https://api.github.com/users/MohammedRakib" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" } ]
null
[ "@polinaeterna any tentative date on when the Common Voice 12.0 dataset will be added ?", "This dataset is now hosted on the Hub here: https://huggingface.co/datasets/mozilla-foundation/common_voice_12_0" ]
"2023-01-18T13:07:05Z"
"2023-07-21T14:26:10Z"
"2023-07-21T14:26:09Z"
NONE
null
null
null
### Feature request Please add the Common Voice 12.0 dataset. Apart from English, a significant amount of audio data has been added to the datasets for the other, smaller languages. ### Motivation The dataset link: https://commonvoice.mozilla.org/en/datasets
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/5439/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5439/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4544
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4544/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4544/comments
https://api.github.com/repos/huggingface/datasets/issues/4544/events
https://github.com/huggingface/datasets/issues/4544
1,280,500,340
I_kwDODunzps5MUuJ0
4,544
[CI] seqeval installation fails sometimes on python 3.6
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
"2022-06-22T16:35:23Z"
"2022-06-23T10:13:44Z"
"2022-06-23T10:13:44Z"
MEMBER
null
null
null
The CI sometimes fails to install seqeval, which causes the `seqeval` metric tests to fail. The installation fails because of this error: ``` Collecting seqeval Downloading seqeval-1.2.2.tar.gz (43 kB) |███████▌ | 10 kB 42.1 MB/s eta 0:00:01 |███████████████ | 20 kB 53.3 MB/s eta 0:00:01 |██████████████████████▌ | 30 kB 67.2 MB/s eta 0:00:01 |██████████████████████████████ | 40 kB 76.1 MB/s eta 0:00:01 |████████████████████████████████| 43 kB 10.0 MB/s Preparing metadata (setup.py) ... - error ERROR: Command errored out with exit status 1: command: /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-pf54_vqy cwd: /tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/ Complete output (22 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py", line 56, in <module> 'Programming Language :: Python :: Implementation :: PyPy' File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/__init__.py", line 143, in setup return distutils.core.setup(**attrs) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/core.py", line 108, in setup _setup_distribution = dist = klass(attrs) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 442, in __init__ k: v for k, v in attrs.items() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/dist.py", line 281, in __init__ self.finalize_options() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 601, in finalize_options ep.load()(self, ep.name, value) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2346, in load return self.resolve() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2352, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/.eggs/setuptools_scm-7.0.2-py3.6.egg/setuptools_scm/__init__.py", line 5 from __future__ import annotations ^ SyntaxError: future feature annotations is not defined ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/9d/2d/233c79d5b4e5ab1dbf111242299153f3caddddbb691219f363ad55ce783d/seqeval-1.2.2.tar.gz#sha256=f28e97c3ab96d6fcd32b648f6438ff2e09cfba87f05939da9b3970713ec56e6f (from https://pypi.org/simple/seqeval/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. ``` See for example https://app.circleci.com/pipelines/github/huggingface/datasets/12665/workflows/93878eb9-a923-4b35-b2e7-c5e9b22f10ad/jobs/75300 Here is a diff of the pip install logs until the error is reached: https://www.diffchecker.com/VkQDLeQT This could be caused by the latest updates of setuptools-scm.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4544/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4544/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5467
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5467/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5467/comments
https://api.github.com/repos/huggingface/datasets/issues/5467/events
https://github.com/huggingface/datasets/pull/5467
1,557,898,273
PR_kwDODunzps5IlAlk
5,467
Fix conda command in readme
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "ah didn't read well - it's all good", "or maybe it isn't ? `-c huggingface -c conda-forge` installs from HF or from conda-forge ?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010196 / 0.011353 (-0.001157) | 0.005531 / 0.011008 (-0.005477) | 0.104601 / 0.038508 (0.066093) | 0.041322 / 0.023109 (0.018213) | 0.302080 / 0.275898 (0.026182) | 0.396579 / 0.323480 (0.073099) | 0.008874 / 0.007986 (0.000888) | 0.004482 / 0.004328 (0.000153) | 0.077487 / 0.004250 (0.073236) | 0.051113 / 0.037052 (0.014061) | 0.321850 / 0.258489 (0.063361) | 0.354946 / 0.293841 (0.061105) | 0.039822 / 0.128546 (-0.088724) | 0.012622 / 0.075646 (-0.063024) | 0.337898 / 0.419271 (-0.081374) | 0.048372 / 0.043533 (0.004839) | 0.299646 / 0.255139 (0.044507) | 0.321113 / 0.283200 (0.037914) | 0.114780 / 0.141683 (-0.026903) | 1.475750 / 1.452155 (0.023595) | 1.496307 / 1.492716 (0.003590) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.311443 / 0.018006 (0.293437) | 0.567268 / 0.000490 (0.566778) | 0.006149 / 0.000200 (0.005950) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029407 / 0.037411 (-0.008004) | 0.118611 / 0.014526 (0.104085) | 0.122247 / 0.176557 (-0.054309) | 0.164770 / 0.737135 (-0.572365) | 0.128561 / 0.296338 (-0.167778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399185 / 0.215209 
(0.183976) | 3.972995 / 2.077655 (1.895340) | 1.764638 / 1.504120 (0.260518) | 1.574058 / 1.541195 (0.032863) | 1.741695 / 1.468490 (0.273205) | 0.705664 / 4.584777 (-3.879113) | 3.915399 / 3.745712 (0.169686) | 2.310154 / 5.269862 (-2.959707) | 1.554067 / 4.565676 (-3.011610) | 0.087133 / 0.424275 (-0.337142) | 0.012393 / 0.007607 (0.004786) | 0.510758 / 0.226044 (0.284713) | 5.114906 / 2.268929 (2.845977) | 2.304473 / 55.444624 (-53.140152) | 1.960768 / 6.876477 (-4.915709) | 2.092263 / 2.142072 (-0.049810) | 0.867973 / 4.805227 (-3.937255) | 0.170000 / 6.500664 (-6.330664) | 0.068358 / 0.075469 (-0.007111) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211022 / 1.841788 (-0.630765) | 16.777269 / 8.074308 (8.702961) | 15.272659 / 10.191392 (5.081267) | 0.182149 / 0.680424 (-0.498274) | 0.029577 / 0.534201 (-0.504624) | 0.446590 / 0.579283 (-0.132693) | 0.454724 / 0.434364 (0.020360) | 0.541938 / 0.540337 (0.001601) | 0.640886 / 1.386936 (-0.746050) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008441 / 0.011353 (-0.002912) | 0.006105 / 0.011008 (-0.004904) | 0.100349 / 0.038508 (0.061841) | 0.040675 / 0.023109 (0.017565) | 0.381775 / 0.275898 (0.105877) | 0.425246 / 0.323480 (0.101767) | 0.007197 / 0.007986 (-0.000789) | 0.004972 / 0.004328 (0.000644) | 0.075346 / 0.004250 (0.071096) | 0.065339 / 0.037052 (0.028286) | 0.379340 / 0.258489 (0.120851) | 0.435646 / 0.293841 (0.141805) | 0.038891 / 0.128546 (-0.089656) | 0.013079 / 0.075646 (-0.062568) | 0.339273 / 0.419271 (-0.079999) | 0.057478 / 0.043533 (0.013945) | 0.373516 / 0.255139 (0.118377) | 0.402388 / 0.283200 (0.119189) | 0.123145 / 0.141683 (-0.018538) | 1.503765 / 1.452155 (0.051610) | 1.609797 / 1.492716 (0.117081) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.420354 / 0.018006 (0.402348) | 0.589272 / 0.000490 (0.588782) | 0.045861 / 0.000200 (0.045662) | 0.000527 / 0.000054 
(0.000473) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033918 / 0.037411 (-0.003493) | 0.128041 / 0.014526 (0.113515) | 0.130274 / 0.176557 (-0.046283) | 0.180605 / 0.737135 (-0.556530) | 0.136377 / 0.296338 (-0.159962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440343 / 0.215209 (0.225133) | 4.390264 / 2.077655 (2.312610) | 2.218738 / 1.504120 (0.714618) | 2.052399 / 1.541195 (0.511204) | 2.231912 / 1.468490 (0.763422) | 0.716805 / 4.584777 (-3.867972) | 3.909277 / 3.745712 (0.163565) | 2.302121 / 5.269862 (-2.967740) | 1.419454 / 4.565676 (-3.146222) | 0.088067 / 0.424275 (-0.336208) | 0.012994 / 0.007607 (0.005387) | 0.548267 / 0.226044 (0.322223) | 5.462973 / 2.268929 (3.194044) | 2.768414 / 55.444624 (-52.676210) | 2.489320 / 6.876477 (-4.387157) | 2.569546 / 2.142072 (0.427474) | 0.853135 / 4.805227 (-3.952092) | 0.170618 / 6.500664 (-6.330046) | 0.069908 / 0.075469 (-0.005562) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.304726 / 1.841788 (-0.537062) | 17.335977 / 8.074308 (9.261669) | 15.088319 / 10.191392 (4.896927) | 0.190893 / 0.680424 (-0.489531) | 0.018133 / 0.534201 (-0.516068) | 0.429324 / 0.579283 (-0.149959) | 0.439212 / 0.434364 (0.004848) | 0.545312 / 0.540337 (0.004975) | 0.663972 / 1.386936 (-0.722964) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e7505adc37498f5e0cb3dd4c13bbb06696afdda5 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._" ]
"2023-01-26T10:03:01Z"
"2023-09-24T10:06:59Z"
"2023-01-26T18:29:37Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5467.diff", "html_url": "https://github.com/huggingface/datasets/pull/5467", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5467.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5467" }
The [conda-forge channel](https://anaconda.org/conda-forge/datasets) is lagging behind (as of right now, only 2.7.1 is available), so we should recommend using the [Hugging Face channel](https://anaconda.org/HuggingFace/datasets) that we maintain: ``` conda install -c huggingface datasets ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5467/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5467/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6132/comments
https://api.github.com/repos/huggingface/datasets/issues/6132/events
https://github.com/huggingface/datasets/issues/6132
1,843,491,020
I_kwDODunzps5t4XDM
6,132
to_iterable_dataset is missing in document
{ "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/npuichigo", "id": 11533479, "login": "npuichigo", "node_id": "MDQ6VXNlcjExNTMzNDc5", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "repos_url": "https://api.github.com/users/npuichigo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "type": "User", "url": "https://api.github.com/users/npuichigo" }
[]
closed
false
null
[]
null
[ "Fixed with PR" ]
"2023-08-09T15:15:03Z"
"2023-08-16T04:43:36Z"
"2023-08-16T04:43:29Z"
CONTRIBUTOR
null
null
null
### Describe the bug `to_iterable_dataset` is missing from the documentation. ### Steps to reproduce the bug `to_iterable_dataset` is missing from the documentation. ### Expected behavior Documentation enhancement. ### Environment info Unrelated.
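For anyone landing here before the docs fix, a quick sketch of how `to_iterable_dataset` is used (behaviour as of recent `datasets` releases; the toy data and the `num_shards` value are arbitrary):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})

# Convert the map-style dataset into an IterableDataset; sharding lets
# a multi-worker torch DataLoader stream the shards in parallel
iterable_ds = ds.to_iterable_dataset(num_shards=2)

for example in iterable_ds:
    print(example)
```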
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6132/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6132/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1290/comments
https://api.github.com/repos/huggingface/datasets/issues/1290/events
https://github.com/huggingface/datasets/issues/1290
759,339,989
MDU6SXNzdWU3NTkzMzk5ODk=
1,290
imdb dataset cannot be downloaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[]
closed
false
null
[]
null
[ "Hi @rabeehk , I am unable to reproduce your problem locally.\r\nCan you try emptying the cache (removing the content of `/idiap/temp/rkarimi/cache_home_1/datasets`) and retry ?", "Hi,\r\nthanks, I did remove the cache and still the same error here\r\n\r\n```\r\n>>> a = datasets.load_dataset(\"imdb\", split=\"train\")\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\nDownloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3...\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads\r\nTraceback (most recent call last): \r\n File \"<stdin>\", line 1, in <module>\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 558, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 73, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=4902716, num_examples=3680, dataset_name='imdb')}]\r\n```\r\n\r\ndatasets version\r\n```\r\ndatasets 1.1.2 <pip>\r\ntensorflow-datasets 4.1.0 <pip>\r\n\r\n```", "resolved with moving to version 1.1.3" ]
"2020-12-08T10:47:36Z"
"2020-12-24T17:38:09Z"
"2020-12-24T17:38:09Z"
CONTRIBUTOR
null
null
null
Hi, please find below the error I get when loading the imdb train split. Thanks. `>>> datasets.load_dataset("imdb", split="train")` errors: ``` cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3... cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=7486451, num_examples=5628, dataset_name='imdb')}] ```
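As the comments note, the mismatch came from a stale or partial cached download; besides upgrading, forcing a fresh download is the usual way out. A hedged sketch for the `datasets` 1.x releases discussed in this thread, where `GenerateMode` is the download-mode enum:

```python
import datasets
from datasets import GenerateMode

# Drop the stale cached copy and download the archives again
ds = datasets.load_dataset(
    "imdb",
    split="train",
    download_mode=GenerateMode.FORCE_REDOWNLOAD,
)
print(ds)
```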
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1290/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1290/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2904/comments
https://api.github.com/repos/huggingface/datasets/issues/2904/events
https://github.com/huggingface/datasets/issues/2904
995,814,222
I_kwDODunzps47WutO
2,904
FORCE_REDOWNLOAD does not work
{ "avatar_url": "https://avatars.githubusercontent.com/u/5278299?v=4", "events_url": "https://api.github.com/users/anoopkatti/events{/privacy}", "followers_url": "https://api.github.com/users/anoopkatti/followers", "following_url": "https://api.github.com/users/anoopkatti/following{/other_user}", "gists_url": "https://api.github.com/users/anoopkatti/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anoopkatti", "id": 5278299, "login": "anoopkatti", "node_id": "MDQ6VXNlcjUyNzgyOTk=", "organizations_url": "https://api.github.com/users/anoopkatti/orgs", "received_events_url": "https://api.github.com/users/anoopkatti/received_events", "repos_url": "https://api.github.com/users/anoopkatti/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anoopkatti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anoopkatti/subscriptions", "type": "User", "url": "https://api.github.com/users/anoopkatti" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.\r\n\r\nThe second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompressed data in the extraction cache directory.\r\n\r\nIf we fix the extraction cache mechanism to uncompress a local file if it changed then it should fix the issue.\r\nCurrently the extraction cache mechanism only takes into account the path of the compressed file, which is an issue.", "Facing the same issue, is there any way to overtake this issue until it will be fixed? ", "You can clear your extraction cache in the meantime (by default at `~/.cache/huggingface/datasets/downloads/extracted`)" ]
"2021-09-14T09:45:26Z"
"2021-10-06T09:37:19Z"
null
NONE
null
null
null
## Describe the bug With GenerateMode.FORCE_REDOWNLOAD, the documentation says

```
+------------------------------------+-----------+---------+
|                                    | Downloads | Dataset |
+====================================+===========+=========+
| `REUSE_DATASET_IF_EXISTS` (default)| Reuse     | Reuse   |
+------------------------------------+-----------+---------+
| `REUSE_CACHE_IF_EXISTS`            | Reuse     | Fresh   |
+------------------------------------+-----------+---------+
| `FORCE_REDOWNLOAD`                 | Fresh     | Fresh   |
+------------------------------------+-----------+---------+
```

However, the old dataset is loaded even when FORCE_REDOWNLOAD is chosen. ## Steps to reproduce the bug ```python import pandas as pd from datasets import load_dataset, GenerateMode pd.DataFrame(range(5), columns=['numbers']).to_csv('/tmp/test.tsv.gz', index=False) ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD) print(ee) pd.DataFrame(range(10), columns=['numerals']).to_csv('/tmp/test.tsv.gz', index=False) ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD) print(ee) ``` ## Expected results Dataset({ features: ['numbers'], num_rows: 5 }) Dataset({ features: ['numerals'], num_rows: 10 }) ## Actual results Dataset({ features: ['numbers'], num_rows: 5 }) Dataset({ features: ['numbers'], num_rows: 5 }) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10 - Python version: 3.7.10 - PyArrow version: 3.0.0
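Following the maintainer's suggestion in the comments above, a small sketch of the interim workaround — wipe the extraction cache so the changed compressed file gets re-extracted. The default cache path below may differ if `HF_DATASETS_CACHE` or `HF_HOME` is set on your machine:

```python
import shutil
from pathlib import Path

# Default location of the extraction cache mentioned in the comments;
# adjust if your HF cache lives somewhere else
extracted = Path.home() / ".cache/huggingface/datasets/downloads/extracted"

if extracted.exists():
    shutil.rmtree(extracted)  # forces re-extraction of compressed files
```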
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2904/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2904/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4891/comments
https://api.github.com/repos/huggingface/datasets/issues/4891/events
https://github.com/huggingface/datasets/pull/4891
1,350,589,813
PR_kwDODunzps49x382
4,891
Fix missing tags in dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2022-08-25T09:14:17Z"
"2022-09-22T14:39:02Z"
"2022-08-25T13:43:34Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4891.diff", "html_url": "https://github.com/huggingface/datasets/pull/4891", "merged_at": "2022-08-25T13:43:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/4891.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4891" }
Fix missing tags in dataset cards:
- aslg_pc12
- librispeech_lm
- mwsc
- opus100
- qasc
- quail
- squadshifts
- winograd_wsc

This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to:
- #4833
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4891/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4891/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1888/comments
https://api.github.com/repos/huggingface/datasets/issues/1888/events
https://github.com/huggingface/datasets/pull/1888
809,241,123
MDExOlB1bGxSZXF1ZXN0NTc0MTM2MDU4
1,888
Docs for adding new column on formatted dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Close #1872" ]
"2021-02-16T11:45:00Z"
"2021-03-30T14:01:03Z"
"2021-02-16T11:58:57Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1888.diff", "html_url": "https://github.com/huggingface/datasets/pull/1888", "merged_at": "2021-02-16T11:58:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/1888.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1888" }
As mentioned in #1872, we should add to the documentation how the format gets updated when new columns are added. Close #1872
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1888/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1888/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1697/comments
https://api.github.com/repos/huggingface/datasets/issues/1697/events
https://github.com/huggingface/datasets/pull/1697
781,126,579
MDExOlB1bGxSZXF1ZXN0NTUwOTAzNzI5
1,697
Update DialogRE DatasetCard
{ "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "events_url": "https://api.github.com/users/vineeths96/events{/privacy}", "followers_url": "https://api.github.com/users/vineeths96/followers", "following_url": "https://api.github.com/users/vineeths96/following{/other_user}", "gists_url": "https://api.github.com/users/vineeths96/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vineeths96", "id": 50873201, "login": "vineeths96", "node_id": "MDQ6VXNlcjUwODczMjAx", "organizations_url": "https://api.github.com/users/vineeths96/orgs", "received_events_url": "https://api.github.com/users/vineeths96/received_events", "repos_url": "https://api.github.com/users/vineeths96/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vineeths96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vineeths96/subscriptions", "type": "User", "url": "https://api.github.com/users/vineeths96" }
[]
closed
false
null
[]
null
[ "Same as #1698, can you add a task tag for dialogue-modeling (under sequence-modeling) :) ?" ]
"2021-01-07T08:22:33Z"
"2021-01-07T13:34:28Z"
"2021-01-07T13:34:28Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1697.diff", "html_url": "https://github.com/huggingface/datasets/pull/1697", "merged_at": "2021-01-07T13:34:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/1697.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1697" }
Update the information in the dataset card for the Dialog RE dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1697/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1697/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6490/comments
https://api.github.com/repos/huggingface/datasets/issues/6490/events
https://github.com/huggingface/datasets/issues/6490
2,037,204,892
I_kwDODunzps55bUec
6,490
`load_dataset(...,save_infos=True)` not working without loading script
{ "avatar_url": "https://avatars.githubusercontent.com/u/114978051?v=4", "events_url": "https://api.github.com/users/morganveyret/events{/privacy}", "followers_url": "https://api.github.com/users/morganveyret/followers", "following_url": "https://api.github.com/users/morganveyret/following{/other_user}", "gists_url": "https://api.github.com/users/morganveyret/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/morganveyret", "id": 114978051, "login": "morganveyret", "node_id": "U_kgDOBtptAw", "organizations_url": "https://api.github.com/users/morganveyret/orgs", "received_events_url": "https://api.github.com/users/morganveyret/received_events", "repos_url": "https://api.github.com/users/morganveyret/repos", "site_admin": false, "starred_url": "https://api.github.com/users/morganveyret/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/morganveyret/subscriptions", "type": "User", "url": "https://api.github.com/users/morganveyret" }
[]
open
false
null
[]
null
[ "Also, once the README.md exists in the python environment it is used when loading another dataset in the same format (e.g. json) since it always resolves the path to the same directory.\r\nThe consequence here is any other dataset won't load because of infos mismatch.\r\nTo reproduce this aspect:\r\n1. Do a `load_datasets(...,save_infos=True)` with one dataset without a loading script\r\n2. Try to load another dataset without a loading script in the same format (e.g. json) but with a different schema " ]
"2023-12-12T08:09:18Z"
"2023-12-12T08:36:22Z"
null
NONE
null
null
null
### Describe the bug It seems that saving a dataset's infos back into the card file is not working for datasets without a loading script. After tracking the problem a bit, it looks like saving the infos uses `Builder.get_imported_module_dir()` as its destination directory. Internally this is a call to `inspect.getfile()`, but since the actual builder class used is dynamically created (cf. `datasets.load.configure_builder_class`), this method actually returns the path of the parent builder class (e.g. `datasets.packaged_modules.json.JSON`). ### Steps to reproduce the bug 1. Have a local dataset without any loading script 2. Make sure there are no dataset infos in the README.md 3. Load with `save_infos=True` 4. No change in the dataset README.md 5. A new README.md file is created in the directory of the parent builder class (e.g. for json in `.../site-packages/datasets/packaged_modules/json/README.md`) ### Expected behavior The dataset README.md should be updated and no file should be created in the python environment. ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.3 - `fsspec` version: 2023.6.0
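To make the mechanism above concrete, a self-contained sketch using a stdlib class as a stand-in for a packaged builder. The `__module__` assignment here mimics, but is not literally, how the dynamic builder class ends up attributed to a module far away from the user's dataset directory:

```python
import inspect
import json.decoder

# Dynamically create a subclass of a class that lives in another module,
# keeping the parent's __module__ -- roughly what happens to the builder
# class created by datasets.load.configure_builder_class
Derived = type("Derived", (json.decoder.JSONDecoder,), {})
Derived.__module__ = json.decoder.JSONDecoder.__module__

# inspect.getfile resolves a class through its __module__, so it reports
# the parent module's file, nowhere near the user's dataset directory
print(inspect.getfile(Derived))  # e.g. /usr/lib/python3.10/json/decoder.py
```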
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6490/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6490/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2761
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2761/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2761/comments
https://api.github.com/repos/huggingface/datasets/issues/2761/events
https://github.com/huggingface/datasets/issues/2761
961,568,287
MDU6SXNzdWU5NjE1NjgyODc=
2,761
Error loading C4 realnewslike dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/32061512?v=4", "events_url": "https://api.github.com/users/danshirron/events{/privacy}", "followers_url": "https://api.github.com/users/danshirron/followers", "following_url": "https://api.github.com/users/danshirron/following{/other_user}", "gists_url": "https://api.github.com/users/danshirron/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danshirron", "id": 32061512, "login": "danshirron", "node_id": "MDQ6VXNlcjMyMDYxNTEy", "organizations_url": "https://api.github.com/users/danshirron/orgs", "received_events_url": "https://api.github.com/users/danshirron/received_events", "repos_url": "https://api.github.com/users/danshirron/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danshirron/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danshirron/subscriptions", "type": "User", "url": "https://api.github.com/users/danshirron" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi @danshirron, \r\n`c4` was updated few days back by @lhoestq. The new configs are `['en', 'en.noclean', 'en.realnewslike', 'en.webtextlike'].` You'll need to remove any older version of this dataset you previously downloaded and then run `load_dataset` again with new configuration.", "@bhavitvyamalik @lhoestq , just tried the above and got:\r\n>>> a=datasets.load_dataset('c4','en.realnewslike')\r\nDownloading: 3.29kB [00:00, 1.66MB/s] \r\nDownloading: 2.40MB [00:00, 12.6MB/s] \r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/load.py\", line 819, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/load.py\", line 701, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1049, in __init__\r\n super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py\", line 268, in __init__\r\n self.config, self.config_id = self._create_builder_config(\r\n File \"/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py\", line 360, in _create_builder_config\r\n raise ValueError(\r\nValueError: BuilderConfig en.realnewslike not found. Available: ['en', 'realnewslike', 'en.noblocklist', 'en.noclean']\r\n>>> \r\n\r\ndatasets version is 1.11.0\r\n", "I think I had an older version of datasets installed and that's why I commented the old configurations in my last comment, my bad! I re-checked and updated it to latest version (`datasets==1.11.0`) and it's showing `available configs: ['en', 'realnewslike', 'en.noblocklist', 'en.noclean']`. \r\n\r\nI tried `raw_datasets = load_dataset('c4', 'realnewslike')` and the download started. Make sure you don't have any old copy of this dataset and you download it fresh using the latest version of datasets. Sorry for the mix up!", "It works. I probably had some issue with the cache. after cleaning it im able to download the dataset. Thanks" ]
"2021-08-05T08:16:58Z"
"2021-08-08T19:44:34Z"
"2021-08-08T19:44:34Z"
NONE
null
null
null
## Describe the bug Error loading the C4 realnewslike dataset: validation split size mismatch. ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ``` ## Expected results Success on data loading. ## Actual results ``` Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15.3M/15.3M [00:00<00:00, 28.1MB/s] Traceback (most recent call last): File "run_mlm_tf.py", line 794, in <module> main() File "run_mlm_tf.py", line 425, in main raw_datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir) File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/load.py", line 843, in load_dataset builder_instance.download_and_prepare( File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 608, in download_and_prepare self._download_and_prepare( File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 698, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='validation', num_bytes=38165657946, num_examples=13799838, dataset_name='c4'), 'recorded': SplitInfo(name='validation', num_bytes=37875873, num_examples=13863, dataset_name='c4')}] ``` ## Environment info - `datasets` version: 1.10.2 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 4.0.1
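Not from the thread itself, but a hedged sketch of the usual escape hatch when only the recorded split sizes are stale: the `datasets` 1.x API exposed an `ignore_verifications` flag (later versions replaced it with `verification_mode`) that skips the check raising `NonMatchingSplitsSizesError`:

```python
from datasets import load_dataset

# Skip the recorded-vs-generated split size verification; only sensible
# when you trust the downloaded data more than the stale metadata.
raw_datasets = load_dataset("c4", "realnewslike", ignore_verifications=True)
```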
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2761/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2761/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3839/comments
https://api.github.com/repos/huggingface/datasets/issues/3839/events
https://github.com/huggingface/datasets/issues/3839
1,161,183,482
I_kwDODunzps5FNkD6
3,839
CI is broken for Windows
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
"2022-03-07T10:06:42Z"
"2022-05-20T14:13:43Z"
"2022-03-07T10:07:24Z"
MEMBER
null
null
null
## Describe the bug See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355 ``` ___________________ test_datasetdict_from_text_split[test] ____________________ [gw0] win32 -- Python 3.7.11 C:\tools\miniconda3\envs\py37\python.exe split = 'test' text_path = 'C:\\Users\\circleci\\AppData\\Local\\Temp\\pytest-of-circleci\\pytest-0\\popen-gw0\\data6\\dataset.txt' tmp_path = WindowsPath('C:/Users/circleci/AppData/Local/Temp/pytest-of-circleci/pytest-0/popen-gw0/test_datasetdict_from_text_spl7') @pytest.mark.parametrize("split", [None, NamedSplit("train"), "train", "test"]) def test_datasetdict_from_text_split(split, text_path, tmp_path): if split: path = {split: text_path} else: split = "train" path = {"train": text_path, "test": text_path} cache_dir = tmp_path / "cache" expected_features = {"text": "string"} > dataset = TextDatasetReader(path, cache_dir=cache_dir).read() tests\io\test_text.py:118: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\io\text.py:43: in read use_auth_token=use_auth_token, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:588: in download_and_prepare self._download_prepared_from_hf_gcs(dl_manager.download_config) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:630: in _download_prepared_from_hf_gcs reader.download_from_hf_gcs(download_config, relative_data_dir) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\arrow_reader.py:260: in download_from_hf_gcs downloaded_dataset_info = cached_path(remote_dataset_info.replace(os.sep, "/")) C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:301: in cached_path download_desc=download_config.download_desc, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:560: in get_from_cache headers=headers, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:476: in http_head max_retries=max_retries, C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:397: in _request_with_retry response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\api.py:61: in request return session.request(method=method, url=url, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:529: in request resp = self.send(prep, **send_kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:645: in send r = adapter.send(request, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:840: in unbound_on_send return self._on_request(adapter, request, *a, **kwargs) C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:780: in _on_request match, match_failed_reasons = self._find_match(request) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <responses.RequestsMock object at 0x000002048AD70588> request = <PreparedRequest [HEAD]> def _find_first_match(self, request): match_failed_reasons = [] > for i, match in enumerate(self._matches): E AttributeError: 'RequestsMock' object has no attribute '_matches' C:\tools\miniconda3\envs\py37\lib\site-packages\moto\core\models.py:289: AttributeError ```
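The traceback points at `moto` patching internals of the `responses` library that no longer exist; not the repository's actual fix, but a hedged sketch of the usual remedy — pinning mutually compatible versions in the test requirements (the exact bounds below are illustrative assumptions, not the real pins):

```python
# setup.py, test extras (excerpt) -- illustrative version pins only
TESTS_REQUIRE = [
    "moto[s3,server]==2.0.4",  # moto reaches into responses internals such as RequestsMock._matches
    "responses<0.18",          # later responses releases reorganized those internals
]
```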
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3839/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3839/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/734
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/734/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/734/comments
https://api.github.com/repos/huggingface/datasets/issues/734/events
https://github.com/huggingface/datasets/pull/734
721,767,848
MDExOlB1bGxSZXF1ZXN0NTAzNjMwMDcz
734
Fix GLUE metric description
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
[]
closed
false
null
[]
null
[]
"2020-10-14T20:44:14Z"
"2020-10-15T09:27:43Z"
"2020-10-15T09:27:42Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/734.diff", "html_url": "https://github.com/huggingface/datasets/pull/734", "merged_at": "2020-10-15T09:27:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/734.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/734" }
Small typo: the description says translation instead of prediction.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/734/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/734/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1901/comments
https://api.github.com/repos/huggingface/datasets/issues/1901/events
https://github.com/huggingface/datasets/pull/1901
810,845,605
MDExOlB1bGxSZXF1ZXN0NTc1NDY5MDUy
1,901
Fix OPUS dataset download errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/3883941?v=4", "events_url": "https://api.github.com/users/YangWang92/events{/privacy}", "followers_url": "https://api.github.com/users/YangWang92/followers", "following_url": "https://api.github.com/users/YangWang92/following{/other_user}", "gists_url": "https://api.github.com/users/YangWang92/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/YangWang92", "id": 3883941, "login": "YangWang92", "node_id": "MDQ6VXNlcjM4ODM5NDE=", "organizations_url": "https://api.github.com/users/YangWang92/orgs", "received_events_url": "https://api.github.com/users/YangWang92/received_events", "repos_url": "https://api.github.com/users/YangWang92/repos", "site_admin": false, "starred_url": "https://api.github.com/users/YangWang92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YangWang92/subscriptions", "type": "User", "url": "https://api.github.com/users/YangWang92" }
[]
closed
false
null
[]
null
[]
"2021-02-18T07:39:41Z"
"2021-02-18T15:07:20Z"
"2021-02-18T09:39:21Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1901.diff", "html_url": "https://github.com/huggingface/datasets/pull/1901", "merged_at": "2021-02-18T09:39:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/1901.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1901" }
Replace http with https. https://github.com/huggingface/datasets/issues/854 https://discuss.huggingface.co/t/cannot-download-wmt16/2081
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1901/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1901/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4123/comments
https://api.github.com/repos/huggingface/datasets/issues/4123/events
https://github.com/huggingface/datasets/issues/4123
1,196,367,512
I_kwDODunzps5HTx6Y
4,123
Building C4 takes forever
{ "avatar_url": "https://avatars.githubusercontent.com/u/15899312?v=4", "events_url": "https://api.github.com/users/StellaAthena/events{/privacy}", "followers_url": "https://api.github.com/users/StellaAthena/followers", "following_url": "https://api.github.com/users/StellaAthena/following{/other_user}", "gists_url": "https://api.github.com/users/StellaAthena/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/StellaAthena", "id": 15899312, "login": "StellaAthena", "node_id": "MDQ6VXNlcjE1ODk5MzEy", "organizations_url": "https://api.github.com/users/StellaAthena/orgs", "received_events_url": "https://api.github.com/users/StellaAthena/received_events", "repos_url": "https://api.github.com/users/StellaAthena/repos", "site_admin": false, "starred_url": "https://api.github.com/users/StellaAthena/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StellaAthena/subscriptions", "type": "User", "url": "https://api.github.com/users/StellaAthena" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi @StellaAthena, thanks for reporting.\r\n\r\nPlease note, that our `datasets` library performs several operations in order to load a dataset, among them:\r\n- it downloads all the required files: for C4 \"en\", 378.69 GB of JSON GZIPped files\r\n- it parses their content to generate the dataset\r\n- it caches the dataset in an Arrow file: for C4 \"en\", this file size is 1.87 TB\r\n- it memory-maps the Arrow file\r\n\r\nIf it suits your use case, you might load this dataset in streaming mode:\r\n- no Arrow file is generated\r\n- you can iterate over elements immediately (no need to wait to download all the entire files)\r\n\r\n```python\r\nIn [45]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"c4\", \"en\", split=\"train\", streaming=True)\r\n ...: for item in ds:\r\n ...: print(item)\r\n ...: break\r\n ...: \r\n{'text': 'Beginners BBQ Class Taking Place in Missoula!\\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.', 'timestamp': '2019-04-25T12:57:54Z', 'url': 'https://klyq.com/beginners-bbq-class-taking-place-in-missoula/'}\r\n```\r\nI hope this is useful for your use case." ]
"2022-04-07T17:41:30Z"
"2023-06-26T22:01:29Z"
"2023-06-26T22:01:29Z"
NONE
null
null
null
## Describe the bug C4-en is a 300 GB dataset. However, when I try to download it through the hub it takes over _six hours_ to generate the train/test split from the downloaded files. This is an absurd amount of time and an unnecessary waste of resources. ## Steps to reproduce the bug ```python c4 = datasets.load_dataset("c4", "en") ``` ## Expected results I would like to be able to download pre-split data. ## Environment info - `datasets` version: 2.0.0 - Platform: Linux-5.13.0-35-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
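Building on the streaming suggestion in the comments above, a hedged sketch of sampling a few records without ever triggering the multi-hour Arrow-cache build (`islice` is standard library; the config and split names follow the issue):

```python
from itertools import islice

from datasets import load_dataset

# Streaming parses records on the fly while they download, so the
# train/test Arrow-cache generation step is skipped entirely.
c4_stream = load_dataset("c4", "en", split="train", streaming=True)
for example in islice(c4_stream, 3):
    print(example["url"])
```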
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4123/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4123/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6129/comments
https://api.github.com/repos/huggingface/datasets/issues/6129/events
https://github.com/huggingface/datasets/pull/6129
1,841,563,517
PR_kwDODunzps5Xcmuw
6,129
Release 2.14.4
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006053 / 0.011353 (-0.005299) | 0.003532 / 0.011008 (-0.007476) | 0.081930 / 0.038508 (0.043422) | 0.059043 / 0.023109 (0.035934) | 0.322785 / 0.275898 (0.046887) | 0.378158 / 0.323480 (0.054678) | 0.004709 / 0.007986 (-0.003277) | 0.002907 / 0.004328 (-0.001421) | 0.061516 / 0.004250 (0.057266) | 0.047209 / 0.037052 (0.010157) | 0.346885 / 0.258489 (0.088396) | 0.381011 / 0.293841 (0.087170) | 0.027491 / 0.128546 (-0.101055) | 0.008014 / 0.075646 (-0.067632) | 0.260663 / 0.419271 (-0.158608) | 0.045427 / 0.043533 (0.001894) | 0.315277 / 0.255139 (0.060138) | 0.377902 / 0.283200 (0.094703) | 0.021371 / 0.141683 (-0.120311) | 1.416350 / 1.452155 (-0.035804) | 1.483345 / 1.492716 (-0.009372) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203660 / 0.018006 (0.185654) | 0.569081 / 0.000490 (0.568591) | 0.002742 / 0.000200 (0.002542) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023456 / 0.037411 (-0.013955) | 0.073954 / 0.014526 (0.059428) | 0.082991 / 0.176557 (-0.093566) | 0.144781 / 0.737135 (-0.592354) | 0.083346 / 0.296338 (-0.212992) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.391542 / 0.215209 (0.176333) | 3.909505 / 2.077655 (1.831850) | 
1.862234 / 1.504120 (0.358114) | 1.676076 / 1.541195 (0.134881) | 1.727595 / 1.468490 (0.259105) | 0.501769 / 4.584777 (-4.083008) | 3.083697 / 3.745712 (-0.662016) | 2.819751 / 5.269862 (-2.450111) | 1.867265 / 4.565676 (-2.698411) | 0.057575 / 0.424275 (-0.366700) | 0.006478 / 0.007607 (-0.001129) | 0.466684 / 0.226044 (0.240640) | 4.657982 / 2.268929 (2.389054) | 2.347052 / 55.444624 (-53.097573) | 1.964688 / 6.876477 (-4.911789) | 2.077821 / 2.142072 (-0.064252) | 0.590591 / 4.805227 (-4.214636) | 0.124585 / 6.500664 (-6.376079) | 0.059468 / 0.075469 (-0.016001) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223484 / 1.841788 (-0.618304) | 18.104638 / 8.074308 (10.030330) | 13.755126 / 10.191392 (3.563734) | 0.143158 / 0.680424 (-0.537266) | 0.017147 / 0.534201 (-0.517054) | 0.337427 / 0.579283 (-0.241856) | 0.352270 / 0.434364 (-0.082094) | 0.383718 / 0.540337 (-0.156619) | 0.534973 / 1.386936 (-0.851963) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006039 / 0.011353 (-0.005314) | 0.003735 / 0.011008 (-0.007274) | 0.061954 / 0.038508 (0.023446) | 0.061786 / 0.023109 (0.038677) | 0.429420 / 0.275898 (0.153522) | 0.457629 / 0.323480 (0.134149) | 0.004748 / 0.007986 (-0.003237) | 0.002843 / 0.004328 (-0.001485) | 0.061811 / 0.004250 (0.057560) | 0.048740 / 0.037052 (0.011687) | 0.430066 / 0.258489 (0.171577) | 0.465971 / 0.293841 (0.172130) | 0.027577 / 0.128546 (-0.100969) | 0.007981 / 0.075646 (-0.067665) | 0.067580 / 0.419271 (-0.351692) | 0.042058 / 0.043533 (-0.001475) | 0.428412 / 0.255139 (0.173273) | 0.451054 / 0.283200 (0.167855) | 0.020850 / 0.141683 (-0.120833) | 1.453907 / 1.452155 (0.001752) | 1.509914 / 1.492716 (0.017197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237713 / 0.018006 (0.219707) | 0.418064 / 0.000490 (0.417575) | 0.006411 / 0.000200 (0.006211) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024950 / 0.037411 (-0.012462) | 0.076806 / 0.014526 (0.062281) | 0.085237 / 0.176557 (-0.091320) | 0.137940 / 0.737135 (-0.599196) | 0.086266 / 0.296338 (-0.210072) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418666 / 0.215209 (0.203457) | 4.160547 / 2.077655 (2.082893) | 2.135671 / 1.504120 (0.631551) | 1.964985 / 1.541195 (0.423790) | 2.009447 / 1.468490 (0.540957) | 0.501377 / 4.584777 (-4.083400) | 3.064293 / 3.745712 (-0.681419) | 2.827153 / 5.269862 (-2.442709) | 1.854698 / 4.565676 (-2.710978) | 0.057662 / 0.424275 (-0.366613) | 0.006829 / 0.007607 (-0.000778) | 0.496730 / 0.226044 (0.270686) | 4.964663 / 2.268929 (2.695735) | 2.583133 / 55.444624 (-52.861491) | 2.329700 / 6.876477 (-4.546776) | 2.415521 / 2.142072 (0.273449) | 0.591973 / 4.805227 (-4.213255) | 0.126801 / 6.500664 (-6.373863) | 0.062811 / 0.075469 (-0.012659) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.348575 / 1.841788 (-0.493212) | 18.282861 / 8.074308 (10.208553) | 13.734056 / 10.191392 (3.542664) | 0.154987 / 0.680424 (-0.525437) | 0.016996 / 0.534201 (-0.517205) | 0.335264 / 0.579283 (-0.244019) | 0.356907 / 0.434364 (-0.077456) | 0.399185 / 0.540337 (-0.141152) | 0.540209 / 1.386936 (-0.846727) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#887bef1217e0f4441d57bf0f4d1e806df12f2c50 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006768 / 0.011353 (-0.004585) | 0.004250 / 0.011008 (-0.006758) | 0.086780 / 0.038508 (0.048272) | 0.080872 / 0.023109 (0.057762) | 0.309281 / 0.275898 (0.033383) | 0.352293 / 0.323480 (0.028814) | 0.005604 / 0.007986 (-0.002382) | 0.003544 / 0.004328 (-0.000784) | 0.066910 / 0.004250 (0.062659) | 0.055568 / 0.037052 (0.018516) | 0.314931 / 0.258489 (0.056442) | 0.366026 / 0.293841 (0.072185) | 0.031247 / 0.128546 (-0.097300) | 0.008860 / 0.075646 (-0.066786) | 0.293210 / 0.419271 (-0.126061) | 0.052868 / 0.043533 (0.009335) | 0.316769 / 0.255139 (0.061630) | 0.352128 / 0.283200 (0.068929) | 0.025492 / 0.141683 (-0.116190) | 1.478379 / 1.452155 (0.026224) | 1.573652 / 1.492716 (0.080936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294975 / 0.018006 (0.276968) | 0.615093 / 0.000490 (0.614603) | 0.004279 / 0.000200 (0.004079) | 0.000102 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031557 / 0.037411 (-0.005855) | 0.085026 / 0.014526 (0.070500) | 0.101221 / 0.176557 (-0.075336) | 0.157432 / 0.737135 (-0.579703) | 0.102350 / 0.296338 (-0.193988) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384158 / 0.215209 (0.168949) | 3.826656 / 2.077655 (1.749001) | 1.873510 / 1.504120 (0.369390) | 1.721913 / 1.541195 (0.180718) | 1.848779 / 1.468490 (0.380289) | 0.485128 / 4.584777 (-4.099649) | 3.656660 / 3.745712 (-0.089052) | 3.441964 / 5.269862 (-1.827898) | 2.150611 / 4.565676 (-2.415066) | 0.056869 / 0.424275 (-0.367406) | 0.007382 / 0.007607 (-0.000225) | 0.458751 / 0.226044 (0.232707) | 4.585028 / 2.268929 (2.316099) | 2.439538 / 55.444624 (-53.005086) | 2.116959 / 6.876477 (-4.759518) | 2.459220 / 2.142072 (0.317147) | 0.580907 / 4.805227 (-4.224321) | 0.134502 / 6.500664 (-6.366162) | 0.062528 / 0.075469 (-0.012941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251006 / 1.841788 (-0.590782) | 20.755849 / 8.074308 (12.681541) | 14.456950 / 10.191392 (4.265558) | 0.167074 / 0.680424 (-0.513350) | 0.018482 / 0.534201 (-0.515719) | 0.395867 / 0.579283 (-0.183416) | 0.415620 / 0.434364 (-0.018744) | 0.462247 / 0.540337 
(-0.078090) | 0.645762 / 1.386936 (-0.741174) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007050 / 0.011353 (-0.004303) | 0.004421 / 0.011008 (-0.006587) | 0.065312 / 0.038508 (0.026804) | 0.089790 / 0.023109 (0.066681) | 0.366318 / 0.275898 (0.090420) | 0.403542 / 0.323480 (0.080062) | 0.005695 / 0.007986 (-0.002290) | 0.003642 / 0.004328 (-0.000687) | 0.064540 / 0.004250 (0.060289) | 0.060933 / 0.037052 (0.023881) | 0.369004 / 0.258489 (0.110515) | 0.408056 / 0.293841 (0.114215) | 0.032124 / 0.128546 (-0.096422) | 0.008960 / 0.075646 (-0.066686) | 0.071267 / 0.419271 (-0.348005) | 0.049745 / 0.043533 (0.006212) | 0.367203 / 0.255139 (0.112064) | 0.383009 / 0.283200 (0.099809) | 0.025330 / 0.141683 (-0.116353) | 1.518290 / 1.452155 (0.066135) | 1.581738 / 1.492716 (0.089022) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.338281 / 0.018006 (0.320275) | 0.538195 / 0.000490 (0.537706) | 0.008498 / 0.000200 (0.008298) | 0.000121 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033279 / 0.037411 (-0.004133) | 0.093233 / 0.014526 (0.078707) | 0.106019 / 0.176557 (-0.070538) | 0.161262 / 0.737135 (-0.575874) | 0.109935 / 0.296338 (-0.186404) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411563 / 0.215209 (0.196354) | 4.102149 / 2.077655 (2.024495) | 2.108513 / 1.504120 (0.604393) | 1.945344 / 1.541195 (0.404150) | 2.066964 
/ 1.468490 (0.598474) | 0.482771 / 4.584777 (-4.102006) | 3.659160 / 3.745712 (-0.086552) | 3.420833 / 5.269862 (-1.849029) | 2.147276 / 4.565676 (-2.418400) | 0.056957 / 0.424275 (-0.367318) | 0.007898 / 0.007607 (0.000290) | 0.482401 / 0.226044 (0.256357) | 4.821044 / 2.268929 (2.552115) | 2.567993 / 55.444624 (-52.876631) | 2.336165 / 6.876477 (-4.540312) | 2.545066 / 2.142072 (0.402994) | 0.580888 / 4.805227 (-4.224339) | 0.134092 / 6.500664 (-6.366572) | 0.062681 / 0.075469 (-0.012788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.379124 / 1.841788 (-0.462664) | 21.627949 / 8.074308 (13.553641) | 15.064818 / 10.191392 (4.873426) | 0.169707 / 0.680424 (-0.510716) | 0.018671 / 0.534201 (-0.515530) | 0.400496 / 0.579283 (-0.178787) | 0.415542 / 0.434364 (-0.018822) | 0.484351 / 0.540337 (-0.055986) | 0.646046 / 1.386936 (-0.740890) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007113 / 0.011353 (-0.004240) | 0.004436 / 0.011008 (-0.006572) | 0.087422 / 0.038508 (0.048914) | 0.085996 / 0.023109 (0.062887) | 0.311772 / 0.275898 (0.035873) | 0.353281 / 0.323480 (0.029801) | 0.004562 / 0.007986 (-0.003423) | 0.003840 / 0.004328 (-0.000488) | 0.066500 / 0.004250 (0.062250) | 0.061293 / 0.037052 (0.024241) | 0.328840 / 0.258489 (0.070351) | 0.365587 / 0.293841 (0.071746) | 0.031802 / 0.128546 (-0.096744) | 0.008881 / 0.075646 (-0.066765) | 0.289671 / 0.419271 (-0.129601) | 0.053348 / 0.043533 (0.009816) | 0.307822 / 0.255139 (0.052683) | 0.342559 / 0.283200 (0.059360) | 0.025760 / 0.141683 (-0.115923) | 1.509944 / 1.452155 (0.057789) | 1.556634 / 1.492716 (0.063918) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282036 / 0.018006 (0.264029) | 0.608350 / 0.000490 (0.607860) | 0.004843 / 
0.000200 (0.004643) | 0.000108 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029810 / 0.037411 (-0.007601) | 0.086215 / 0.014526 (0.071689) | 0.102200 / 0.176557 (-0.074356) | 0.158051 / 0.737135 (-0.579084) | 0.103083 / 0.296338 (-0.193255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392119 / 0.215209 (0.176910) | 3.895796 / 2.077655 (1.818141) | 1.921118 / 1.504120 (0.416998) | 1.754271 / 1.541195 (0.213076) | 1.880991 / 1.468490 (0.412501) | 0.481158 / 4.584777 (-4.103618) | 3.609210 / 3.745712 (-0.136502) | 3.412018 / 5.269862 (-1.857843) | 2.131710 / 4.565676 (-2.433967) | 0.057122 / 0.424275 (-0.367153) | 0.007444 / 0.007607 (-0.000163) | 0.468880 / 0.226044 (0.242835) | 4.682441 / 2.268929 (2.413512) | 2.505613 / 55.444624 (-52.939012) | 2.149655 / 6.876477 (-4.726822) | 2.465904 / 2.142072 (0.323832) | 0.578877 / 4.805227 (-4.226350) | 0.133504 / 6.500664 (-6.367160) | 0.061422 / 0.075469 (-0.014047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.269395 / 1.841788 (-0.572393) | 21.107558 / 8.074308 (13.033250) | 15.318502 / 10.191392 (5.127110) | 0.165273 / 0.680424 (-0.515151) | 0.018783 / 0.534201 (-0.515418) | 0.396259 / 0.579283 (-0.183024) | 0.412907 / 0.434364 (-0.021457) | 0.465723 / 0.540337 (-0.074615) | 0.638414 / 1.386936 (-0.748522) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007083 / 0.011353 (-0.004270) | 0.004216 / 0.011008 (-0.006793) | 0.065362 / 0.038508 (0.026854) | 0.095454 / 0.023109 (0.072345) | 0.364220 / 0.275898 (0.088322) | 0.417650 / 0.323480 (0.094170) | 0.006114 / 0.007986 (-0.001872) | 0.003577 / 0.004328 (-0.000751) | 0.064830 / 0.004250 (0.060579) | 0.062535 / 0.037052 (0.025483) | 0.381844 / 0.258489 (0.123355) | 0.418996 / 0.293841 (0.125155) | 0.031386 / 0.128546 (-0.097160) | 0.008913 / 0.075646 (-0.066733) | 0.070860 / 0.419271 (-0.348411) | 0.049132 / 0.043533 (0.005599) | 0.360406 / 0.255139 (0.105267) | 0.392407 / 0.283200 (0.109207) | 0.024611 / 0.141683 (-0.117072) | 1.509051 / 1.452155 (0.056896) | 1.570288 / 1.492716 (0.077572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.368611 / 0.018006 (0.350605) | 0.537587 / 0.000490 (0.537098) | 0.028056 / 0.000200 (0.027856) | 0.000317 / 0.000054 (0.000262) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031570 / 0.037411 (-0.005841) | 0.088985 / 0.014526 (0.074460) | 0.105268 / 0.176557 (-0.071288) | 0.156724 / 0.737135 (-0.580412) | 0.105266 / 0.296338 (-0.191073) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413861 / 0.215209 (0.198652) | 4.127001 / 2.077655 (2.049347) | 2.112114 / 1.504120 (0.607994) | 1.945200 / 1.541195 (0.404005) | 2.083031 / 1.468490 (0.614540) | 0.488086 / 4.584777 (-4.096691) | 3.565584 / 3.745712 (-0.180128) | 3.380782 / 5.269862 (-1.889079) | 2.103481 / 4.565676 (-2.462195) | 0.058203 / 0.424275 (-0.366072) | 0.007996 / 0.007607 (0.000389) | 0.487986 / 0.226044 (0.261941) | 4.871023 / 2.268929 (2.602095) | 2.584632 / 55.444624 (-52.859992) | 2.240103 / 6.876477 (-4.636374) | 2.555165 / 2.142072 (0.413092) | 0.591950 / 4.805227 (-4.213278) | 0.134919 / 6.500664 (-6.365745) | 0.062868 / 0.075469 (-0.012601) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369731 / 1.841788 (-0.472057) | 21.497888 / 8.074308 (13.423580) | 14.555054 / 10.191392 (4.363662) | 0.168768 / 0.680424 (-0.511656) | 0.018837 / 0.534201 (-0.515364) | 0.394512 / 0.579283 (-0.184771) | 0.405459 / 0.434364 (-0.028905) | 0.475479 / 0.540337 (-0.064858) | 0.631994 / 1.386936 (-0.754942) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009072 / 0.011353 (-0.002280) | 0.004894 / 0.011008 (-0.006114) | 0.108790 / 0.038508 (0.070282) | 0.081783 / 0.023109 (0.058674) | 0.381963 / 0.275898 (0.106064) | 0.450700 / 0.323480 (0.127220) | 0.006961 / 0.007986 (-0.001025) | 0.004035 / 0.004328 (-0.000293) | 0.081420 / 0.004250 (0.077169) | 0.058029 / 0.037052 (0.020976) | 0.437453 / 0.258489 (0.178964) | 0.472607 / 0.293841 (0.178766) | 0.048663 / 0.128546 (-0.079884) | 0.013512 / 0.075646 (-0.062134) | 0.406009 / 0.419271 (-0.013262) | 0.067616 / 0.043533 (0.024084) | 0.383641 / 0.255139 (0.128502) | 0.456734 / 0.283200 (0.173534) | 0.033391 / 0.141683 (-0.108292) | 1.753529 / 1.452155 (0.301375) | 1.859831 / 1.492716 (0.367115) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215128 / 0.018006 (0.197122) | 0.538261 / 0.000490 (0.537771) | 0.005430 / 0.000200 (0.005230) | 0.000124 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032664 / 0.037411 (-0.004748) | 0.093465 / 0.014526 (0.078939) | 0.106637 / 0.176557 (-0.069919) | 0.173642 / 0.737135 (-0.563494) | 0.113944 / 0.296338 (-0.182394) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629212 / 0.215209 
(0.414003) | 6.116729 / 2.077655 (4.039075) | 2.818000 / 1.504120 (1.313880) | 2.515317 / 1.541195 (0.974122) | 2.466588 / 1.468490 (0.998098) | 0.850815 / 4.584777 (-3.733962) | 5.051292 / 3.745712 (1.305579) | 4.472138 / 5.269862 (-0.797724) | 2.968317 / 4.565676 (-1.597360) | 0.100173 / 0.424275 (-0.324102) | 0.008407 / 0.007607 (0.000800) | 0.743972 / 0.226044 (0.517928) | 7.397619 / 2.268929 (5.128690) | 3.596681 / 55.444624 (-51.847943) | 2.854674 / 6.876477 (-4.021803) | 3.114274 / 2.142072 (0.972201) | 1.064879 / 4.805227 (-3.740348) | 0.215981 / 6.500664 (-6.284683) | 0.078159 / 0.075469 (0.002690) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.543291 / 1.841788 (-0.298497) | 23.244641 / 8.074308 (15.170333) | 20.784610 / 10.191392 (10.593218) | 0.222002 / 0.680424 (-0.458422) | 0.028584 / 0.534201 (-0.505617) | 0.478563 / 0.579283 (-0.100720) | 0.556101 / 0.434364 (0.121737) | 0.547446 / 0.540337 (0.007109) | 0.764318 / 1.386936 (-0.622618) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008651 / 0.011353 (-0.002702) | 0.004925 / 0.011008 (-0.006083) | 0.078995 / 0.038508 (0.040487) | 0.092878 / 0.023109 (0.069769) | 0.485615 / 0.275898 (0.209717) | 0.532157 / 0.323480 (0.208677) | 0.008228 / 0.007986 (0.000243) | 0.004777 / 0.004328 (0.000449) | 0.076892 / 0.004250 (0.072642) | 0.066905 / 0.037052 (0.029853) | 0.465497 / 0.258489 (0.207008) | 0.520153 / 0.293841 (0.226312) | 0.047357 / 0.128546 (-0.081189) | 0.016870 / 0.075646 (-0.058776) | 0.090481 / 0.419271 (-0.328791) | 0.060774 / 0.043533 (0.017241) | 0.474368 / 0.255139 (0.219229) | 0.503981 / 0.283200 (0.220781) | 0.036025 / 0.141683 (-0.105658) | 1.769939 / 1.452155 (0.317784) | 1.851518 / 1.492716 (0.358802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265947 / 0.018006 (0.247941) | 0.532317 / 0.000490 (0.531828) | 0.004997 / 0.000200 (0.004797) | 0.000130 / 0.000054 
(0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034112 / 0.037411 (-0.003299) | 0.102290 / 0.014526 (0.087764) | 0.109989 / 0.176557 (-0.066567) | 0.182813 / 0.737135 (-0.554323) | 0.111774 / 0.296338 (-0.184565) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584893 / 0.215209 (0.369684) | 6.138505 / 2.077655 (4.060850) | 2.925761 / 1.504120 (1.421641) | 2.607320 / 1.541195 (1.066125) | 2.655827 / 1.468490 (1.187337) | 0.871140 / 4.584777 (-3.713637) | 5.051171 / 3.745712 (1.305459) | 4.708008 / 5.269862 (-0.561854) | 3.027485 / 4.565676 (-1.538191) | 0.100970 / 0.424275 (-0.323305) | 0.009640 / 0.007607 (0.002033) | 0.747818 / 0.226044 (0.521774) | 7.539930 / 2.268929 (5.271001) | 3.611693 / 55.444624 (-51.832931) | 2.924087 / 6.876477 (-3.952390) | 3.141993 / 2.142072 (0.999920) | 1.062921 / 4.805227 (-3.742306) | 0.213185 / 6.500664 (-6.287479) | 0.077146 / 0.075469 (0.001677) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.669182 / 1.841788 (-0.172606) | 23.810242 / 8.074308 (15.735934) | 21.220649 / 10.191392 (11.029257) | 0.212639 / 0.680424 (-0.467785) | 0.026705 / 0.534201 (-0.507496) | 0.469231 / 0.579283 (-0.110053) | 0.551672 / 0.434364 (0.117308) | 0.575043 / 0.540337 (0.034706) | 0.767511 / 1.386936 (-0.619425) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n" ]
"2023-08-08T15:43:56Z"
"2023-08-08T16:08:22Z"
"2023-08-08T15:49:06Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6129.diff", "html_url": "https://github.com/huggingface/datasets/pull/6129", "merged_at": "2023-08-08T15:49:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/6129.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6129" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6129/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6129/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4387
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4387/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4387/comments
https://api.github.com/repos/huggingface/datasets/issues/4387/events
https://github.com/huggingface/datasets/issues/4387
1,244,147,817
I_kwDODunzps5KKDBp
4,387
device/google/accessory/adk2012 - Git at Google
{ "avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4", "events_url": "https://api.github.com/users/Aeckard45/events{/privacy}", "followers_url": "https://api.github.com/users/Aeckard45/followers", "following_url": "https://api.github.com/users/Aeckard45/following{/other_user}", "gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aeckard45", "id": 87345839, "login": "Aeckard45", "node_id": "MDQ6VXNlcjg3MzQ1ODM5", "organizations_url": "https://api.github.com/users/Aeckard45/orgs", "received_events_url": "https://api.github.com/users/Aeckard45/received_events", "repos_url": "https://api.github.com/users/Aeckard45/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions", "type": "User", "url": "https://api.github.com/users/Aeckard45" }
[]
closed
false
null
[]
null
[]
"2022-05-22T04:57:19Z"
"2022-05-23T06:36:27Z"
"2022-05-23T06:36:27Z"
NONE
null
null
null
"git clone https://android.googlesource.com/device/google/accessory/adk2012" https://android.googlesource.com/device/google/accessory/adk2012/#:~:text=git%20clone%20https%3A//android.googlesource.com/device/google/accessory/adk2012
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4387/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4387/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/764/comments
https://api.github.com/repos/huggingface/datasets/issues/764/events
https://github.com/huggingface/datasets/pull/764
730,617,828
MDExOlB1bGxSZXF1ZXN0NTEwODkyMTk2
764
Adding Issue Template for Dataset Requests
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[]
closed
false
null
[]
null
[]
"2020-10-27T16:37:08Z"
"2020-10-27T17:25:26Z"
"2020-10-27T17:25:25Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/764.diff", "html_url": "https://github.com/huggingface/datasets/pull/764", "merged_at": "2020-10-27T17:25:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/764.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/764" }
adding .github/ISSUE_TEMPLATE/add-dataset.md
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/764/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/764/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/696/comments
https://api.github.com/repos/huggingface/datasets/issues/696/events
https://github.com/huggingface/datasets/pull/696
712,942,977
MDExOlB1bGxSZXF1ZXN0NDk2MzQzMjEy
696
Elasticsearch index docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-10-01T15:18:58Z"
"2020-10-02T07:48:19Z"
"2020-10-02T07:48:18Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/696.diff", "html_url": "https://github.com/huggingface/datasets/pull/696", "merged_at": "2020-10-02T07:48:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/696.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/696" }
I added the docs for ES indexes. I also added a `load_elasticsearch_index` method to load an index that has already been built. I checked the tests for the ES index and we have tests that mock Elasticsearch. I think this is good for now, but at some point it would be cool to have an end-to-end test against a real ES instance.
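For readers landing here from the docs, a minimal usage sketch of the index API this PR documents; the dataset, column, and `es_index_name` are illustrative assumptions, and a reachable Elasticsearch server at localhost:9200 is presumed:

```python
from datasets import load_dataset

# Load a dataset and build an Elasticsearch index over one of its text columns.
squad = load_dataset("squad", split="validation")
squad.add_elasticsearch_index(
    "context", host="localhost", port="9200", es_index_name="hf_squad_context"
)

# Query the index: returns the top-k scores and the matching examples.
scores, examples = squad.get_nearest_examples("context", "machine learning", k=5)

# Later, re-attach an index that was already built on the ES server,
# using the `load_elasticsearch_index` method added in this PR.
squad.load_elasticsearch_index(
    "context", es_index_name="hf_squad_context", host="localhost", port="9200"
)
```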
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/696/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/696/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/26
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/26/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/26/comments
https://api.github.com/repos/huggingface/datasets/issues/26/events
https://github.com/huggingface/datasets/pull/26
610,226,047
MDExOlB1bGxSZXF1ZXN0NDExNzA2NjA2
26
[Tests] Clean tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[]
"2020-04-30T16:38:29Z"
"2020-04-30T20:12:04Z"
"2020-04-30T20:12:03Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/26.diff", "html_url": "https://github.com/huggingface/datasets/pull/26", "merged_at": "2020-04-30T20:12:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/26.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/26" }
The abseil testing library (https://abseil.io/docs/python/quickstart.html) is better than the one I had before, so I decided to switch to it and changed the `setup.py` config file. Abseil is better supported and, I think, has a cleaner API for parametrized testing. I added a list of all dataset scripts that are currently on AWS, but will replace that once the API is integrated into this lib. One can now easily test a single function for a single dataset with: `tests/test_dataset_common.py::DatasetTest::test_load_dataset_wikipedia` NOTE: This PR is rebased on PR #29, so it should be merged after.
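A minimal sketch of the absl parameterized-test pattern adopted here; the dataset names and the test body are placeholders, not the PR's actual test code:

```python
from absl.testing import absltest, parameterized


class DatasetTest(parameterized.TestCase):
    @parameterized.named_parameters(
        ("_wikipedia", "wikipedia"),
        ("_squad", "squad"),
    )
    def test_load_dataset(self, dataset_name):
        # named_parameters appends each suffix to the method name, producing
        # e.g. DatasetTest.test_load_dataset_wikipedia, which matches the
        # single-test selection shown above.
        self.assertIsInstance(dataset_name, str)


if __name__ == "__main__":
    absltest.main()
```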
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/26/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/26/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3639/comments
https://api.github.com/repos/huggingface/datasets/issues/3639/events
https://github.com/huggingface/datasets/issues/3639
1,116,021,420
I_kwDODunzps5ChSKs
3,639
same value of precision, recall, f1 score at each epoch for classification task.
{ "avatar_url": "https://avatars.githubusercontent.com/u/10828657?v=4", "events_url": "https://api.github.com/users/Dhanachandra/events{/privacy}", "followers_url": "https://api.github.com/users/Dhanachandra/followers", "following_url": "https://api.github.com/users/Dhanachandra/following{/other_user}", "gists_url": "https://api.github.com/users/Dhanachandra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dhanachandra", "id": 10828657, "login": "Dhanachandra", "node_id": "MDQ6VXNlcjEwODI4NjU3", "organizations_url": "https://api.github.com/users/Dhanachandra/orgs", "received_events_url": "https://api.github.com/users/Dhanachandra/received_events", "repos_url": "https://api.github.com/users/Dhanachandra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dhanachandra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dhanachandra/subscriptions", "type": "User", "url": "https://api.github.com/users/Dhanachandra" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @Dhanachandra, \r\n\r\nWe have tests for all our metrics and they work as expected: under the hood, we use scikit-learn implementations.\r\n\r\nMaybe the cause is somewhere else. For example:\r\n- Is it a binary or a multiclass or a multilabel classification? Default computation of these metrics is for binary classification; if you would like multiclass or multilabel, you should pass the corresponding parameters; see their documentation (e.g.: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html) or code below:\r\n\r\nhttps://huggingface.co/docs/datasets/using_metrics.html#computing-the-metric-scores\r\n\r\n```python\r\nIn [1]: from datasets import load_metric\r\n\r\nIn [2]: precision = load_metric(\"precision\")\r\n\r\nIn [3]: print(precision.inputs_description)\r\n\r\nArgs:\r\n predictions: Predicted labels, as returned by a model.\r\n references: Ground truth labels.\r\n labels: The set of labels to include when average != 'binary', and\r\n their order if average is None. Labels present in the data can\r\n be excluded, for example to calculate a multiclass average ignoring\r\n a majority negative class, while labels not present in the data will\r\n result in 0 components in a macro average. For multilabel targets,\r\n labels are column indices. By default, all labels in y_true and\r\n y_pred are used in sorted order.\r\n average: This parameter is required for multiclass/multilabel targets.\r\n If None, the scores for each class are returned. Otherwise, this\r\n determines the type of averaging performed on the data:\r\n binary: Only report results for the class specified by pos_label.\r\n This is applicable only if targets (y_{true,pred}) are binary.\r\n micro: Calculate metrics globally by counting the total true positives,\r\n false negatives and false positives.\r\n macro: Calculate metrics for each label, and find their unweighted mean.\r\n This does not take label imbalance into account.\r\n weighted: Calculate metrics for each label, and find their average\r\n weighted by support (the number of true instances for each label).\r\n This alters ‘macro’ to account for label imbalance; it can result\r\n in an F-score that is not between precision and recall.\r\n samples: Calculate metrics for each instance, and find their average\r\n (only meaningful for multilabel classification).\r\n sample_weight: Sample weights.\r\n\r\nReturns:\r\n precision: Precision score.\r\n\r\nExamples:\r\n\r\n >>> precision_metric = datasets.load_metric(\"precision\")\r\n >>> results = precision_metric.compute(references=[0, 1], predictions=[0, 1])\r\n >>> print(results)\r\n {'precision': 1.0}\r\n\r\n >>> predictions = [0, 2, 1, 0, 0, 1]\r\n >>> references = [0, 1, 2, 0, 1, 2]\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='macro')\r\n >>> print(results)\r\n {'precision': 0.2222222222222222}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='micro')\r\n >>> print(results)\r\n {'precision': 0.3333333333333333}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average='weighted')\r\n >>> print(results)\r\n {'precision': 0.2222222222222222}\r\n >>> results = precision_metric.compute(predictions=predictions, references=references, average=None)\r\n >>> print(results)\r\n {'precision': array([0.66666667, 0. , 0. ])}\r\n```\r\n" ]
"2022-01-27T10:14:16Z"
"2022-02-24T09:02:18Z"
"2022-02-24T09:02:17Z"
NONE
null
null
null
**1st Epoch:**

01/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow
01/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow
01/27/2022 09:30:49 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/recall/default/default_experiment-1-0.arrow

PRECISION: {'precision': 0.7612903225806451}
RECALL: {'recall': 0.7612903225806451}
F1: {'f1': 0.7612903225806451}
{'eval_loss': 1.4658324718475342, 'eval_accuracy': 0.7612903118133545, 'eval_runtime': 30.0054, 'eval_samples_per_second': 46.492, 'eval_steps_per_second': 46.492, 'epoch': 3.0}

**4th Epoch:**

01/27/2022 09:56:55 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow
01/27/2022 09:56:56 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow
01/27/2022 09:56:56 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/recall/default/default_experiment-1-0.arrow

PRECISION: {'precision': 0.7698924731182796}
RECALL: {'recall': 0.7698924731182796}
F1: {'f1': 0.7698924731182796}

## Environment info

!git clone https://github.com/huggingface/transformers
%cd transformers
!pip install .
!pip install -r /content/transformers/examples/pytorch/token-classification/requirements.txt
!pip install datasets
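A plausible explanation for the identical scores above (note they also match `eval_accuracy`): when precision, recall, and F1 are micro-averaged over single-label multiclass predictions, all three reduce to accuracy. A minimal sketch with scikit-learn, which these metrics wrap under the hood, using made-up labels:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [0, 1, 2, 0, 1, 2, 1]
y_pred = [0, 2, 2, 0, 1, 1, 1]

# For single-label multiclass data, every false positive for one class is a
# false negative for another, so global TP/FP/FN counts make micro-averaged
# precision == recall == F1 == accuracy.
print(precision_score(y_true, y_pred, average="micro"))  # 0.7142857...
print(recall_score(y_true, y_pred, average="micro"))     # 0.7142857...
print(f1_score(y_true, y_pred, average="micro"))         # 0.7142857...
print(accuracy_score(y_true, y_pred))                    # 0.7142857...
```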
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3639/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3639/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3000/comments
https://api.github.com/repos/huggingface/datasets/issues/3000/events
https://github.com/huggingface/datasets/pull/3000
1,013,613,219
PR_kwDODunzps4skusL
3,000
Fix json loader when conversion not implemented
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "And we're already at PR number 3,000 ! ^^", "Thank you so much for fixing this @lhoestq 😍 ! I just tested the branch out and it works like a charm!" ]
"2021-10-01T17:47:22Z"
"2021-10-01T18:05:00Z"
"2021-10-01T17:54:23Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3000.diff", "html_url": "https://github.com/huggingface/datasets/pull/3000", "merged_at": "2021-10-01T17:54:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3000.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3000" }
Sometimes the Arrow JSON parser fails if the `block_size` is too small and returns an `ArrowNotImplementedError: JSON conversion to struct...` error. Increasing the block size makes it work again. Hopefully this helps with https://github.com/huggingface/datasets/issues/2799. I tried with the file mentioned in the issue and it worked for me. cc @lewtun, can you try again from this branch?
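For illustration, a minimal sketch of the underlying knob, assuming a local `data.json` file; at the pyarrow level, the JSON reader's `block_size` (the value this fix effectively increases) controls how many bytes are parsed per chunk:

```python
import pyarrow.json as paj

# Parse with a larger block size (here 10 MiB) so fewer chunk boundaries fall
# inside nested objects, avoiding the
# "ArrowNotImplementedError: JSON conversion to struct..." failure mode.
read_options = paj.ReadOptions(block_size=10 << 20)
table = paj.read_json("data.json", read_options=read_options)
print(table.schema)
```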
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3000/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3000/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4338
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4338/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4338/comments
https://api.github.com/repos/huggingface/datasets/issues/4338/events
https://github.com/huggingface/datasets/pull/4338
1,234,478,851
PR_kwDODunzps43vwsm
4,338
Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
[]
closed
false
null
[]
null
[ "Summary of CircleCI errors:\r\n\r\n- **XSum**: missing 6 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', and 'source_datasets'\r\n- **Yelp_polarity**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'", "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-05-12T21:02:08Z"
"2022-05-16T15:51:02Z"
"2022-05-16T15:42:59Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4338.diff", "html_url": "https://github.com/huggingface/datasets/pull/4338", "merged_at": "2022-05-16T15:42:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/4338.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4338" }
Adding evaluation metadata for: - Tweet Eval - Tweets Hate Speech Detection - VCTK - Weibo NER - Wisesight Sentiment - XSum - Yahoo Answers Topics - Yelp Polarity - Yelp Review Full
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4338/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4338/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5383
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5383/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5383/comments
https://api.github.com/repos/huggingface/datasets/issues/5383/events
https://github.com/huggingface/datasets/issues/5383
1,507,293,968
I_kwDODunzps5Z13sQ
5,383
IterableDataset missing column_names, differs from Dataset interface
{ "avatar_url": "https://avatars.githubusercontent.com/u/933687?v=4", "events_url": "https://api.github.com/users/iceboundflame/events{/privacy}", "followers_url": "https://api.github.com/users/iceboundflame/followers", "following_url": "https://api.github.com/users/iceboundflame/following{/other_user}", "gists_url": "https://api.github.com/users/iceboundflame/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iceboundflame", "id": 933687, "login": "iceboundflame", "node_id": "MDQ6VXNlcjkzMzY4Nw==", "organizations_url": "https://api.github.com/users/iceboundflame/orgs", "received_events_url": "https://api.github.com/users/iceboundflame/received_events", "repos_url": "https://api.github.com/users/iceboundflame/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iceboundflame/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iceboundflame/subscriptions", "type": "User", "url": "https://api.github.com/users/iceboundflame" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/50772274?v=4", "events_url": "https://api.github.com/users/patrickloeber/events{/privacy}", "followers_url": "https://api.github.com/users/patrickloeber/followers", "following_url": "https://api.github.com/users/patrickloeber/following{/other_user}", "gists_url": "https://api.github.com/users/patrickloeber/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickloeber", "id": 50772274, "login": "patrickloeber", "node_id": "MDQ6VXNlcjUwNzcyMjc0", "organizations_url": "https://api.github.com/users/patrickloeber/orgs", "received_events_url": "https://api.github.com/users/patrickloeber/received_events", "repos_url": "https://api.github.com/users/patrickloeber/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickloeber/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickloeber/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickloeber" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/50772274?v=4", "events_url": "https://api.github.com/users/patrickloeber/events{/privacy}", "followers_url": "https://api.github.com/users/patrickloeber/followers", "following_url": "https://api.github.com/users/patrickloeber/following{/other_user}", "gists_url": "https://api.github.com/users/patrickloeber/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickloeber", "id": 50772274, "login": "patrickloeber", "node_id": "MDQ6VXNlcjUwNzcyMjc0", "organizations_url": "https://api.github.com/users/patrickloeber/orgs", "received_events_url": "https://api.github.com/users/patrickloeber/received_events", "repos_url": "https://api.github.com/users/patrickloeber/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickloeber/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickloeber/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickloeber" } ]
null
[ "Another example is that `IterableDataset.map` does not have `fn_kwargs`, among other arguments. It makes it harder to convert code from Dataset to IterableDataset.", "Hi! `fn_kwargs` was added to `IterableDataset.map` in `datasets 2.5.0`, so please update your installation (`pip install -U datasets`) to use it.\r\n\r\nRegarding `column_names`, I agree we should add this property to `IterableDataset`. In the meantime, you can use `list(dataset.features.keys())` instead.", "Thanks! That's great news.\n\nOn Thu, Dec 22, 2022, 07:48 Mario Šaško ***@***.***> wrote:\n\n> Hi! fn_kwargs was added to IterableDataset.map in datasets 2.5.0, so\n> please update your installation (pip install -U datasets) to use it.\n>\n> Regarding column_names, I agree we should add this property to\n> IterableDataset. In the meantime, you can use\n> list(dataset.features.keys()) instead.\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5383#issuecomment-1362993633>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AAHD6N2EQUFEOUFDW3VHSILWORZ45ANCNFSM6AAAAAATGKWVGM>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n", "I'm marking this issue as a \"good first issue\", as it makes sense to have `IterableDataset.column_names` in the API. Besides the case when `features` are `None` (e.g., `features` are `None` after `map`), in which we can also return `column_names` as `None`, adding this property should be straightforward,", "Hi @mariosasko, I can work on this if that's ok?", "Yes! I've assigned you the issue." ]
"2022-12-22T05:27:02Z"
"2023-03-13T19:03:33Z"
"2023-03-13T19:03:33Z"
NONE
null
null
null
### Describe the bug

The documentation on [Stream](https://huggingface.co/docs/datasets/v1.18.2/stream.html) seems to imply that IterableDataset behaves just like a Dataset. However, examples like

```
dataset.map(augment_data, batched=True, remove_columns=dataset.column_names, ...)
```

will not work because `.column_names` does not exist on IterableDataset. I cannot find any clear explanation of why this is not available; is it an oversight? We do have `iterable_ds.features` available.

### Steps to reproduce the bug

See above

### Expected behavior

Dataset and IterableDataset would be expected to have the same interface, with any differences noted in the documentation.

### Environment info

n/a
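A small sketch of the interim workaround suggested in the comments above, until `column_names` lands on `IterableDataset`; it assumes a recent `datasets` version, and the dataset name and `augment_data` body are placeholders:

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset.
ds = load_dataset("rotten_tomatoes", split="train", streaming=True)

# Workaround: derive the column names from `features`
# (features may be None, e.g. after a previous `map`).
column_names = list(ds.features.keys()) if ds.features is not None else None

def augment_data(batch):
    # Placeholder transform: build a new column from an existing one.
    return {"text_upper": [t.upper() for t in batch["text"]]}

ds = ds.map(augment_data, batched=True, remove_columns=column_names)
```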
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5383/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5383/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1029/comments
https://api.github.com/repos/huggingface/datasets/issues/1029/events
https://github.com/huggingface/datasets/pull/1029
755,767,616
MDExOlB1bGxSZXF1ZXN0NTMxNDE2NzE4
1,029
Add PEC
{ "avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4", "events_url": "https://api.github.com/users/zhongpeixiang/events{/privacy}", "followers_url": "https://api.github.com/users/zhongpeixiang/followers", "following_url": "https://api.github.com/users/zhongpeixiang/following{/other_user}", "gists_url": "https://api.github.com/users/zhongpeixiang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zhongpeixiang", "id": 11826803, "login": "zhongpeixiang", "node_id": "MDQ6VXNlcjExODI2ODAz", "organizations_url": "https://api.github.com/users/zhongpeixiang/orgs", "received_events_url": "https://api.github.com/users/zhongpeixiang/received_events", "repos_url": "https://api.github.com/users/zhongpeixiang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zhongpeixiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhongpeixiang/subscriptions", "type": "User", "url": "https://api.github.com/users/zhongpeixiang" }
[]
closed
false
null
[]
null
[ "I'm a bit frustrated now to get this right.", "Hey @zhongpeixiang!\r\nReally nice addition here!\r\n\r\nDid you officially joined the sprint by posting [on the forum thread](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176) and joining our slack?\r\nI can't seem to find you there! Should I add you directly with your gmail address?", "> Hey @zhongpeixiang!\r\n> Really nice addition here!\r\n> \r\n> Did you officially joined the sprint by posting [on the forum thread](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176) and joining our slack?\r\n> I can't seem to find you there! Should I add you directly with your gmail address?\r\n\r\nThank you for the invitation. This initiative is awesome. Sadly I’m occupied by my thesis writing this month. Good luck 🤗", "As you want @zhongpeixiang (I was maybe not clear but that just mean that by posting on the forum thread that you participated in the current event you will get a special gift (a tee-shirt) for the contribution that you have already done here :-) Nothing more to do)", "> As you want @zhongpeixiang (I was maybe not clear but that just mean that by posting on the forum thread that you participated in the current event you will get a special gift (a tee-shirt) for the contribution that you have already done here :-) Nothing more to do)\r\n\r\nOh, I misunderstood the post. I'm glad to join." ]
"2020-12-03T02:46:08Z"
"2020-12-04T10:58:19Z"
"2020-12-03T16:15:06Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1029.diff", "html_url": "https://github.com/huggingface/datasets/pull/1029", "merged_at": "2020-12-03T16:15:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/1029.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1029" }
A persona-based empathetic conversation dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1029/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1029/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2332
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2332/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2332/comments
https://api.github.com/repos/huggingface/datasets/issues/2332/events
https://github.com/huggingface/datasets/pull/2332
879,041,608
MDExOlB1bGxSZXF1ZXN0NjMyNzk1NDE4
2,332
Add note about indices mapping in save_to_disk docstring
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2021-05-07T13:49:42Z"
"2021-05-07T17:20:48Z"
"2021-05-07T17:20:48Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2332.diff", "html_url": "https://github.com/huggingface/datasets/pull/2332", "merged_at": "2021-05-07T17:20:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/2332.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2332" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2332/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2332/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1473/comments
https://api.github.com/repos/huggingface/datasets/issues/1473/events
https://github.com/huggingface/datasets/pull/1473
762,055,694
MDExOlB1bGxSZXF1ZXN0NTM2NjQyODI5
1,473
add srwac
{ "avatar_url": "https://avatars.githubusercontent.com/u/11391118?v=4", "events_url": "https://api.github.com/users/IvanZidov/events{/privacy}", "followers_url": "https://api.github.com/users/IvanZidov/followers", "following_url": "https://api.github.com/users/IvanZidov/following{/other_user}", "gists_url": "https://api.github.com/users/IvanZidov/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/IvanZidov", "id": 11391118, "login": "IvanZidov", "node_id": "MDQ6VXNlcjExMzkxMTE4", "organizations_url": "https://api.github.com/users/IvanZidov/orgs", "received_events_url": "https://api.github.com/users/IvanZidov/received_events", "repos_url": "https://api.github.com/users/IvanZidov/repos", "site_admin": false, "starred_url": "https://api.github.com/users/IvanZidov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IvanZidov/subscriptions", "type": "User", "url": "https://api.github.com/users/IvanZidov" }
[]
closed
false
null
[]
null
[ "Connection error failed. Need rerun", "merging since the CI is fixed on master" ]
"2020-12-11T08:20:29Z"
"2020-12-17T11:40:59Z"
"2020-12-17T11:40:59Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1473.diff", "html_url": "https://github.com/huggingface/datasets/pull/1473", "merged_at": "2020-12-17T11:40:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/1473.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1473" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1473/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1473/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1183/comments
https://api.github.com/repos/huggingface/datasets/issues/1183/events
https://github.com/huggingface/datasets/pull/1183
757,806,570
MDExOlB1bGxSZXF1ZXN0NTMzMTEwOTY4
1,183
add mkb dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4", "events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}", "followers_url": "https://api.github.com/users/thevasudevgupta/followers", "following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}", "gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thevasudevgupta", "id": 53136577, "login": "thevasudevgupta", "node_id": "MDQ6VXNlcjUzMTM2NTc3", "organizations_url": "https://api.github.com/users/thevasudevgupta/orgs", "received_events_url": "https://api.github.com/users/thevasudevgupta/received_events", "repos_url": "https://api.github.com/users/thevasudevgupta/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions", "type": "User", "url": "https://api.github.com/users/thevasudevgupta" }
[]
closed
false
null
[]
null
[ "Could you update the languages tags before we merge @VasudevGupta7 ?", "done.", "thanks !" ]
"2020-12-05T23:44:33Z"
"2020-12-09T09:38:50Z"
"2020-12-09T09:38:50Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1183.diff", "html_url": "https://github.com/huggingface/datasets/pull/1183", "merged_at": "2020-12-09T09:38:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/1183.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1183" }
This PR will add Mann Ki Baat dataset (parallel data for Indian languages).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1183/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1183/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6458
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6458/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6458/comments
https://api.github.com/repos/huggingface/datasets/issues/6458/events
https://github.com/huggingface/datasets/pull/6458
2,016,577,761
PR_kwDODunzps5gqy4M
6,458
Lazy data files resolution
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
open
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005097 / 0.011353 (-0.006256) | 0.003523 / 0.011008 (-0.007485) | 0.062827 / 0.038508 (0.024319) | 0.051677 / 0.023109 (0.028568) | 0.248919 / 0.275898 (-0.026980) | 0.275892 / 0.323480 (-0.047588) | 0.003908 / 0.007986 (-0.004077) | 0.002622 / 0.004328 (-0.001706) | 0.048634 / 0.004250 (0.044383) | 0.037903 / 0.037052 (0.000850) | 0.255754 / 0.258489 (-0.002735) | 0.283343 / 0.293841 (-0.010498) | 0.027886 / 0.128546 (-0.100660) | 0.010849 / 0.075646 (-0.064797) | 0.208255 / 0.419271 (-0.211017) | 0.035664 / 0.043533 (-0.007869) | 0.254661 / 0.255139 (-0.000478) | 0.274366 / 0.283200 (-0.008834) | 0.017240 / 0.141683 (-0.124443) | 1.092952 / 1.452155 (-0.359203) | 1.148373 / 1.492716 (-0.344344) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091592 / 0.018006 (0.073586) | 0.301926 / 0.000490 (0.301436) | 0.000207 / 0.000200 (0.000007) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018525 / 0.037411 (-0.018887) | 0.060539 / 0.014526 (0.046014) | 0.073812 / 0.176557 (-0.102745) | 0.120655 / 0.737135 (-0.616480) | 0.076931 / 0.296338 (-0.219407) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282797 / 0.215209 (0.067588) | 2.746573 / 2.077655 (0.668918) | 1.477652 / 1.504120 (-0.026468) | 1.349922 / 1.541195 (-0.191273) | 1.374347 
/ 1.468490 (-0.094143) | 0.574096 / 4.584777 (-4.010681) | 2.383317 / 3.745712 (-1.362395) | 2.809320 / 5.269862 (-2.460541) | 1.758947 / 4.565676 (-2.806729) | 0.064029 / 0.424275 (-0.360246) | 0.004936 / 0.007607 (-0.002672) | 0.331403 / 0.226044 (0.105358) | 3.260908 / 2.268929 (0.991980) | 1.817670 / 55.444624 (-53.626954) | 1.525863 / 6.876477 (-5.350613) | 1.542017 / 2.142072 (-0.600055) | 0.638900 / 4.805227 (-4.166327) | 0.119485 / 6.500664 (-6.381179) | 0.042588 / 0.075469 (-0.032881) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.951583 / 1.841788 (-0.890205) | 11.621917 / 8.074308 (3.547609) | 10.511062 / 10.191392 (0.319670) | 0.130137 / 0.680424 (-0.550287) | 0.014048 / 0.534201 (-0.520153) | 0.290621 / 0.579283 (-0.288662) | 0.271665 / 0.434364 (-0.162699) | 0.331260 / 0.540337 (-0.209077) | 0.441621 / 1.386936 (-0.945316) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005272 / 0.011353 (-0.006081) | 0.003656 / 0.011008 (-0.007352) | 0.049245 / 0.038508 (0.010737) | 0.054130 / 0.023109 (0.031021) | 0.274775 / 0.275898 (-0.001123) | 0.296664 / 0.323480 (-0.026816) | 0.004870 / 0.007986 (-0.003115) | 0.002728 / 0.004328 (-0.001601) | 0.048087 / 0.004250 (0.043837) | 0.041448 / 0.037052 (0.004396) | 0.279110 / 0.258489 (0.020621) | 0.303660 / 0.293841 (0.009819) | 0.029767 / 0.128546 (-0.098779) | 0.010799 / 0.075646 (-0.064848) | 0.058650 / 0.419271 (-0.360622) | 0.033088 / 0.043533 (-0.010445) | 0.274456 / 0.255139 (0.019317) | 0.290206 / 0.283200 (0.007007) | 0.017259 / 0.141683 (-0.124424) | 1.176501 / 1.452155 (-0.275654) | 1.197552 / 1.492716 (-0.295165) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092865 / 0.018006 (0.074859) | 0.302437 / 0.000490 (0.301947) | 0.000209 / 0.000200 (0.000009) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021211 / 0.037411 (-0.016200) | 0.068858 / 0.014526 (0.054332) | 0.081783 / 0.176557 (-0.094773) | 0.120472 / 0.737135 (-0.616663) | 0.083900 / 0.296338 (-0.212438) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295157 / 0.215209 (0.079948) | 2.910979 / 2.077655 (0.833324) | 1.575772 / 1.504120 (0.071652) | 1.456955 / 1.541195 (-0.084239) | 1.468982 / 1.468490 (0.000492) | 0.560309 / 4.584777 (-4.024468) | 2.460171 / 3.745712 (-1.285541) | 2.805713 / 5.269862 (-2.464149) | 1.754074 / 4.565676 (-2.811603) | 0.063333 / 0.424275 (-0.360942) | 0.004940 / 0.007607 (-0.002667) | 0.346141 / 0.226044 (0.120097) | 3.463431 / 2.268929 (1.194502) | 1.929135 / 55.444624 (-53.515490) | 1.660191 / 6.876477 (-5.216286) | 1.668327 / 2.142072 (-0.473746) | 0.644183 / 4.805227 (-4.161044) | 0.115738 / 6.500664 (-6.384926) | 0.041347 / 0.075469 (-0.034122) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.961565 / 1.841788 (-0.880222) | 12.232589 / 8.074308 (4.158281) | 10.778774 / 10.191392 (0.587382) | 0.132709 / 0.680424 (-0.547715) | 0.015964 / 0.534201 (-0.518237) | 0.286944 / 0.579283 (-0.292340) | 0.279740 / 0.434364 (-0.154624) | 0.333024 / 0.540337 (-0.207314) | 0.438819 / 1.386936 (-0.948117) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#51002cb0325772adaf46d6f3ce01d41c01b51079 \"CML watermark\")\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6458). 
All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | 
read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005053 / 0.011353 (-0.006300) | 0.003537 / 0.011008 (-0.007472) | 0.062923 / 0.038508 (0.024415) | 0.053796 / 0.023109 (0.030687) | 0.242523 / 0.275898 (-0.033375) | 0.264014 / 0.323480 (-0.059466) | 0.002879 / 0.007986 (-0.005106) | 0.003273 / 0.004328 (-0.001055) | 0.048735 / 0.004250 (0.044484) | 0.037541 / 0.037052 (0.000488) | 0.248587 / 0.258489 (-0.009902) | 0.275531 / 0.293841 (-0.018310) | 0.027215 / 0.128546 (-0.101331) | 0.010466 / 0.075646 (-0.065180) | 0.206508 / 0.419271 (-0.212763) | 0.035606 / 0.043533 (-0.007927) | 0.251044 / 0.255139 (-0.004095) | 0.267183 / 0.283200 (-0.016016) | 0.018357 / 0.141683 (-0.123326) | 1.083513 / 1.452155 (-0.368642) | 1.152988 / 1.492716 (-0.339728) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091749 / 0.018006 (0.073742) | 0.299946 / 0.000490 (0.299456) | 0.000212 / 0.000200 (0.000013) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018300 / 0.037411 (-0.019111) | 0.060691 / 0.014526 (0.046166) | 0.072998 / 0.176557 (-0.103559) | 0.120581 / 0.737135 (-0.616554) | 0.073912 / 0.296338 (-0.222427) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277602 / 0.215209 (0.062393) | 2.719181 / 2.077655 (0.641526) | 1.450894 / 1.504120 (-0.053226) | 1.314344 / 1.541195 (-0.226851) | 1.351996 / 1.468490 (-0.116494) | 0.586231 / 4.584777 (-3.998546) | 2.349746 / 3.745712 (-1.395967) | 2.810060 / 5.269862 (-2.459802) | 1.761362 / 4.565676 (-2.804314) | 0.062535 / 0.424275 (-0.361740) | 0.004918 / 0.007607 (-0.002689) | 0.336091 / 0.226044 (0.110047) | 3.238139 / 2.268929 (0.969211) | 1.769734 / 55.444624 (-53.674890) | 1.505332 / 6.876477 (-5.371145) | 1.527875 / 2.142072 (-0.614198) | 0.640194 / 4.805227 (-4.165033) | 0.116567 / 6.500664 (-6.384097) | 0.042464 / 0.075469 (-0.033005) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.930919 / 1.841788 (-0.910869) | 11.462498 / 8.074308 (3.388190) | 10.575359 / 
10.191392 (0.383967) | 0.130567 / 0.680424 (-0.549857) | 0.014203 / 0.534201 (-0.519998) | 0.286944 / 0.579283 (-0.292339) | 0.264706 / 0.434364 (-0.169658) | 0.324820 / 0.540337 (-0.215517) | 0.434579 / 1.386936 (-0.952357) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005164 / 0.011353 (-0.006189) | 0.003442 / 0.011008 (-0.007567) | 0.050146 / 0.038508 (0.011638) | 0.050800 / 0.023109 (0.027691) | 0.263405 / 0.275898 (-0.012493) | 0.284876 / 0.323480 (-0.038604) | 0.004011 / 0.007986 (-0.003975) | 0.002602 / 0.004328 (-0.001726) | 0.046742 / 0.004250 (0.042491) | 0.040393 / 0.037052 (0.003341) | 0.265052 / 0.258489 (0.006563) | 0.294217 / 0.293841 (0.000377) | 0.028429 / 0.128546 (-0.100118) | 0.010418 / 0.075646 (-0.065228) | 0.057285 / 0.419271 (-0.361987) | 0.032137 / 0.043533 (-0.011396) | 0.265867 / 0.255139 (0.010728) | 0.284764 / 0.283200 (0.001564) | 0.017448 / 0.141683 (-0.124235) | 1.172830 / 1.452155 (-0.279325) | 1.223982 / 1.492716 (-0.268735) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091859 / 0.018006 (0.073853) | 0.285421 / 0.000490 (0.284931) | 0.000220 / 0.000200 (0.000020) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021620 / 0.037411 (-0.015792) | 0.069058 / 0.014526 (0.054532) | 0.082560 / 0.176557 (-0.093997) | 0.119511 / 0.737135 (-0.617624) | 0.082318 / 0.296338 (-0.214021) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291499 / 0.215209 (0.076290) | 2.863352 / 2.077655 (0.785698) | 1.557242 / 1.504120 (0.053122) | 1.430170 / 1.541195 (-0.111024) | 1.432850 / 1.468490 (-0.035640) | 0.559716 / 4.584777 (-4.025061) | 2.385405 / 3.745712 (-1.360307) | 2.748938 / 5.269862 (-2.520924) | 1.740802 / 4.565676 (-2.824874) | 0.061811 / 0.424275 (-0.362465) | 0.005174 / 0.007607 (-0.002433) | 0.348687 / 0.226044 (0.122642) | 3.420120 / 2.268929 (1.151191) | 1.918278 / 55.444624 (-53.526346) | 1.631559 / 6.876477 (-5.244918) | 1.635850 / 2.142072 (-0.506222) | 0.644144 / 4.805227 (-4.161083) | 0.115823 / 6.500664 (-6.384841) | 0.041255 / 0.075469 (-0.034214) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960066 / 1.841788 (-0.881722) | 12.011372 / 8.074308 (3.937064) | 10.580532 / 10.191392 (0.389140) | 0.134763 / 0.680424 (-0.545661) | 0.017027 / 0.534201 (-0.517174) | 0.290484 / 0.579283 (-0.288799) | 0.285171 / 0.434364 (-0.149193) | 0.322453 / 0.540337 (-0.217884) | 0.438088 / 1.386936 (-0.948848) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b3fc42882a2d84d7482c27063f1e19539e99b9d3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005212 / 0.011353 (-0.006141) | 0.003440 / 0.011008 (-0.007568) | 0.063612 / 0.038508 (0.025104) | 0.049070 / 0.023109 (0.025961) | 0.269748 / 0.275898 (-0.006150) | 0.283270 / 0.323480 (-0.040210) | 0.002892 / 0.007986 (-0.005094) | 0.002693 / 0.004328 (-0.001635) | 0.049710 / 0.004250 (0.045459) | 0.036707 / 0.037052 (-0.000345) | 0.299035 / 0.258489 (0.040546) | 0.296443 / 0.293841 (0.002602) | 0.028095 / 0.128546 (-0.100451) | 0.010682 / 0.075646 (-0.064964) | 0.213914 / 0.419271 (-0.205358) | 0.036210 / 0.043533 (-0.007323) | 0.235720 / 0.255139 (-0.019419) | 0.252687 / 0.283200 (-0.030512) | 0.016985 / 0.141683 (-0.124698) | 1.099024 / 1.452155 (-0.353130) | 1.162970 / 1.492716 (-0.329746) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093114 / 0.018006 (0.075108) | 0.305168 / 0.000490 (0.304678) | 0.000216 / 0.000200 (0.000016) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018370 / 0.037411 (-0.019041) | 0.060534 / 0.014526 (0.046008) | 0.073960 / 0.176557 (-0.102596) | 0.120325 / 0.737135 (-0.616810) | 0.073754 / 0.296338 (-0.222585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284244 / 0.215209 (0.069035) | 2.756854 / 2.077655 (0.679199) | 1.477304 / 1.504120 (-0.026816) | 1.374635 / 1.541195 (-0.166560) | 1.383284 / 1.468490 (-0.085206) | 0.564656 / 4.584777 (-4.020121) | 2.361719 / 3.745712 (-1.383993) | 2.794822 / 5.269862 (-2.475039) | 1.742981 / 4.565676 (-2.822696) | 0.063443 / 0.424275 (-0.360832) | 0.004952 / 0.007607 (-0.002655) | 0.342058 / 0.226044 (0.116014) | 3.351093 / 2.268929 (1.082164) | 1.857375 / 55.444624 (-53.587250) | 1.541680 / 6.876477 (-5.334797) | 1.580147 / 2.142072 (-0.561926) | 0.645216 / 4.805227 (-4.160012) | 0.118768 / 6.500664 (-6.381896) | 0.042115 / 0.075469 (-0.033354) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.925845 / 1.841788 (-0.915943) | 11.444147 / 8.074308 (3.369839) | 10.291297 / 10.191392 (0.099905) | 0.128129 / 0.680424 (-0.552295) | 0.013774 / 0.534201 (-0.520427) | 0.289278 / 0.579283 (-0.290005) | 0.262353 / 0.434364 (-0.172011) | 0.328517 / 0.540337 (-0.211820) | 0.436050 / 1.386936 (-0.950886) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after 
write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005666 / 0.011353 (-0.005687) | 0.003691 / 0.011008 (-0.007318) | 0.049361 / 0.038508 (0.010853) | 0.054245 / 0.023109 (0.031136) | 0.274433 / 0.275898 (-0.001465) | 0.285648 / 0.323480 (-0.037832) | 0.004080 / 0.007986 (-0.003906) | 0.002666 / 0.004328 (-0.001663) | 0.047539 / 0.004250 (0.043288) | 0.041001 / 0.037052 (0.003948) | 0.296018 / 0.258489 (0.037529) | 0.294542 / 0.293841 (0.000701) | 0.030546 / 0.128546 (-0.098001) | 0.010556 / 0.075646 (-0.065090) | 0.058146 / 0.419271 (-0.361126) | 0.033407 / 0.043533 (-0.010126) | 0.263977 / 0.255139 (0.008838) | 0.286228 / 0.283200 (0.003028) | 0.018088 / 0.141683 (-0.123595) | 1.121295 / 1.452155 (-0.330860) | 1.182183 / 1.492716 (-0.310533) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.104540 / 0.018006 (0.086534) | 0.303494 / 0.000490 (0.303004) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021274 / 0.037411 (-0.016137) | 0.070146 / 0.014526 (0.055621) | 0.080343 / 0.176557 (-0.096213) | 0.120017 / 0.737135 (-0.617119) | 0.081303 / 0.296338 (-0.215036) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294390 / 0.215209 (0.079181) | 2.883366 / 2.077655 (0.805711) | 1.564629 / 1.504120 (0.060509) | 1.432633 / 1.541195 (-0.108562) | 1.438786 / 1.468490 (-0.029704) | 0.569663 / 4.584777 (-4.015114) | 2.448691 / 3.745712 (-1.297021) | 2.817010 / 5.269862 (-2.452851) | 1.757274 / 4.565676 (-2.808402) | 0.064147 / 0.424275 (-0.360129) | 0.004910 / 0.007607 (-0.002697) | 0.344062 / 0.226044 (0.118018) | 3.394223 / 2.268929 (1.125294) | 1.927139 / 55.444624 (-53.517485) | 1.624983 / 6.876477 (-5.251494) | 1.629076 / 2.142072 (-0.512996) | 0.654239 / 4.805227 (-4.150988) | 0.117309 / 6.500664 (-6.383355) | 0.041067 / 0.075469 (-0.034402) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993184 / 1.841788 (-0.848604) | 11.969985 / 8.074308 (3.895677) 
| 10.363356 / 10.191392 (0.171964) | 0.130708 / 0.680424 (-0.549716) | 0.015577 / 0.534201 (-0.518624) | 0.289579 / 0.579283 (-0.289704) | 0.274875 / 0.434364 (-0.159488) | 0.326736 / 0.540337 (-0.213601) | 0.442770 / 1.386936 (-0.944166) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#796a47e388a5c5711a95bd649648608c18219ac5 \"CML watermark\")\n", "Getting the same windows error as in my other PR. I couldn't reproduce on my windows machine though 🧐 ", "`DataFilesList` is a list so we expect to be able to get its length with zero cost, which wouldn't be the case if we make it lazy no ? ", "But we don't call `len` on it, do we? And I couldn't find an instance of `DataFilesList` being used in GitHub's public repos.", "`DataFilesDict` is used in some repositories in dataset scripts when people want to list files from a repo using glob patterns", "Also making DataFilesList lazy would require to make the pickling more complex, since we don't want to resolve the data files when pickling. At the same time we want to get different hashes if the data files and origin metadata are different so revolving the patterns is needed in that case (we hash the data files when creating the config_id, used in the cache)", "> `DataFilesDict` is used in some repositories in dataset scripts when people want to list files from a repo using glob patterns\r\n\r\nWould be interesting to know how often these scripts call `len` or do random access on `DataFilesList`.\r\n\r\nStill, I think we should opt for a solution that makes more sense for us. To avoid the breaking change, we can define a `BuilderConfig.data_files` property that resolves this iterable. \r\n\r\n> Also making DataFilesList lazy would require to make the pickling more complex, since we don't want to resolve the data files when pickling. At the same time we want to get different hashes if the data files and origin metadata are different so revolving the patterns is needed in that case (we hash the data files when creating the config_id, used in the cache)\r\n\r\nThe `BuilderConfig.data_files` property suggested above should address this, no? \r\n\r\nI think we should be more careful not to make our API needlessly complex because of the YAML README feature. And if this can't be avoided, we should probably refactor the builder API.", "> The BuilderConfig.data_files property suggested above should address this, no?\r\n\r\nThat works indeed ! let me try something", "Implementing lazy DataFilesList and .data_files brings more complexity (less readable, more bad side effects) so I think the current solution is the best one", "I opened https://github.com/huggingface/datasets/pull/6493 to continue this and fix conflicts with https://github.com/huggingface/datasets/pull/6459" ]
"2023-11-29T13:18:44Z"
"2023-12-12T23:30:33Z"
null
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6458.diff", "html_url": "https://github.com/huggingface/datasets/pull/6458", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6458.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6458" }
Related to the discussion at https://github.com/huggingface/datasets/pull/6255, this makes the following code run in 2 seconds instead of more than 10: ```python from datasets import load_dataset ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False) ``` For some datasets with many configs and files it can be up to 100x faster. This is particularly important now that some datasets will be loaded from the Parquet export instead of the scripts. The data files are only resolved in the builder `__init__`. To do so, I added DataFilesPatternsList and DataFilesPatternsDict, which expose `.resolve()` to return resolved DataFilesList and DataFilesDict objects.
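A minimal, self-contained sketch of the `DataFilesPatternsList`/`DataFilesPatternsDict` idea described in this PR body — simplified assumptions only (the real classes in `datasets` carry origin metadata and resolve against the Hub, not just the local filesystem):

```python
import glob


class DataFilesPatternsList(list):
    """Holds glob patterns; the expensive file listing is deferred."""

    def resolve(self) -> list:
        # The filesystem is only touched here, e.g. from the builder's
        # __init__, instead of when the config is created.
        return [path for pattern in self for path in sorted(glob.glob(pattern))]


class DataFilesPatternsDict(dict):
    """Maps split names to DataFilesPatternsList instances."""

    def resolve(self) -> dict:
        return {split: patterns.resolve() for split, patterns in self.items()}


# Nothing is listed until resolve() is called:
patterns = DataFilesPatternsDict(train=DataFilesPatternsList(["data/train-*.parquet"]))
data_files = patterns.resolve()
```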
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6458/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6458/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2964
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2964/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2964/comments
https://api.github.com/repos/huggingface/datasets/issues/2964/events
https://github.com/huggingface/datasets/issues/2964
1,006,605,904
I_kwDODunzps47_5ZQ
2,964
Error when calculating Matthews Correlation Coefficient loaded with `load_metric`
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "After some more tests I've realized that this \"issue\" is due to the `numpy.float64` to `float` conversion, but when defining a function named `compute_metrics` as it follows:\r\n\r\n```python\r\ndef compute_metrics(eval_preds):\r\n metric = load_metric(\"matthews_correlation\")\r\n logits, labels = eval_preds\r\n predictions = np.argmax(logits, axis=1)\r\n return metric.compute(predictions=predictions, references=labels)\r\n```\r\n\r\nIt fails when the evaluation metrics are computed in the `Trainer` with the same error code `AttributeError: 'float' object has no attribute 'item'` as the output is not a `numpy.float64`... Maybe I'm doing something wrong, not sure!", "Ok after some more experiments I've realized that it's an issue from my side, at first I thought it was due to `fp16=True` in `TrainingArguments`, but in the end that may not be the issue, so I'll close this for now and check later, since the mistake is on my side :weary: Sorry for the inconvenience!" ]
"2021-09-24T15:55:21Z"
"2021-09-25T08:06:07Z"
"2021-09-25T08:06:07Z"
CONTRIBUTOR
null
null
null
## Describe the bug After loading the metric named "[Matthews Correlation Coefficient](https://huggingface.co/metrics/matthews_correlation)" from `🤗datasets`, the `.compute` method fails with the following exception `AttributeError: 'float' object has no attribute 'item'` (complete stack trace can be provided if required). ## Steps to reproduce the bug ```python import torch predictions = torch.ones((10,)) references = torch.zeros((10,)) from datasets import load_metric METRIC = load_metric("matthews_correlation") result = METRIC.compute(predictions=predictions, references=references) ``` ## Expected results We should expect a Python `dict` as it follows: ``` { "matthews_correlation": float() } ``` as defined in https://github.com/huggingface/datasets/blob/master/metrics/matthews_correlation/matthews_correlation.py, so the fix will imply removing `.item()`, since the value returned by the `scikit-learn` function is not a `torch.Tensor` but a `float`, which means that the `.item()` will fail. ## Actual results ``` Traceback (most recent call last): File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 59, in main app() File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 214, in __call__ return get_command(self)(*args, **kwargs) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1137, in __call__ return self.main(*args, **kwargs) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1062, in main rv = self.invoke(ctx) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1668, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 763, in invoke return __callback(*args, **kwargs) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 500, in wrapper return callback(**use_params) # type: ignore File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 43, in train metrics = trainer.evaluate() File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2051, in evaluate output = eval_loop( File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2292, in evaluation_loop metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) File "/home/alvaro.bartolome/XXX/xxx/metrics.py", line 20, in compute_metrics res = METRIC.compute(predictions=predictions, references=eval_preds.label_ids) File "/home/alvaro.bartolome/miniconda3/envs/lang/lib/python3.9/site-packages/datasets/metric.py", line 402, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/alvaro.bartolome/.cache/huggingface/modules/datasets_modules/metrics/matthews_correlation/0275f1e9a4d318e3ea8cdd87547ee0d58d894966616052e3d18444ac8ddd2357/matthews_correlation.py", line 88, in _compute "matthews_correlation": matthews_corrcoef(references, predictions, sample_weight=sample_weight).item(), AttributeError: 'float' object has no attribute 'item' ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-4.15.0-1113-azure-x86_64-with-glibc2.23 - Python version: 3.9.7 - PyArrow 
version: 5.0.0
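The body above already names the fix — drop the `.item()` call, since `sklearn.metrics.matthews_corrcoef` returns a plain Python float. A hedged sketch of what the corrected `_compute` could look like, simplified from the real metric script:

```python
from sklearn.metrics import matthews_corrcoef


def _compute(predictions, references, sample_weight=None):
    # matthews_corrcoef returns a Python float, so calling .item() on it
    # raises AttributeError; float(...) is a safe cast either way.
    return {
        "matthews_correlation": float(
            matthews_corrcoef(references, predictions, sample_weight=sample_weight)
        ),
    }
```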
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2964/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2964/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/76
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/76/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/76/comments
https://api.github.com/repos/huggingface/datasets/issues/76/events
https://github.com/huggingface/datasets/pull/76
616,579,228
MDExOlB1bGxSZXF1ZXN0NDE2NjYyMTk2
76
pin flake8
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[]
"2020-05-12T11:25:29Z"
"2020-05-12T11:27:35Z"
"2020-05-12T11:27:34Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/76.diff", "html_url": "https://github.com/huggingface/datasets/pull/76", "merged_at": "2020-05-12T11:27:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/76.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/76" }
Flake8's new version does not like our format. Pinning the version for now.
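For illustration, a dependency pin of the kind this PR describes; the version number below is a placeholder assumption, not taken from the diff:

```python
# e.g. in setup.py (version is hypothetical):
QUALITY_REQUIRES = ["black", "isort", "flake8==3.7.9"]  # pinned instead of a bare "flake8"
```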
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/76/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/76/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3722/comments
https://api.github.com/repos/huggingface/datasets/issues/3722/events
https://github.com/huggingface/datasets/pull/3722
1,138,770,211
PR_kwDODunzps4y3NrP
3,722
added electricity load diagram dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kashif", "id": 8100, "login": "kashif", "node_id": "MDQ6VXNlcjgxMDA=", "organizations_url": "https://api.github.com/users/kashif/orgs", "received_events_url": "https://api.github.com/users/kashif/received_events", "repos_url": "https://api.github.com/users/kashif/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "type": "User", "url": "https://api.github.com/users/kashif" }
[]
closed
false
null
[]
null
[]
"2022-02-15T14:29:29Z"
"2022-02-16T18:53:21Z"
"2022-02-16T18:48:07Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3722.diff", "html_url": "https://github.com/huggingface/datasets/pull/3722", "merged_at": "2022-02-16T18:48:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/3722.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3722" }
Initial version of the Electricity Load Diagrams time series dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3722/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3722/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1329
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1329/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1329/comments
https://api.github.com/repos/huggingface/datasets/issues/1329/events
https://github.com/huggingface/datasets/pull/1329
759,654,174
MDExOlB1bGxSZXF1ZXN0NTM0NjIxNzg0
1,329
Add yoruba ner corpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4", "events_url": "https://api.github.com/users/dadelani/events{/privacy}", "followers_url": "https://api.github.com/users/dadelani/followers", "following_url": "https://api.github.com/users/dadelani/following{/other_user}", "gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dadelani", "id": 23586676, "login": "dadelani", "node_id": "MDQ6VXNlcjIzNTg2Njc2", "organizations_url": "https://api.github.com/users/dadelani/orgs", "received_events_url": "https://api.github.com/users/dadelani/received_events", "repos_url": "https://api.github.com/users/dadelani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dadelani/subscriptions", "type": "User", "url": "https://api.github.com/users/dadelani" }
[]
closed
false
null
[]
null
[]
"2020-12-08T17:54:00Z"
"2020-12-08T23:11:12Z"
"2020-12-08T23:11:12Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1329.diff", "html_url": "https://github.com/huggingface/datasets/pull/1329", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1329.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1329" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1329/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1329/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1483/comments
https://api.github.com/repos/huggingface/datasets/issues/1483/events
https://github.com/huggingface/datasets/pull/1483
762,712,337
MDExOlB1bGxSZXF1ZXN0NTM3MjMxMzQ4
1,483
Added Times of India News Headlines Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/33005287?v=4", "events_url": "https://api.github.com/users/tanmoyio/events{/privacy}", "followers_url": "https://api.github.com/users/tanmoyio/followers", "following_url": "https://api.github.com/users/tanmoyio/following{/other_user}", "gists_url": "https://api.github.com/users/tanmoyio/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tanmoyio", "id": 33005287, "login": "tanmoyio", "node_id": "MDQ6VXNlcjMzMDA1Mjg3", "organizations_url": "https://api.github.com/users/tanmoyio/orgs", "received_events_url": "https://api.github.com/users/tanmoyio/received_events", "repos_url": "https://api.github.com/users/tanmoyio/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tanmoyio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmoyio/subscriptions", "type": "User", "url": "https://api.github.com/users/tanmoyio" }
[]
closed
false
null
[]
null
[ "@lhoestq @abhishekkrthakur what happened here ?\r\n", "@lhoestq everything alright here ?", "@tanmoyio please have patience. @lhoestq has to look at 150+ PRs and it may take time. The PR looks good to me but we wait for his confirmation :) 🤗 " ]
"2020-12-11T18:12:38Z"
"2020-12-14T18:08:08Z"
"2020-12-14T18:08:08Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1483.diff", "html_url": "https://github.com/huggingface/datasets/pull/1483", "merged_at": "2020-12-14T18:08:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/1483.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1483" }
Dataset name: Times of India News Headlines. Link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DPQMQH
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1483/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1483/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4443
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4443/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4443/comments
https://api.github.com/repos/huggingface/datasets/issues/4443/events
https://github.com/huggingface/datasets/issues/4443
1,259,606,334
I_kwDODunzps5LFBE-
4,443
Dataset Viewer issue for openclimatefix/nimrod-uk-1km
{ "avatar_url": "https://avatars.githubusercontent.com/u/32382826?v=4", "events_url": "https://api.github.com/users/ZYMXIXI/events{/privacy}", "followers_url": "https://api.github.com/users/ZYMXIXI/followers", "following_url": "https://api.github.com/users/ZYMXIXI/following{/other_user}", "gists_url": "https://api.github.com/users/ZYMXIXI/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ZYMXIXI", "id": 32382826, "login": "ZYMXIXI", "node_id": "MDQ6VXNlcjMyMzgyODI2", "organizations_url": "https://api.github.com/users/ZYMXIXI/orgs", "received_events_url": "https://api.github.com/users/ZYMXIXI/received_events", "repos_url": "https://api.github.com/users/ZYMXIXI/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ZYMXIXI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZYMXIXI/subscriptions", "type": "User", "url": "https://api.github.com/users/ZYMXIXI" }
[]
open
false
null
[]
null
[ "If I understand correctly, this is due to the key `split` missing in the line https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L41 of the script.\r\nMaybe @albertvillanova could confirm.", "I'm having a look.", "Indeed there are several issues in this dataset loading script.\r\n\r\nThe one pointed out by @severo: for the default configuration \"crops\": https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L244\r\n- The download manager downloads `_URL`\r\n- But `_URL` is not defined: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L41\r\n ```python\r\n _URL = {'train': []}\r\n ```\r\n- Afterwards, for each split, a different key in `_ULR` is used, but it only contains one key: \"train\"\r\n - \"valid\" key: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L260\r\n - \"test key: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L269\r\n \r\nThese keys do not exist inside `_URL`, thus the error message reported in the viewer: \r\n```\r\nException: KeyError\r\nMessage: 'valid'\r\n```", "Would anyone want to submit a Hub PR (or open a Discussion for the authors to be aware) to this dataset? https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km", "Hi, I'm the main author for that dataset, so I'll work on updating it! I was working on debugging some stuff awhile ago, which is what broke it. ", "I've opened a Discussion page, so that we can ask/answer and propose fixes until the script works properly: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/discussions/1\r\n\r\nCC: @julien-c @jacobbieker ", "can we close this issue and followup in the discussion?" ]
"2022-06-03T08:17:16Z"
"2023-09-25T12:15:08Z"
null
NONE
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4443/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4443/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/720/comments
https://api.github.com/repos/huggingface/datasets/issues/720/events
https://github.com/huggingface/datasets/issues/720
716,581,266
MDU6SXNzdWU3MTY1ODEyNjY=
720
OSError: Cannot find data file when not using the dummy dataset in RAG
{ "avatar_url": "https://avatars.githubusercontent.com/u/4112135?v=4", "events_url": "https://api.github.com/users/josemlopez/events{/privacy}", "followers_url": "https://api.github.com/users/josemlopez/followers", "following_url": "https://api.github.com/users/josemlopez/following{/other_user}", "gists_url": "https://api.github.com/users/josemlopez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/josemlopez", "id": 4112135, "login": "josemlopez", "node_id": "MDQ6VXNlcjQxMTIxMzU=", "organizations_url": "https://api.github.com/users/josemlopez/orgs", "received_events_url": "https://api.github.com/users/josemlopez/received_events", "repos_url": "https://api.github.com/users/josemlopez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/josemlopez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josemlopez/subscriptions", "type": "User", "url": "https://api.github.com/users/josemlopez" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "Same issue here. I will be digging further, but it looks like the [script](https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py#L132) is attempting to open a file that is not downloaded yet. \r\n\r\n```\r\n99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498.lock\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnpicklingError Traceback (most recent call last)\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 446 try:\r\n--> 447 return pickle.load(fid, **pickle_kwargs)\r\n 448 except Exception:\r\n\r\nUnpicklingError: pickle data was truncated\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n~/src/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 559 \r\n--> 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n\r\n~/src/datasets/src/datasets/builder.py in _prepare_split(self, split_generator)\r\n 847 writer.write(example)\r\n--> 848 finally:\r\n 849 num_examples, num_bytes = writer.finalize()\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)\r\n 227 try:\r\n--> 228 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 229 # return super(tqdm...) will not catch exception\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)\r\n 1132 try:\r\n-> 1133 for obj in iterable:\r\n 1134 yield obj\r\n\r\n/hdd/rag/cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)\r\n 131 break\r\n--> 132 vecs = np.load(open(vectors_files.pop(0), \"rb\"), allow_pickle=True)\r\n 133 vec_idx = 0\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 449 raise IOError(\r\n--> 450 \"Failed to interpret file %s as a pickle\" % repr(file))\r\n 451 \r\n\r\nOSError: Failed to interpret file <_io.BufferedReader name='/hdd/rag/downloads/99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498'> as a pickle\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n<ipython-input-8-24351ff8ce44> in <module>\r\n 4 retriever = RagRetriever.from_pretrained(\"facebook/rag-sequence-nq\", \r\n 5 index_name=\"exact\",\r\n----> 6 use_dummy_dataset=False)\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)\r\n 321 generator_tokenizer = rag_tokenizer.generator\r\n 322 return cls(\r\n--> 323 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer\r\n 324 )\r\n 325 \r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)\r\n 310 self.config = config\r\n 311 if self._init_retrieval:\r\n--> 312 self.init_retrieval()\r\n 313 \r\n 314 @classmethod\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in init_retrieval(self)\r\n 338 \r\n 339 logger.info(\"initializing retrieval\")\r\n--> 340 self.index.init_index()\r\n 341 \r\n 342 def postprocess_docs(self, docs, input_strings, prefix, n_docs, 
return_tensors=None):\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in init_index(self)\r\n 248 split=self.dataset_split,\r\n 249 index_name=self.index_name,\r\n--> 250 dummy=self.use_dummy_dataset,\r\n 251 )\r\n 252 self.dataset.set_format(\"numpy\", columns=[\"embeddings\"], output_all_columns=True)\r\n\r\n~/src/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 615 builder_instance.download_and_prepare(\r\n 616 download_config=download_config,\r\n--> 617 download_mode=download_mode,\r\n 618 ignore_verifications=ignore_verifications,\r\n 619 )\r\n\r\n~/src/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 481 # Sync info\r\n 482 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n--> 483 self.info.download_checksums = dl_manager.get_recorded_sizes_checksums()\r\n 484 self.info.size_in_bytes = self.info.dataset_size + self.info.download_size\r\n 485 # Save info\r\n\r\n~/src/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n--> 562 \r\n 563 # Update the info object with the splits.\r\n 564 self.info.splits = split_dict\r\n\r\nOSError: Cannot find data file.\r\n```\r\n\r\nThank you.", "An update on my end. This seems like a transient issue. Reran the script from scratch overnight with no errors. ", "Closing this one. Feel free to re-open if you have other questions about this issue" ]
"2020-10-07T14:27:13Z"
"2020-12-23T14:04:31Z"
"2020-12-23T14:04:31Z"
NONE
null
null
null
## Environment info transformers version: 3.3.1 Platform: Linux-4.19 Python version: 3.7.7 PyTorch version (GPU?): 1.6.0 Tensorflow version (GPU?): No Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ## To reproduce Steps to reproduce the behaviour: ``` import os os.environ['HF_DATASETS_CACHE'] = '/workspace/notebooks/POCs/cache' from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) ``` Plese note that I'm using the whole dataset: **use_dummy_dataset=False** After around 4 hours (downloading and some other things) this is returned: ``` Downloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /workspace/notebooks/POCs/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2... --------------------------------------------------------------------------- UnpicklingError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding) 459 try: --> 460 return pickle.load(fid, **pickle_kwargs) 461 except Exception: UnpicklingError: pickle data was truncated During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 552 # Prepare split will record examples associated to the split --> 553 self._prepare_split(split_generator, **prepare_split_kwargs) 554 except OSError: /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 840 for key, record in utils.tqdm( --> 841 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose 842 ): /opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs) 217 try: --> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): 219 # return super(tqdm...) 
will not catch exception /opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self) 1128 try: -> 1129 for obj in iterable: 1130 yield obj ~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files) 131 break --> 132 vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True) 133 vec_idx = 0 /opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding) 462 raise IOError( --> 463 "Failed to interpret file %s as a pickle" % repr(file)) 464 finally: OSError: Failed to interpret file <_io.BufferedReader name='/workspace/notebooks/POCs/cache/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-10-f28df370ac47> in <module> 1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets ----> 2 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs) 307 generator_tokenizer = rag_tokenizer.generator 308 return cls( --> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer 310 ) 311 /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer) 298 self.config = config 299 if self._init_retrieval: --> 300 self.init_retrieval() 301 302 @classmethod /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self) 324 325 logger.info("initializing retrieval") --> 326 self.index.init_index() 327 328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None): /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self) 238 split=self.dataset_split, 239 index_name=self.index_name, --> 240 dummy=self.use_dummy_dataset, 241 ) 242 self.dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True) /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 474 if not downloaded_from_gcs: 475 self._download_and_prepare( --> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 477 ) 478 # Sync info /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 553 self._prepare_split(split_generator, **prepare_split_kwargs) 554 except OSError: --> 555 raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) 556 557 if verify_infos: OSError: Cannot find data file. ``` Thanks
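Since the follow-up comments found this to be a transient, truncated download, one hedged workaround is to bypass the corrupted cache entry by forcing a fresh download (recent `datasets` versions accept the string value shown):

```python
from datasets import load_dataset

# Re-fetch instead of reusing a possibly truncated cached file:
ds = load_dataset(
    "wiki_dpr",
    "psgs_w100.nq.exact",
    download_mode="force_redownload",
)
```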
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/720/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/720/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4179
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4179/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4179/comments
https://api.github.com/repos/huggingface/datasets/issues/4179/events
https://github.com/huggingface/datasets/issues/4179
1,208,001,118
I_kwDODunzps5IAKJe
4,179
Dataset librispeech_asr fails to load
{ "avatar_url": "https://avatars.githubusercontent.com/u/59132?v=4", "events_url": "https://api.github.com/users/albertz/events{/privacy}", "followers_url": "https://api.github.com/users/albertz/followers", "following_url": "https://api.github.com/users/albertz/following{/other_user}", "gists_url": "https://api.github.com/users/albertz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertz", "id": 59132, "login": "albertz", "node_id": "MDQ6VXNlcjU5MTMy", "organizations_url": "https://api.github.com/users/albertz/orgs", "received_events_url": "https://api.github.com/users/albertz/received_events", "repos_url": "https://api.github.com/users/albertz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertz/subscriptions", "type": "User", "url": "https://api.github.com/users/albertz" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "@patrickvonplaten Hi! I saw that you prepared this? :)", "Another thing, but maybe this should be a separate issue: As I see from the code, it would try to use up to 16 simultaneous downloads? This is problematic for Librispeech or anything on OpenSLR. On [the homepage](https://www.openslr.org/), it says:\r\n\r\n> If you want to download things from this site, please download them one at a time, and please don't use any fancy software-- just download things from your browser or use 'wget'. We have a firewall rule to drop connections from hosts with more than 5 simultaneous connections, and certain types of download software may activate this rule.\r\n\r\nRelated: https://github.com/tensorflow/datasets/issues/3885", "Hey @albertz,\r\n\r\nNice to see you here! It's been a while ;-) ", "Sorry maybe the docs haven't been super clear here. By `split` we mean one of `train.500`, `train.360`, `train.100`, `validation`, `test`. For Librispeech, you'll have to specific a config (either `other` or `clean`) though:\r\n\r\n```py\r\ndatasets.load_dataset(\"librispeech_asr\", \"clean\")\r\n```\r\n\r\nshould work and give you all splits (being \"train\", \"test\", ...) for the clean config of the dataset.\r\n", "If you need both `\"clean\"` and `\"other\"` I think you'll have to do concatenate them as follows: \r\n\r\n```py\r\nfrom datasets import concatenate_datasets, load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\")\r\nclean = load_dataset(\"librispeech_asr\", \"clean\")\r\n\r\nlibrispeech = concatenate_datasets([other, clean])\r\n```\r\n\r\nSee https://huggingface.co/docs/datasets/v2.1.0/en/process#concatenate", "Downloading one split would be:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\", split=\"train.500\")\r\n```\r\n\r\n\r\n", "cc @lhoestq FYI maybe the docs can be improved here", "Ah thanks. But wouldn't it be easier/nicer (and more canonical) to just make it in a way that simply `load_dataset(\"librispeech_asr\")` works?", "Pinging @lhoestq here, think this could make sense! Not sure however how the dictionary would then look like", "Would it make sense to have `clean` as the default config ?\r\n\r\nAlso I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nI also opened a PR to improve the doc: https://github.com/huggingface/datasets/pull/4183", "> Would it make sense to have `clean` as the default config ?\r\n\r\nI think a user would expect that the default would give you the full dataset.\r\n\r\n> Also I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nIt does raise an error, but this error confused me because I did not understand why I needed a config, or why I could not simply download the whole dataset, which is what people usually do with Librispeech.\r\n", "+1 for @albertz. Also think lots of people download the whole dataset (`\"clean\"` + `\"other\"`) for Librispeech.\r\n\r\nThink there are also some people though who:\r\n- a) Don't have the memory to store the whole dataset\r\n- b) Just want to evaluate on one of the two configs", "Ok ! Adding the \"all\" configuration would do the job then, thanks ! 
In the \"all\" configuration we can merge all the train.xxx splits into one \"train\" split, or keep them separate depending on what's the most practical to use (probably put everything in \"train\" no ?)", "I'm not too familiar with how to work with HuggingFace datasets, but people often do some curriculum learning scheme, where they start with train.100, later go over to train.100 + train.360, and then later use the whole train (960h). It would be good if this is easily possible.\r\n", "Hey @albertz, \r\n\r\nopened a PR here. Think by adding the \"subdataset\" class to each split \"train\", \"dev\", \"other\" as shown here: https://github.com/huggingface/datasets/pull/4184/files#r853272727 it should be easily possible (e.g. with the filter function https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/main_classes#datasets.Dataset.filter )", "But also since everything is cached one could also just do:\r\n\r\n```python\r\nload_dataset(\"librispeech\", \"clean\", \"train.100\")\r\nload_dataset(\"librispeech\", \"clean\", \"train.100+train.360\")\r\nload_dataset(\"librispeech\" \"all\", \"train\") \r\n```", "Hi @patrickvonplaten ,\r\n\r\nload_dataset(\"librispeech_asr\", \"clean\", \"train.100\") actually downloads the whole dataset and not the 100 hr split, is this a bug?", "Hmm, I don't really see how that's possible: https://github.com/huggingface/datasets/blob/d22e39a0693d4be7410cf9a5d41fd5aac22be3cc/datasets/librispeech_asr/librispeech_asr.py#L51\r\n\r\nNote that all datasets related to `\"clean\"` are downloaded, but only `\"train.100\"` should be used. \r\n\r\ncc @lhoestq @albertvillanova @mariosasko can we do anything against download dataset links that are not related to the \"split\" that one actually needs. E.g. why should the split `\"train.360\"` be downloaded if for the user executes the above command:\r\n\r\n```py\r\nload_dataset(\"librispeech_asr\", \"clean\", \"train.100\")\r\n```", "@patrickvonplaten This problem is a bit harder than it may seem, and it has to do with how our scripts are structured - `_split_generators` downloads data for a split before its definition. There was an attempt to fix this in https://github.com/huggingface/datasets/pull/2249, but it wasn't flexible enough. 
Luckily, I have a plan of attack, and this issue is on our short-term roadmap, so I'll work on it soon.\r\n\r\nIn the meantime, one can use streaming or manually download a dataset script, remove unwanted splits and load a dataset via `load_dataset`.", "> load_dataset(\"librispeech_asr\", \"clean\", \"train.100\") actually downloads the whole dataset and not the 100 hr split, is this a bug?\r\n\r\nSince this bug is still there and google led me here when I was searching for a solution, I am writing down how to quickly fix it (as suggested by @mariosasko) for whoever else is not familiar with how the HF Hub works.\r\n\r\nDownload the [librispeech_asr.py](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py) script and remove the unwanted splits both from the [`_DL_URLS` dictionary](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py#L47-L68) and from the [`_split_generators` function](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py#L121-L241).\r\n[Here ](https://huggingface.co/datasets/andreagasparini/librispeech_test_only) I made an example with only the test sets.\r\n\r\nThen either save the script locally and load the dataset via \r\n```python\r\nload_dataset(\"${local_path}/librispeech_asr.py\")\r\n```\r\n\r\nor [create a new dataset repo on the hub](https://huggingface.co/new-dataset) named \"librispeech_asr\" and upload the script there, then you can just run\r\n```python\r\nload_dataset(\"${hugging_face_username}/librispeech_asr\")\r\n```", "Fixed by https://github.com/huggingface/datasets/pull/4184" ]
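Note that some snippets quoted above pass the split as a third positional argument; in recent versions of `load_dataset` that slot is `data_dir`, so the split is safer passed as a keyword. A minimal sketch using the split names from this thread, including the `+` syntax for the curriculum-style use case mentioned above:

```python
from datasets import load_dataset

# Start with the 100h subset, then widen to 460h by concatenating splits.
train_100 = load_dataset("librispeech_asr", "clean", split="train.100")
train_460 = load_dataset("librispeech_asr", "clean", split="train.100+train.360")
```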
"2022-04-19T08:45:48Z"
"2022-07-27T16:10:00Z"
"2022-07-27T16:10:00Z"
NONE
null
null
null
## Describe the bug The dataset librispeech_asr (standard Librispeech) fails to load. ## Steps to reproduce the bug ```python datasets.load_dataset("librispeech_asr") ``` ## Expected results It should download and prepare the whole dataset (all subsets). In [the doc](https://huggingface.co/datasets/librispeech_asr), it says it has two configurations (clean and other). However, the dataset doc says that not specifying `split` should just load the whole dataset, which is what I want. Also, in case of this specific dataset, this is also the standard that the community uses. When you look at any publications with results on Librispeech, they always use the whole train dataset for training. ## Actual results ``` ... File "/home/az/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c/librispeech_asr.py", line 119, in LibrispeechASR._split_generators line: archive_path = dl_manager.download(_DL_URLS[self.config.name]) locals: archive_path = <not found> dl_manager = <local> <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160> dl_manager.download = <local> <bound method DownloadManager.download of <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160>> _DL_URLS = <global> {'clean': {'dev': 'http://www.openslr.org/resources/12/dev-clean.tar.gz', 'test': 'http://www.openslr.org/resources/12/test-clean.tar.gz', 'train.100': 'http://www.openslr.org/resources/12/train-clean-100.tar.gz', 'train.360': 'http://www.openslr.org/resources/12/train-clean-360.tar.gz'}, 'other'... self = <local> <datasets_modules.datasets.librispeech_asr.1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c.librispeech_asr.LibrispeechASR object at 0x7fc12a633310> self.config = <local> BuilderConfig(name='default', version=0.0.0, data_dir='/home/az/i6/setups/2022-03-20--sis/work/i6_core/datasets/huggingface/DownloadAndPrepareHuggingFaceDatasetJob.TV6Nwm6dFReF/output/data_dir', data_files=None, description=None) self.config.name = <local> 'default', len = 7 KeyError: 'default' ``` ## Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.31 - Python version: 3.9.9 - PyArrow version: 6.0.1 - Pandas version: 1.4.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4179/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4179/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3071/comments
https://api.github.com/repos/huggingface/datasets/issues/3071/events
https://github.com/huggingface/datasets/issues/3071
1,024,893,493
I_kwDODunzps49FqI1
3,071
Custom plain text dataset, plain json dataset and plain csv dataset are remove from datasets template folder
{ "avatar_url": "https://avatars.githubusercontent.com/u/49173327?v=4", "events_url": "https://api.github.com/users/zixiliuUSC/events{/privacy}", "followers_url": "https://api.github.com/users/zixiliuUSC/followers", "following_url": "https://api.github.com/users/zixiliuUSC/following{/other_user}", "gists_url": "https://api.github.com/users/zixiliuUSC/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zixiliuUSC", "id": 49173327, "login": "zixiliuUSC", "node_id": "MDQ6VXNlcjQ5MTczMzI3", "organizations_url": "https://api.github.com/users/zixiliuUSC/orgs", "received_events_url": "https://api.github.com/users/zixiliuUSC/received_events", "repos_url": "https://api.github.com/users/zixiliuUSC/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zixiliuUSC/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zixiliuUSC/subscriptions", "type": "User", "url": "https://api.github.com/users/zixiliuUSC" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @zixiliuUSC, \r\n\r\nAs explained in the documentation (https://huggingface.co/docs/datasets/loading.html#json), we support loading any dataset in JSON (as well as CSV, text, Parquet) format:\r\n```python\r\nds = load_dataset('json', data_files='my_file.json')\r\n```" ]
"2021-10-13T07:32:10Z"
"2021-10-13T08:27:04Z"
"2021-10-13T08:27:03Z"
NONE
null
null
null
## Adding a Dataset - **Name:** text, json, csv - **Description:** I am developing a customized dataset loading script. The problem is mainly that my custom dataset is separated into many files, and I can only find a dataset loading template in [https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py](https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py) that can handle my circumstance. I'm afraid these templates are too old to use. Could you re-add these three templates to the current master branch?
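For reference, the packaged loaders pointed to in the comment above already cover the multi-file case; a minimal sketch, with hypothetical file names:

```python
from datasets import load_dataset

# data_files may be a single path, a list, or a dict mapping split names
# to one or more files; the same pattern works for "csv" and "text".
ds = load_dataset(
    "json",
    data_files={
        "train": ["train_part1.json", "train_part2.json"],  # hypothetical names
        "validation": "valid.json",
    },
)
```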
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3071/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3071/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6481/comments
https://api.github.com/repos/huggingface/datasets/issues/6481/events
https://github.com/huggingface/datasets/issues/6481
2,032,650,003
I_kwDODunzps55J8cT
6,481
using torchrun, save_to_disk suddenly shows SIGTERM
{ "avatar_url": "https://avatars.githubusercontent.com/u/85916625?v=4", "events_url": "https://api.github.com/users/Ariya12138/events{/privacy}", "followers_url": "https://api.github.com/users/Ariya12138/followers", "following_url": "https://api.github.com/users/Ariya12138/following{/other_user}", "gists_url": "https://api.github.com/users/Ariya12138/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Ariya12138", "id": 85916625, "login": "Ariya12138", "node_id": "MDQ6VXNlcjg1OTE2NjI1", "organizations_url": "https://api.github.com/users/Ariya12138/orgs", "received_events_url": "https://api.github.com/users/Ariya12138/received_events", "repos_url": "https://api.github.com/users/Ariya12138/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Ariya12138/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ariya12138/subscriptions", "type": "User", "url": "https://api.github.com/users/Ariya12138" }
[]
open
false
null
[]
null
[]
"2023-12-08T13:22:03Z"
"2023-12-08T13:22:03Z"
null
NONE
null
null
null
### Describe the bug When I run my code using the "torchrun" command, when the code reaches the "save_to_disk" part, suddenly I get the following warning and error messages: Because the dataset is too large, the "save_to_disk" function splits it into 70 parts for saving. However, an error occurs suddenly when it reaches the 14th shard. WARNING: torch.distributed.elastic.multiprocessing.api: Sending process 2224968 closing signal SIGTERM ERROR: torch.distributed.elastic.multiprocessing.api: failed (exitcode: -7). traceback: Signal 7 (SIGBUS) received by PID 2224967. ### Steps to reproduce the bug ds_shard = ds_shard.map(map_fn, *args, **kwargs) ds_shard.save_to_disk(ds_shard_filepaths[rank]) Saving the dataset (14/70 shards): 20%|██ | 875350/4376702 [00:19<01:53, 30863.15 examples/s] WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2224968 closing signal SIGTERM ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 0 (pid: 2224967) of binary: /home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/python Traceback (most recent call last): File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/torchrun", line 8, in <module> sys.exit(main()) File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper return f(*args, **kwargs) File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main run(args) File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run elastic_launch( File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ========================================================== run.py FAILED ---------------------------------------------------------- Failures: <NO_OTHER_FAILURES> ---------------------------------------------------------- Root Cause (first observed failure): [0]: time : 2023-12-08_20:09:04 rank : 0 (local_rank: 0) exitcode : -7 (pid: 2224967) error_file: <N/A> traceback : Signal 7 (SIGBUS) received by PID 2224967 ### Expected behavior I hope it can save successfully without any issues, but it seems there is a problem. ### Environment info `datasets` version: 2.14.6 - Platform: Linux-4.19.90-24.4.v2101.ky10.aarch64-aarch64-with-glibc2.28 - Python version: 3.10.11 - Huggingface_hub version: 0.17.3 - PyArrow version: 14.0.0 - Pandas version: 2.1.2
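Signal 7 (SIGBUS) during a memory-mapped write most often points to the target filesystem (or shared memory) filling up mid-save rather than to a bug in the script itself. A minimal sketch of one mitigation, assuming a `datasets` release where `save_to_disk` accepts `max_shard_size` and a hypothetical output path:

```python
import os
from datasets import Dataset

# Stand-in for the mapped shard from the report above.
ds_shard = Dataset.from_dict({"text": ["a", "b", "c"]})

rank = int(os.environ.get("RANK", "0"))
# Smaller shards keep any single Arrow file well under the free space of the
# target filesystem; one directory per torchrun rank avoids clobbering.
ds_shard.save_to_disk(f"output/rank_{rank}", max_shard_size="500MB")
```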
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6481/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6481/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4206/comments
https://api.github.com/repos/huggingface/datasets/issues/4206/events
https://github.com/huggingface/datasets/pull/4206
1,212,715,581
PR_kwDODunzps42pJQW
4,206
Add Nerval Metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/49372461?v=4", "events_url": "https://api.github.com/users/mdadda/events{/privacy}", "followers_url": "https://api.github.com/users/mdadda/followers", "following_url": "https://api.github.com/users/mdadda/following{/other_user}", "gists_url": "https://api.github.com/users/mdadda/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mdadda", "id": 49372461, "login": "mdadda", "node_id": "MDQ6VXNlcjQ5MzcyNDYx", "organizations_url": "https://api.github.com/users/mdadda/orgs", "received_events_url": "https://api.github.com/users/mdadda/received_events", "repos_url": "https://api.github.com/users/mdadda/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mdadda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mdadda/subscriptions", "type": "User", "url": "https://api.github.com/users/mdadda" }
[ { "color": "E3165C", "default": false, "description": "", "id": 4190228726, "name": "transfer-to-evaluate", "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate" } ]
closed
false
null
[]
null
[ "Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate" ]
"2022-04-22T19:45:00Z"
"2023-07-11T09:34:56Z"
"2023-07-11T09:34:55Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4206.diff", "html_url": "https://github.com/huggingface/datasets/pull/4206", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4206.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4206" }
This PR adds readme.md and ner_val.py to metrics. Nerval is a Python package that helps evaluate NER models. It creates a classification report and a confusion matrix at the entity level.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4206/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4206/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5541/comments
https://api.github.com/repos/huggingface/datasets/issues/5541/events
https://github.com/huggingface/datasets/issues/5541
1,588,633,555
I_kwDODunzps5esJ_T
5,541
Flattening indices in selected datasets is extremely inefficient
{ "avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4", "events_url": "https://api.github.com/users/marioga/events{/privacy}", "followers_url": "https://api.github.com/users/marioga/followers", "following_url": "https://api.github.com/users/marioga/following{/other_user}", "gists_url": "https://api.github.com/users/marioga/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marioga", "id": 6591505, "login": "marioga", "node_id": "MDQ6VXNlcjY1OTE1MDU=", "organizations_url": "https://api.github.com/users/marioga/orgs", "received_events_url": "https://api.github.com/users/marioga/received_events", "repos_url": "https://api.github.com/users/marioga/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marioga/subscriptions", "type": "User", "url": "https://api.github.com/users/marioga" }
[]
closed
false
null
[]
null
[ "Running the script above on the branch https://github.com/huggingface/datasets/pull/5542 results in the expected behaviour:\r\n```\r\nNum chunks for original ds: 1\r\nOriginal ds save/load\r\nsave_to_disk -- RAM memory used: 0.671875 MB -- Total time: 0.255265 s\r\nload_from_disk -- RAM memory used: 42.796875 MB -- Total time: 0.014899 s\r\nNum chunks for original ds after reloading: 5000\r\n\r\nNum chunks for selected ds: 1\r\nflatten_indices -- RAM memory used: 42.546875 MB -- Total time: 23.735089 s\r\nNum chunks for selected ds after flattening: 5000\r\n\r\nSelected ds save/load\r\nsave_to_disk -- RAM memory used: 0.0 MB -- Total time: 0.287112 s\r\nload_from_disk -- RAM memory used: 38.84375 MB -- Total time: 0.014772 s\r\nNum chunks for selected ds after reloading: 5000\r\n```", "Wouahouh super cool @marioga thanks a lot!", "We just released `datasets==2.10.0` with this big improvement, thanks again @marioga " ]
"2023-02-17T01:52:24Z"
"2023-02-22T13:15:20Z"
"2023-02-17T11:12:33Z"
CONTRIBUTOR
null
null
null
### Describe the bug If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset , we end up with a dataset with an `indices_table`. Currently, flattening such dataset consumes a lot of memory and the resulting flat dataset contains ChunkedArrays with as many chunks as there are rows. This is extremely inefficient and slows down the operations on the flat dataset, e.g., saving/loading the dataset to disk becomes really slow. Perhaps more importantly, loading the dataset back from disk basically loads the whole table into RAM, as it cannot take advantage of memory mapping. ### Steps to reproduce the bug The following script reproduces the issue: ```python import gc import os import psutil import tempfile import time from datasets import Dataset DATASET_SIZE = 5000000 def profile(func): def wrapper(*args, **kwargs): mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) start = time.time() # Run function here out = func(*args, **kwargs) end = time.time() mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024) print(f"{func.__name__} -- RAM memory used: {mem_after - mem_before} MB -- Total time: {end - start:.6f} s") return out return wrapper def main(): ds = Dataset.from_list([{'col': i} for i in range(DATASET_SIZE)]) print(f"Num chunks for original ds: {ds.data['col'].num_chunks}") with tempfile.TemporaryDirectory() as tmpdir: path1 = os.path.join(tmpdir, 'ds1') print("Original ds save/load") profile(ds.save_to_disk)(path1) ds_loaded = profile(Dataset.load_from_disk)(path1) print(f"Num chunks for original ds after reloading: {ds_loaded.data['col'].num_chunks}") print("") ds_select = ds.select(reversed(range(len(ds)))) print(f"Num chunks for selected ds: {ds_select.data['col'].num_chunks}") del ds del ds_loaded gc.collect() # This would happen anyway when we call save_to_disk ds_select = profile(ds_select.flatten_indices)() print(f"Num chunks for selected ds after flattening: {ds_select.data['col'].num_chunks}") print("") path2 = os.path.join(tmpdir, 'ds2') print("Selected ds save/load") profile(ds_select.save_to_disk)(path2) del ds_select gc.collect() ds_select_loaded = profile(Dataset.load_from_disk)(path2) print(f"Num chunks for selected ds after reloading: {ds_select_loaded.data['col'].num_chunks}") if __name__ == '__main__': main() ``` Sample result: ``` Num chunks for original ds: 1 Original ds save/load save_to_disk -- RAM memory used: 0.515625 MB -- Total time: 0.253888 s load_from_disk -- RAM memory used: 42.765625 MB -- Total time: 0.015176 s Num chunks for original ds after reloading: 5000 Num chunks for selected ds: 1 flatten_indices -- RAM memory used: 4852.609375 MB -- Total time: 46.116774 s Num chunks for selected ds after flattening: 5000000 Selected ds save/load save_to_disk -- RAM memory used: 1326.65625 MB -- Total time: 42.309825 s load_from_disk -- RAM memory used: 2085.953125 MB -- Total time: 11.659137 s Num chunks for selected ds after reloading: 5000000 ``` ### Expected behavior Saving/loading the dataset should be much faster and consume almost no extra memory thanks to pyarrow memory mapping. ### Environment info - `datasets` version: 2.9.1.dev0 - Platform: macOS-13.1-arm64-arm-64bit - Python version: 3.10.8 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
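For readers hitting this on older releases, a minimal sketch of the pattern the script profiles — materializing the indices mapping explicitly before saving (on `datasets>=2.10` this step is fast and memory-friendly), with hypothetical paths:

```python
from datasets import Dataset

ds = Dataset.from_dict({"col": list(range(1000))})
ds_select = ds.select(reversed(range(len(ds))))

# save_to_disk would flatten implicitly; doing it explicitly makes the cost visible.
ds_flat = ds_select.flatten_indices()
ds_flat.save_to_disk("ds_flat")               # hypothetical output directory
reloaded = Dataset.load_from_disk("ds_flat")  # memory-mapped on reload
```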
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5541/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5541/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5459
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5459/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5459/comments
https://api.github.com/repos/huggingface/datasets/issues/5459/events
https://github.com/huggingface/datasets/pull/5459
1,555,367,504
PR_kwDODunzps5Icjwe
5,459
Disable aiohttp requoting of redirection URL
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Comment by @lhoestq:\r\n> Do you think we need this in `datasets` if it's fixed on the moon landing side ? In the aiohttp doc they consider those symbols as \"non-safe\" ", "The lib `requests` does not perform that requote on redirect URLs.", "Indeed, the `requests` library does perform a requoting, but this does not unquote `%27`:\r\n```python\r\nIn [1]: from requests.utils import requote_uri\r\n\r\nIn [2]: url = \"https://netloc/path?param=param%27%27value\"\r\n\r\nIn [3]: url\r\nOut[3]: 'https://netloc/path?param=param%27%27value'\r\n\r\nIn [4]: requote_uri(url)\r\nOut[4]: 'https://netloc/path?param=param%27%27value'\r\n```\r\n\r\nHowever, the `aiohttp` library uses `yarl.ULR` and this does unquote `%27`:\r\n```python\r\nIn [5]: from yarl import URL\r\n\r\nIn [6]: url\r\nOut[6]: 'https://netloc/path?param=param%27%27value'\r\n\r\nIn [7]: str(URL(url))\r\nOut[7]: \"https://netloc/path?param=param''value\"\r\n```\r\n\r\nIf we pass `requote_redirect_url=False` to `aiohttp`, then it passes `encoded=True` to `yarl.ULR`: https://github.com/aio-libs/aiohttp/blob/4635161ee8e7ad321cca46e01ce5bfeb1ad8bf26/aiohttp/client.py#L578-L580\r\n```python\r\nparsed_url = URL(\r\n r_url, encoded=not self._requote_redirect_url\r\n)\r\n```\r\nwhich does not unquote `%27`:\r\n```python\r\nIn [8]: url\r\nOut[8]: 'https://netloc/path?param=param%27%27value'\r\n\r\nIn [9]: str(URL(url, encoded=True))\r\nOut[9]: 'https://netloc/path?param=param%27%27value'\r\n```", "See the issues we opened in the respective libraries:\r\n- aiohttp\r\n - aio-libs/aiohttp#7183\r\n- requests\r\n - psf/requests#6341", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012399 / 0.011353 (0.001047) | 0.006388 / 0.011008 (-0.004620) | 0.134173 / 0.038508 (0.095665) | 0.037059 / 0.023109 (0.013949) | 0.420697 / 0.275898 (0.144799) | 0.473981 / 0.323480 (0.150502) | 0.009857 / 0.007986 (0.001871) | 0.004791 / 0.004328 (0.000463) | 0.106886 / 0.004250 (0.102636) | 0.044871 / 0.037052 (0.007818) | 0.429843 / 0.258489 (0.171354) | 0.461569 / 0.293841 (0.167728) | 0.057285 / 0.128546 (-0.071261) | 0.018809 / 0.075646 (-0.056837) | 0.432613 / 0.419271 (0.013342) | 0.058086 / 0.043533 (0.014553) | 0.413064 / 0.255139 (0.157925) | 0.444407 / 
0.283200 (0.161207) | 0.119102 / 0.141683 (-0.022581) | 1.875954 / 1.452155 (0.423799) | 1.916392 / 1.492716 (0.423676) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267489 / 0.018006 (0.249483) | 0.567554 / 0.000490 (0.567064) | 0.005901 / 0.000200 (0.005701) | 0.000134 / 0.000054 (0.000079) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031248 / 0.037411 (-0.006164) | 0.123014 / 0.014526 (0.108489) | 0.140001 / 0.176557 (-0.036556) | 0.191476 / 0.737135 (-0.545659) | 0.141687 / 0.296338 (-0.154652) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.637481 / 0.215209 (0.422272) | 6.255969 / 2.077655 (4.178314) | 2.559811 / 1.504120 (1.055691) | 2.118154 / 1.541195 (0.576960) | 2.079487 / 1.468490 (0.610997) | 1.201079 / 4.584777 (-3.383698) | 5.592625 / 3.745712 (1.846913) | 5.143344 / 5.269862 (-0.126517) | 2.764716 / 4.565676 (-1.800960) | 0.142539 / 0.424275 (-0.281736) | 0.015541 / 0.007607 (0.007934) | 0.771407 / 0.226044 (0.545363) | 7.631657 / 2.268929 (5.362728) | 3.279684 / 55.444624 (-52.164940) | 2.587566 / 6.876477 (-4.288911) | 2.624622 / 2.142072 (0.482549) | 1.427878 / 4.805227 (-3.377350) | 0.257759 / 6.500664 (-6.242906) | 0.078616 / 0.075469 (0.003147) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.609305 / 1.841788 (-0.232483) | 18.258792 / 8.074308 (10.184484) | 20.345242 / 10.191392 (10.153850) | 0.267366 / 0.680424 (-0.413058) | 0.047035 / 0.534201 (-0.487166) | 0.568881 / 0.579283 (-0.010402) | 0.662763 / 0.434364 (0.228399) | 0.668927 / 0.540337 (0.128590) | 0.755766 / 1.386936 (-0.631170) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | 
read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010017 / 0.011353 (-0.001336) | 0.006816 / 0.011008 (-0.004192) | 0.105038 / 0.038508 (0.066529) | 0.038689 / 0.023109 (0.015580) | 0.482113 / 0.275898 (0.206215) | 0.540072 / 0.323480 (0.216592) | 0.007738 / 0.007986 (-0.000248) | 0.005134 / 0.004328 (0.000806) | 0.102203 / 0.004250 (0.097953) | 0.054080 / 0.037052 (0.017028) | 0.501057 / 0.258489 (0.242568) | 0.567186 / 0.293841 (0.273345) | 0.060330 / 0.128546 (-0.068217) | 0.020059 / 0.075646 (-0.055587) | 0.123102 / 0.419271 (-0.296170) | 0.063426 / 0.043533 (0.019893) | 0.494171 / 0.255139 (0.239032) | 0.538238 / 0.283200 (0.255039) | 0.119613 / 0.141683 (-0.022069) | 1.853728 / 1.452155 (0.401574) | 1.984621 / 1.492716 (0.491904) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282511 / 0.018006 (0.264505) | 0.563190 / 0.000490 (0.562700) | 0.000465 / 0.000200 (0.000265) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029267 / 0.037411 (-0.008144) | 0.135618 / 0.014526 (0.121093) | 0.146286 / 0.176557 (-0.030271) | 0.188570 / 0.737135 (-0.548565) | 0.155839 / 0.296338 (-0.140499) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.671660 / 0.215209 (0.456451) | 6.718775 / 2.077655 (4.641120) | 3.004601 / 1.504120 (1.500481) | 2.640504 / 1.541195 (1.099309) | 2.666788 / 1.468490 (1.198298) | 1.242655 / 4.584777 (-3.342122) | 5.780119 / 3.745712 (2.034407) | 3.247935 / 5.269862 (-2.021927) | 2.114007 / 4.565676 (-2.451669) | 0.147546 / 0.424275 (-0.276729) | 0.014408 / 0.007607 (0.006801) | 0.824407 / 0.226044 (0.598362) | 8.278185 / 2.268929 (6.009257) | 3.733463 / 55.444624 (-51.711161) | 2.976732 / 6.876477 (-3.899745) | 3.132758 / 2.142072 (0.990686) | 1.446095 / 4.805227 (-3.359132) | 0.258628 / 6.500664 (-6.242036) | 0.085513 / 0.075469 (0.010043) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow 
|\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.702681 / 1.841788 (-0.139106) | 18.725123 / 8.074308 (10.650815) | 19.622808 / 10.191392 (9.431416) | 0.215845 / 0.680424 (-0.464579) | 0.029246 / 0.534201 (-0.504955) | 0.554819 / 0.579283 (-0.024464) | 0.630926 / 0.434364 (0.196562) | 0.637663 / 0.540337 (0.097325) | 0.837948 / 1.386936 (-0.548988) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c4a4f96ef0a4ec4b25f0872f160fa1eb9d2e711c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008540 / 0.011353 (-0.002813) | 0.004538 / 0.011008 (-0.006470) | 0.101507 / 0.038508 (0.062999) | 0.029751 / 0.023109 (0.006641) | 0.292608 / 0.275898 (0.016710) | 0.354734 / 0.323480 (0.031254) | 0.007430 / 0.007986 (-0.000556) | 0.003365 / 0.004328 (-0.000964) | 0.078703 / 0.004250 (0.074452) | 0.034858 / 0.037052 (-0.002194) | 0.303518 / 0.258489 (0.045029) | 0.336523 / 0.293841 (0.042682) | 0.033741 / 0.128546 (-0.094805) | 0.011460 / 0.075646 (-0.064186) | 0.319551 / 0.419271 (-0.099721) | 0.041102 / 0.043533 (-0.002431) | 0.295914 / 0.255139 (0.040775) | 0.322142 / 0.283200 (0.038943) | 0.084694 / 0.141683 (-0.056989) | 1.481308 / 1.452155 (0.029153) | 1.530271 / 1.492716 (0.037554) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180516 / 0.018006 (0.162510) | 0.405741 / 0.000490 (0.405251) | 0.002806 / 0.000200 (0.002606) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023359 / 0.037411 (-0.014052) | 0.096950 / 0.014526 (0.082424) | 0.103991 / 0.176557 (-0.072566) | 0.143700 / 0.737135 (-0.593435) | 0.106764 / 0.296338 (-0.189575) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | 
shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416966 / 0.215209 (0.201757) | 4.145601 / 2.077655 (2.067946) | 1.838258 / 1.504120 (0.334139) | 1.629396 / 1.541195 (0.088201) | 1.649707 / 1.468490 (0.181217) | 0.689624 / 4.584777 (-3.895153) | 3.414584 / 3.745712 (-0.331129) | 1.874295 / 5.269862 (-3.395566) | 1.251930 / 4.565676 (-3.313746) | 0.081782 / 0.424275 (-0.342493) | 0.012868 / 0.007607 (0.005261) | 0.523904 / 0.226044 (0.297859) | 5.251032 / 2.268929 (2.982104) | 2.301549 / 55.444624 (-53.143075) | 1.942110 / 6.876477 (-4.934367) | 2.023014 / 2.142072 (-0.119058) | 0.816492 / 4.805227 (-3.988736) | 0.150107 / 6.500664 (-6.350558) | 0.065118 / 0.075469 (-0.010351) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226433 / 1.841788 (-0.615355) | 13.852569 / 8.074308 (5.778261) | 13.862779 / 10.191392 (3.671387) | 0.146361 / 0.680424 (-0.534062) | 0.028652 / 0.534201 (-0.505549) | 0.398251 / 0.579283 (-0.181032) | 0.403590 / 0.434364 (-0.030774) | 0.492184 / 0.540337 (-0.048154) | 0.581040 / 1.386936 (-0.805896) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006859 / 0.011353 (-0.004494) | 0.004632 / 0.011008 (-0.006376) | 0.076653 / 0.038508 (0.038145) | 0.027865 / 0.023109 (0.004755) | 0.354472 / 0.275898 (0.078573) | 0.385462 / 0.323480 (0.061982) | 0.005125 / 0.007986 (-0.002861) | 0.003420 / 0.004328 (-0.000909) | 0.076018 / 0.004250 (0.071768) | 0.040197 / 0.037052 (0.003144) | 0.353675 / 0.258489 (0.095186) | 0.394911 / 0.293841 (0.101070) | 0.032909 / 0.128546 (-0.095637) | 0.011713 / 0.075646 (-0.063933) | 0.085921 / 0.419271 (-0.333350) | 0.044462 / 0.043533 (0.000929) | 0.349997 / 0.255139 (0.094858) | 0.375207 / 0.283200 (0.092008) | 0.091288 / 0.141683 (-0.050394) | 1.536515 / 
1.452155 (0.084361) | 1.581878 / 1.492716 (0.089162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273284 / 0.018006 (0.255277) | 0.424457 / 0.000490 (0.423967) | 0.044659 / 0.000200 (0.044459) | 0.000247 / 0.000054 (0.000192) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025473 / 0.037411 (-0.011938) | 0.100014 / 0.014526 (0.085488) | 0.108551 / 0.176557 (-0.068006) | 0.147913 / 0.737135 (-0.589223) | 0.112729 / 0.296338 (-0.183610) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448162 / 0.215209 (0.232953) | 4.472701 / 2.077655 (2.395046) | 2.078384 / 1.504120 (0.574264) | 1.861292 / 1.541195 (0.320097) | 1.920482 / 1.468490 (0.451991) | 0.706968 / 4.584777 (-3.877809) | 3.433109 / 3.745712 (-0.312603) | 1.898684 / 5.269862 (-3.371178) | 1.174375 / 4.565676 (-3.391302) | 0.083666 / 0.424275 (-0.340609) | 0.012388 / 0.007607 (0.004781) | 0.546011 / 0.226044 (0.319966) | 5.487514 / 2.268929 (3.218585) | 2.534124 / 55.444624 (-52.910500) | 2.168441 / 6.876477 (-4.708036) | 2.203458 / 2.142072 (0.061386) | 0.813333 / 4.805227 (-3.991894) | 0.153169 / 6.500664 (-6.347495) | 0.067151 / 0.075469 (-0.008318) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277815 / 1.841788 (-0.563972) | 13.920545 / 8.074308 (5.846237) | 13.473801 / 10.191392 (3.282409) | 0.129035 / 0.680424 (-0.551389) | 0.016737 / 0.534201 (-0.517464) | 0.388413 / 0.579283 (-0.190870) | 0.388785 / 0.434364 (-0.045579) | 0.481735 / 0.540337 (-0.058602) | 0.576390 / 1.386936 (-0.810546) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c4a4f96ef0a4ec4b25f0872f160fa1eb9d2e711c \"CML watermark\")\n" ]
"2023-01-24T17:18:59Z"
"2023-02-01T08:45:33Z"
"2023-01-31T08:37:54Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5459.diff", "html_url": "https://github.com/huggingface/datasets/pull/5459", "merged_at": "2023-01-31T08:37:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/5459.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5459" }
The library `aiohttp` performs a requoting of redirection URLs that unquotes the single quotation mark character: `%27` => `'`. This is a problem for our Hugging Face Hub, which requires the exact URL from the Location header. Specifically, in the query component of the URL (`https://netloc/path?query`), the value for `response-content-disposition` contains `%27`: ``` response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27sample.jsonl.gz%3B+filename%3D%22sample.jsonl.gz%22%3B ``` and after the requoting, the `%27` characters get unquoted to `'`: ``` response-content-disposition=attachment%3B+filename*%3DUTF-8''sample.jsonl.gz%3B+filename%3D%22sample.jsonl.gz%22%3B ``` This PR disables the `aiohttp` requoting of redirection URLs.
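A minimal sketch of the `aiohttp` option involved, shown standalone for illustration (the PR applies the equivalent setting inside the library's HTTP layer):

```python
import asyncio
import aiohttp

async def fetch(url: str) -> bytes:
    # requote_redirect_url=False makes aiohttp follow the Location header
    # verbatim, so %27 in the query string is not unquoted to "'".
    async with aiohttp.ClientSession(requote_redirect_url=False) as session:
        async with session.get(url) as resp:
            return await resp.read()

# asyncio.run(fetch("https://example.com/redirecting-url"))  # example usage
```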
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5459/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5459/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5343/comments
https://api.github.com/repos/huggingface/datasets/issues/5343/events
https://github.com/huggingface/datasets/issues/5343
1,485,297,823
I_kwDODunzps5Yh9if
5,343
T5 for Q&A produces truncated sentence
{ "avatar_url": "https://avatars.githubusercontent.com/u/13484072?v=4", "events_url": "https://api.github.com/users/junyongyou/events{/privacy}", "followers_url": "https://api.github.com/users/junyongyou/followers", "following_url": "https://api.github.com/users/junyongyou/following{/other_user}", "gists_url": "https://api.github.com/users/junyongyou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/junyongyou", "id": 13484072, "login": "junyongyou", "node_id": "MDQ6VXNlcjEzNDg0MDcy", "organizations_url": "https://api.github.com/users/junyongyou/orgs", "received_events_url": "https://api.github.com/users/junyongyou/received_events", "repos_url": "https://api.github.com/users/junyongyou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/junyongyou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/junyongyou/subscriptions", "type": "User", "url": "https://api.github.com/users/junyongyou" }
[]
closed
false
null
[]
null
[]
"2022-12-08T19:48:46Z"
"2022-12-08T19:57:17Z"
"2022-12-08T19:57:17Z"
NONE
null
null
null
Dear all, I am fine-tuning T5 for a Q&A task using the MedQuAD ([GitHub - abachaa/MedQuAD: Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites](https://github.com/abachaa/MedQuAD)) dataset. In the dataset, there are many long answers with thousands of words. I have used pytorch_lightning to train the T5-large model. I have two questions. For example, I set both the max_length, max_input_length, max_output_length to 128. How should I deal with those long answers? I just left them as is and the T5Tokenizer can automatically handle them. I would assume the tokenizer just truncates an answer at the position of the 128th word (or 127th). Is it possible to manually split an answer into different parts, each part having 128 words, so that all these sub-answers serve as separate answers to the same question? Another question is that I get incomplete (truncated) answers when using the fine-tuned model at inference, even though the predicted answer is shorter than 128 words. I found a message posted 2 years ago saying that one should add `</s>` at the end of texts when fine-tuning T5. I followed that but then got a warning message that duplicated `</s>` tokens were found. I am assuming that this is because the tokenizer truncates an answer text, thus `</s>` is missing in the truncated answer, such that the end token is not produced in the predicted answer. However, I am not sure. Can anybody point out how to address this issue? Any suggestions are highly appreciated. Below is a code snippet. ` import pytorch_lightning as pl from torch.utils.data import DataLoader import torch import numpy as np import time from pathlib import Path from transformers import ( Adafactor, T5ForConditionalGeneration, T5Tokenizer, get_linear_schedule_with_warmup ) from torch.utils.data import RandomSampler from question_answering.utils import * class T5FineTuner(pl.LightningModule): def __init__(self, hyparams): super(T5FineTuner, self).__init__() self.hyparams = hyparams self.model = T5ForConditionalGeneration.from_pretrained(hyparams.model_name_or_path) self.tokenizer = T5Tokenizer.from_pretrained(hyparams.tokenizer_name_or_path) if self.hyparams.freeze_embeds: self.freeze_embeds() if self.hyparams.freeze_encoder: self.freeze_params(self.model.get_encoder()) # assert_all_frozen() self.step_count = 0 self.output_dir = Path(self.hyparams.output_dir) n_observations_per_split = { 'train': self.hyparams.n_train, 'validation': self.hyparams.n_val, 'test': self.hyparams.n_test } self.n_obs = {k: v if v >= 0 else None for k, v in n_observations_per_split.items()} self.em_score_list = [] self.subset_score_list = [] data_folder = r'C:\Datasets\MedQuAD-master' self.train_data, self.val_data, self.test_data = load_medqa_data(data_folder) def freeze_params(self, model): for param in model.parameters(): param.requires_grad = False def freeze_embeds(self): try: self.freeze_params(self.model.model.shared) for d in [self.model.model.encoder, self.model.model.decoder]: self.freeze_params(d.embed_positions) self.freeze_params(d.embed_tokens) except AttributeError: self.freeze_params(self.model.shared) for d in [self.model.encoder, self.model.decoder]: self.freeze_params(d.embed_tokens) def lmap(self, f, x): return list(map(f, x)) def is_logger(self): return self.trainer.proc_rank <= 0 def forward(self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, labels=None): return self.model( input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, 
labels=labels ) def _step(self, batch): labels = batch['target_ids'] labels[labels[:, :] == self.tokenizer.pad_token_id] = -100 outputs = self( input_ids = batch['source_ids'], attention_mask=batch['source_mask'], labels=labels, decoder_attention_mask=batch['target_mask'] ) loss = outputs[0] return loss def ids_to_clean_text(self, generated_ids): gen_text = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) return self.lmap(str.strip, gen_text) def _generative_step(self, batch): t0 = time.time() generated_ids = self.model.generate( batch["source_ids"], attention_mask=batch["source_mask"], use_cache=True, decoder_attention_mask=batch['target_mask'], max_length=128, num_beams=2, early_stopping=True ) preds = self.ids_to_clean_text(generated_ids) targets = self.ids_to_clean_text(batch["target_ids"]) gen_time = (time.time() - t0) / batch["source_ids"].shape[0] loss = self._step(batch) base_metrics = {'val_loss': loss} summ_len = np.mean(self.lmap(len, generated_ids)) base_metrics.update(gen_time=gen_time, gen_len=summ_len, preds=preds, target=targets) em_score, subset_match_score = calculate_scores(preds, targets) self.em_score_list.append(em_score) self.subset_score_list.append(subset_match_score) em_score = torch.tensor(em_score, dtype=torch.float32) subset_match_score = torch.tensor(subset_match_score, dtype=torch.float32) base_metrics.update(em_score=em_score, subset_match_score=subset_match_score) # rouge_results = self.rouge_metric.compute() # rouge_dict = self.parse_score(rouge_results) return base_metrics def training_step(self, batch, batch_idx): loss = self._step(batch) tensorboard_logs = {'train_loss': loss} return {'loss': loss, 'log': tensorboard_logs} def training_epoch_end(self, outputs): avg_train_loss = torch.stack([x['loss'] for x in outputs]).mean() tensorboard_logs = {'avg_train_loss': avg_train_loss} # return {'avg_train_loss': avg_train_loss, 'log': tensorboard_logs, 'progress_bar': tensorboard_logs} def validation_step(self, batch, batch_idx): return self._generative_step(batch) def validation_epoch_end(self, outputs): avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean() tensorboard_logs = {'val_loss': avg_loss} if len(self.em_score_list) <= 2: average_em_score = sum(self.em_score_list) / len(self.em_score_list) average_subset_match_score = sum(self.subset_score_list) / len(self.subset_score_list) else: latest_em_score = self.em_score_list[:-2] latest_subset_score = self.subset_score_list[:-2] average_em_score = sum(latest_em_score) / len(latest_em_score) average_subset_match_score = sum(latest_subset_score) / len(latest_subset_score) average_em_score = torch.tensor(average_em_score, dtype=torch.float32) average_subset_match_score = torch.tensor(average_subset_match_score, dtype=torch.float32) tensorboard_logs.update(em_score=average_em_score, subset_match_score=average_subset_match_score) self.target_gen = [] self.prediction_gen = [] return { 'avg_val_loss': avg_loss, 'em_score': average_em_score, 'subset_match_socre': average_subset_match_score, 'log': tensorboard_logs, 'progress_bar': tensorboard_logs } def configure_optimizers(self): model = self.model no_decay = ["bias", "LayerNorm.weight"] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], "weight_decay": self.hyparams.weight_decay, }, { "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0, }, ] optimizer = 
Adafactor(optimizer_grouped_parameters, lr=self.hyparams.learning_rate, scale_parameter=False, relative_step=False) self.opt = optimizer return [optimizer] def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure=None, on_tpu=False, using_native_amp=False, using_lbfgs=False): optimizer.step(closure=optimizer_closure) optimizer.zero_grad() self.lr_scheduler.step() def get_tqdm_dict(self): tqdm_dict = {"loss": "{:.3f}".format(self.trainer.avg_loss), "lr": self.lr_scheduler.get_last_lr()[-1]} return tqdm_dict def train_dataloader(self): n_samples = self.n_obs['train'] train_dataset = get_dataset(tokenizer=self.tokenizer, data=self.train_data, num_samples=n_samples, args=self.hyparams) sampler = RandomSampler(train_dataset) dataloader = DataLoader(train_dataset, sampler=sampler, batch_size=self.hyparams.train_batch_size, drop_last=True, num_workers=4) # t_total = ( # (len(dataloader.dataset) // (self.hyparams.train_batch_size * max(1, self.hyparams.n_gpu))) # // self.hyparams.gradient_accumulation_steps # * float(self.hyparams.num_train_epochs) # ) t_total = 100000 scheduler = get_linear_schedule_with_warmup( self.opt, num_warmup_steps=self.hyparams.warmup_steps, num_training_steps=t_total ) self.lr_scheduler = scheduler return dataloader def val_dataloader(self): n_samples = self.n_obs['validation'] validation_dataset = get_dataset(tokenizer=self.tokenizer, data=self.val_data, num_samples=n_samples, args=self.hyparams) sampler = RandomSampler(validation_dataset) return DataLoader(validation_dataset, shuffle=False, batch_size=self.hyparams.eval_batch_size, sampler=sampler, num_workers=4) def test_dataloader(self): n_samples = self.n_obs['test'] test_dataset = get_dataset(tokenizer=self.tokenizer, data=self.test_data, num_samples=n_samples, args=self.hyparams) return DataLoader(test_dataset, batch_size=self.hyparams.eval_batch_size, num_workers=4) def on_save_checkpoint(self, checkpoint): save_path = self.output_dir.joinpath("best_tfmr") self.model.config.save_step = self.step_count self.model.save_pretrained(save_path) self.tokenizer.save_pretrained(save_path) import os import argparse import pytorch_lightning as pl from question_answering.t5_closed_book import T5FineTuner if __name__ == '__main__': args_dict = dict( output_dir="", # path to save the checkpoints model_name_or_path='t5-large', tokenizer_name_or_path='t5-large', max_input_length=128, max_output_length=128, freeze_encoder=False, freeze_embeds=False, learning_rate=1e-5, weight_decay=0.0, adam_epsilon=1e-8, warmup_steps=0, train_batch_size=4, eval_batch_size=4, num_train_epochs=2, gradient_accumulation_steps=10, n_gpu=1, resume_from_checkpoint=None, val_check_interval=0.5, n_val=4000, n_train=-1, n_test=-1, early_stop_callback=False, fp_16=False, opt_level='O1', max_grad_norm=1.0, seed=101, ) args_dict.update({'output_dir': 't5_large_MedQuAD_256', 'num_train_epochs': 100, 'train_batch_size': 16, 'eval_batch_size': 16, 'learning_rate': 1e-3}) args = argparse.Namespace(**args_dict) checkpoint_callback = pl.callbacks.ModelCheckpoint(dirpath=args.output_dir, monitor="em_score", mode="max", save_top_k=1) ## If resuming from checkpoint, add an arg resume_from_checkpoint train_params = dict( accumulate_grad_batches=args.gradient_accumulation_steps, gpus=args.n_gpu, max_epochs=args.num_train_epochs, # early_stop_callback=False, precision=16 if args.fp_16 else 32, # amp_level=args.opt_level, # resume_from_checkpoint=args.resume_from_checkpoint, gradient_clip_val=args.max_grad_norm, 
checkpoint_callback=checkpoint_callback, val_check_interval=args.val_check_interval, # accelerator='dp' # logger=wandb_logger, # callbacks=[LoggingCallback()], ) model = T5FineTuner(args) trainer = pl.Trainer(**train_params) trainer.fit(model) `
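For the first question, a minimal sketch of the manual-splitting idea (this is a hypothetical helper, not part of the issue's code; the `stride` overlap and the example structure are assumptions for illustration):

```python
# Hypothetical helper, not from the issue: window a long answer into
# overlapping chunks of at most `max_len` tokens so that each chunk can be
# paired with the same question as a separate training example.
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-large")

def split_answer(question, answer, max_len=128, stride=64):
    """Return one (question, answer chunk) example per window of the answer."""
    token_ids = tokenizer(answer, add_special_tokens=False).input_ids
    examples = []
    for start in range(0, max(len(token_ids), 1), stride):
        chunk = tokenizer.decode(token_ids[start:start + max_len])
        examples.append({"question": question, "answer": chunk})
        if start + max_len >= len(token_ids):
            break
    return examples

chunks = split_answer("What causes glaucoma?", "A very long answer. " * 400)
print(len(chunks))  # several overlapping 128-token chunks of the same answer
```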
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5343/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5343/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3466
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3466/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3466/comments
https://api.github.com/repos/huggingface/datasets/issues/3466/events
https://github.com/huggingface/datasets/pull/3466
1,085,722,837
PR_kwDODunzps4wII3w
3,466
Add CRASS dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/68908804?v=4", "events_url": "https://api.github.com/users/apergo-ai/events{/privacy}", "followers_url": "https://api.github.com/users/apergo-ai/followers", "following_url": "https://api.github.com/users/apergo-ai/following{/other_user}", "gists_url": "https://api.github.com/users/apergo-ai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/apergo-ai", "id": 68908804, "login": "apergo-ai", "node_id": "MDQ6VXNlcjY4OTA4ODA0", "organizations_url": "https://api.github.com/users/apergo-ai/orgs", "received_events_url": "https://api.github.com/users/apergo-ai/received_events", "repos_url": "https://api.github.com/users/apergo-ai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/apergo-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apergo-ai/subscriptions", "type": "User", "url": "https://api.github.com/users/apergo-ai" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "Hi Albert,\r\nThank you for your comments.\r\nI hope I have uploaded my local git repo to include the dummy files and style reworkings.\r\nAdded YAML in Readme as well.\r\n\r\nPlease check again.\r\n\r\nHope it works now :)", "Thanks for your contribution, @apergo-ai. \r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. It's OK for you? Please, feel free to tell us if you need some help." ]
"2021-12-21T11:17:22Z"
"2022-10-03T09:37:06Z"
"2022-10-03T09:37:06Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3466.diff", "html_url": "https://github.com/huggingface/datasets/pull/3466", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3466.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3466" }
Added the CRASS dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3466/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3466/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4135/comments
https://api.github.com/repos/huggingface/datasets/issues/4135/events
https://github.com/huggingface/datasets/pull/4135
1,198,307,610
PR_kwDODunzps416-Rn
4,135
Support streaming xtreme dataset for PAN-X config
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-04-09T06:19:48Z"
"2022-05-06T08:39:40Z"
"2022-04-11T06:54:14Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4135.diff", "html_url": "https://github.com/huggingface/datasets/pull/4135", "merged_at": "2022-04-11T06:54:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/4135.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4135" }
Support streaming xtreme dataset for PAN-X config.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4135/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4135/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1376
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1376/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1376/comments
https://api.github.com/repos/huggingface/datasets/issues/1376/events
https://github.com/huggingface/datasets/pull/1376
760,309,300
MDExOlB1bGxSZXF1ZXN0NTM1MTYyODU4
1,376
Add SETimes Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abhishekkrthakur", "id": 1183441, "login": "abhishekkrthakur", "node_id": "MDQ6VXNlcjExODM0NDE=", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "type": "User", "url": "https://api.github.com/users/abhishekkrthakur" }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
"2020-12-09T13:01:08Z"
"2020-12-10T16:11:57Z"
"2020-12-10T16:11:56Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1376.diff", "html_url": "https://github.com/huggingface/datasets/pull/1376", "merged_at": "2020-12-10T16:11:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/1376.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1376" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1376/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1376/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/84
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/84/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/84/comments
https://api.github.com/repos/huggingface/datasets/issues/84/events
https://github.com/huggingface/datasets/pull/84
617,249,815
MDExOlB1bGxSZXF1ZXN0NDE3MjAxODcz
84
[TedHrLr] add left dummy data
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[]
"2020-05-13T08:27:20Z"
"2020-05-13T08:29:22Z"
"2020-05-13T08:29:21Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/84.diff", "html_url": "https://github.com/huggingface/datasets/pull/84", "merged_at": "2020-05-13T08:29:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/84.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/84" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/84/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/84/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1655/comments
https://api.github.com/repos/huggingface/datasets/issues/1655/events
https://github.com/huggingface/datasets/pull/1655
775,643,418
MDExOlB1bGxSZXF1ZXN0NTQ2MjgyOTM4
1,655
assin dataset: add instances and data splits info
{ "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "events_url": "https://api.github.com/users/jonatasgrosman/events{/privacy}", "followers_url": "https://api.github.com/users/jonatasgrosman/followers", "following_url": "https://api.github.com/users/jonatasgrosman/following{/other_user}", "gists_url": "https://api.github.com/users/jonatasgrosman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonatasgrosman", "id": 5097052, "login": "jonatasgrosman", "node_id": "MDQ6VXNlcjUwOTcwNTI=", "organizations_url": "https://api.github.com/users/jonatasgrosman/orgs", "received_events_url": "https://api.github.com/users/jonatasgrosman/received_events", "repos_url": "https://api.github.com/users/jonatasgrosman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonatasgrosman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonatasgrosman/subscriptions", "type": "User", "url": "https://api.github.com/users/jonatasgrosman" }
[]
closed
false
null
[]
null
[]
"2020-12-29T00:47:56Z"
"2020-12-30T16:50:23Z"
"2020-12-30T16:50:23Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1655.diff", "html_url": "https://github.com/huggingface/datasets/pull/1655", "merged_at": "2020-12-30T16:50:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/1655.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1655" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1655/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1655/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5462
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5462/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5462/comments
https://api.github.com/repos/huggingface/datasets/issues/5462/events
https://github.com/huggingface/datasets/pull/5462
1,556,572,144
PR_kwDODunzps5Iglqu
5,462
Concatenate on axis=1 with misaligned blocks
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008860 / 0.011353 (-0.002493) | 0.004564 / 0.011008 (-0.006444) | 0.101556 / 0.038508 (0.063048) | 0.030000 / 0.023109 (0.006891) | 0.304404 / 0.275898 (0.028506) | 0.366247 / 0.323480 (0.042767) | 0.007182 / 0.007986 (-0.000804) | 0.003583 / 0.004328 (-0.000746) | 0.079665 / 0.004250 (0.075415) | 0.036529 / 0.037052 (-0.000523) | 0.310998 / 0.258489 (0.052509) | 0.346954 / 0.293841 (0.053113) | 0.034098 / 0.128546 (-0.094448) | 0.011576 / 0.075646 (-0.064070) | 0.320448 / 0.419271 (-0.098824) | 0.043328 / 0.043533 (-0.000205) | 0.307317 / 0.255139 (0.052178) | 0.325071 / 0.283200 (0.041871) | 0.096406 / 0.141683 (-0.045277) | 1.540331 / 1.452155 (0.088176) | 1.589533 / 1.492716 (0.096817) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011034 / 0.018006 (-0.006972) | 0.422066 / 0.000490 (0.421577) | 0.002409 / 0.000200 (0.002209) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023703 / 0.037411 (-0.013708) | 0.099935 / 0.014526 (0.085409) | 0.105966 / 0.176557 (-0.070591) | 0.142259 / 0.737135 (-0.594876) | 0.109327 / 0.296338 (-0.187011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418381 / 0.215209 (0.203172) | 4.177564 / 2.077655 (2.099909) | 1.880196 / 1.504120 (0.376076) | 1.669169 / 1.541195 (0.127974) | 1.725989 / 
1.468490 (0.257499) | 0.689384 / 4.584777 (-3.895393) | 3.380963 / 3.745712 (-0.364749) | 1.884192 / 5.269862 (-3.385670) | 1.162409 / 4.565676 (-3.403268) | 0.082045 / 0.424275 (-0.342230) | 0.012575 / 0.007607 (0.004968) | 0.525824 / 0.226044 (0.299779) | 5.272574 / 2.268929 (3.003646) | 2.283492 / 55.444624 (-53.161132) | 1.947390 / 6.876477 (-4.929087) | 2.013790 / 2.142072 (-0.128283) | 0.806280 / 4.805227 (-3.998948) | 0.149267 / 6.500664 (-6.351397) | 0.066967 / 0.075469 (-0.008502) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.216511 / 1.841788 (-0.625277) | 13.869829 / 8.074308 (5.795521) | 14.189967 / 10.191392 (3.998575) | 0.148716 / 0.680424 (-0.531708) | 0.028324 / 0.534201 (-0.505877) | 0.390856 / 0.579283 (-0.188427) | 0.404389 / 0.434364 (-0.029975) | 0.456050 / 0.540337 (-0.084287) | 0.544139 / 1.386936 (-0.842797) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006727 / 0.011353 (-0.004626) | 0.004515 / 0.011008 (-0.006494) | 0.098791 / 0.038508 (0.060283) | 0.027596 / 0.023109 (0.004487) | 0.439066 / 0.275898 (0.163168) | 0.480555 / 0.323480 (0.157076) | 0.005066 / 0.007986 (-0.002920) | 0.004669 / 0.004328 (0.000341) | 0.075334 / 0.004250 (0.071084) | 0.039779 / 0.037052 (0.002726) | 0.439860 / 0.258489 (0.181371) | 0.480787 / 0.293841 (0.186946) | 0.031550 / 0.128546 (-0.096996) | 0.011668 / 0.075646 (-0.063978) | 0.317348 / 0.419271 (-0.101923) | 0.041312 / 0.043533 (-0.002220) | 0.442934 / 0.255139 (0.187795) | 0.463677 / 0.283200 (0.180478) | 0.090066 / 0.141683 (-0.051617) | 1.544152 / 1.452155 (0.091998) | 1.584455 / 1.492716 (0.091738) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224284 / 0.018006 (0.206278) | 0.406982 / 0.000490 (0.406492) | 0.000427 / 0.000200 (0.000227) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024914 / 0.037411 (-0.012497) | 0.102608 / 0.014526 (0.088082) | 0.106931 / 0.176557 (-0.069626) | 0.140828 / 0.737135 (-0.596308) | 0.112015 / 0.296338 (-0.184324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471078 / 0.215209 (0.255869) | 4.705742 / 2.077655 (2.628088) | 2.437442 / 1.504120 (0.933322) | 2.242768 / 1.541195 (0.701573) | 2.302158 / 1.468490 (0.833668) | 0.697314 / 4.584777 (-3.887462) | 3.357730 / 3.745712 (-0.387982) | 1.913306 / 5.269862 (-3.356556) | 1.173879 / 4.565676 (-3.391798) | 0.083257 / 0.424275 (-0.341018) | 0.012480 / 0.007607 (0.004873) | 0.573407 / 0.226044 (0.347362) | 5.728650 / 2.268929 (3.459721) | 2.868863 / 55.444624 (-52.575761) | 2.548640 / 6.876477 (-4.327837) | 2.596622 / 2.142072 (0.454549) | 0.805563 / 4.805227 (-3.999664) | 0.150860 / 6.500664 (-6.349804) | 0.068344 / 0.075469 (-0.007125) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300368 / 1.841788 (-0.541420) | 13.920451 / 8.074308 (5.846143) | 14.222430 / 10.191392 (4.031038) | 0.152497 / 0.680424 (-0.527927) | 0.017415 / 0.534201 (-0.516786) | 0.378827 / 0.579283 (-0.200456) | 0.384165 / 0.434364 (-0.050199) | 0.439364 / 0.540337 (-0.100973) | 0.525710 / 1.386936 (-0.861226) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2cd22277fa87e02ad9970483f5b75aacdfbf9a70 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008482 / 0.011353 (-0.002871) | 0.004405 / 0.011008 (-0.006604) | 0.099662 / 0.038508 (0.061154) | 0.029062 / 0.023109 (0.005953) | 0.298329 / 0.275898 (0.022431) | 0.332837 / 0.323480 (0.009357) | 0.006760 / 0.007986 (-0.001225) | 0.003290 / 0.004328 (-0.001039) | 0.077659 / 0.004250 (0.073409) | 0.034745 / 0.037052 (-0.002307) | 0.303134 / 0.258489 (0.044644) | 0.346402 / 0.293841 (0.052561) | 0.033511 / 0.128546 (-0.095035) | 0.011464 / 0.075646 (-0.064183) | 0.322932 / 0.419271 (-0.096340) | 0.040697 / 0.043533 (-0.002836) | 0.301951 / 0.255139 (0.046812) | 0.328961 / 0.283200 (0.045761) | 0.084802 / 0.141683 (-0.056881) | 1.506247 / 1.452155 (0.054092) | 1.547631 / 1.492716 (0.054915) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190370 / 0.018006 (0.172363) | 0.405786 / 0.000490 (0.405297) | 0.002196 / 0.000200 (0.001997) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022958 / 0.037411 (-0.014453) | 0.095736 / 0.014526 (0.081210) | 0.103684 / 0.176557 (-0.072872) | 0.138200 / 0.737135 (-0.598936) | 0.105618 / 0.296338 (-0.190721) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415239 / 0.215209 (0.200030) | 4.147223 / 2.077655 (2.069569) | 1.850322 / 1.504120 (0.346202) | 1.662815 / 1.541195 (0.121620) | 1.671563 / 1.468490 (0.203073) | 0.693806 / 4.584777 (-3.890971) | 3.352938 / 3.745712 (-0.392774) | 1.849257 / 5.269862 (-3.420604) | 1.161603 / 4.565676 (-3.404074) | 0.081884 / 0.424275 (-0.342391) | 0.012726 / 0.007607 (0.005119) | 0.521105 / 0.226044 (0.295061) | 5.231910 / 2.268929 (2.962981) | 2.306073 / 55.444624 (-53.138551) | 1.950449 / 6.876477 (-4.926028) | 1.988433 / 2.142072 (-0.153640) | 0.811168 / 4.805227 (-3.994059) | 0.149960 / 6.500664 (-6.350704) | 0.064845 / 0.075469 (-0.010624) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.221487 / 1.841788 (-0.620301) | 13.756534 / 8.074308 (5.682226) | 13.825369 / 10.191392 (3.633977) | 0.155641 / 0.680424 (-0.524783) | 0.028444 / 0.534201 (-0.505757) | 0.390364 / 0.579283 (-0.188919) | 0.397592 / 0.434364 (-0.036772) | 0.455905 / 0.540337 (-0.084433) | 0.534606 / 
1.386936 (-0.852330) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006281 / 0.011353 (-0.005071) | 0.004533 / 0.011008 (-0.006475) | 0.098328 / 0.038508 (0.059820) | 0.026998 / 0.023109 (0.003889) | 0.424814 / 0.275898 (0.148915) | 0.457653 / 0.323480 (0.134173) | 0.004617 / 0.007986 (-0.003368) | 0.003320 / 0.004328 (-0.001009) | 0.075884 / 0.004250 (0.071634) | 0.035865 / 0.037052 (-0.001187) | 0.431674 / 0.258489 (0.173185) | 0.468286 / 0.293841 (0.174445) | 0.031915 / 0.128546 (-0.096631) | 0.011680 / 0.075646 (-0.063967) | 0.319575 / 0.419271 (-0.099696) | 0.047792 / 0.043533 (0.004259) | 0.428191 / 0.255139 (0.173052) | 0.445657 / 0.283200 (0.162458) | 0.090464 / 0.141683 (-0.051218) | 1.465480 / 1.452155 (0.013326) | 1.548985 / 1.492716 (0.056268) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185671 / 0.018006 (0.167664) | 0.399274 / 0.000490 (0.398784) | 0.002822 / 0.000200 (0.002622) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025934 / 0.037411 (-0.011477) | 0.099480 / 0.014526 (0.084954) | 0.110264 / 0.176557 (-0.066293) | 0.140558 / 0.737135 (-0.596577) | 0.110832 / 0.296338 (-0.185507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.473491 / 0.215209 (0.258282) | 4.722507 / 2.077655 (2.644852) | 2.456242 / 1.504120 (0.952122) | 2.255999 / 1.541195 (0.714804) | 2.300816 / 1.468490 (0.832326) | 
0.698226 / 4.584777 (-3.886551) | 3.397296 / 3.745712 (-0.348416) | 2.741674 / 5.269862 (-2.528187) | 1.462103 / 4.565676 (-3.103573) | 0.082736 / 0.424275 (-0.341539) | 0.012183 / 0.007607 (0.004576) | 0.580144 / 0.226044 (0.354099) | 5.794351 / 2.268929 (3.525422) | 2.881201 / 55.444624 (-52.563423) | 2.544384 / 6.876477 (-4.332093) | 2.555227 / 2.142072 (0.413154) | 0.805849 / 4.805227 (-3.999378) | 0.151822 / 6.500664 (-6.348842) | 0.067477 / 0.075469 (-0.007992) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.300224 / 1.841788 (-0.541564) | 13.595361 / 8.074308 (5.521053) | 13.967622 / 10.191392 (3.776230) | 0.129222 / 0.680424 (-0.551202) | 0.016939 / 0.534201 (-0.517262) | 0.375190 / 0.579283 (-0.204094) | 0.383511 / 0.434364 (-0.050853) | 0.437179 / 0.540337 (-0.103158) | 0.525674 / 1.386936 (-0.861262) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7ed52db3d67cc8d0f2adfe53b2ec8d1124a174b8 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012364 / 0.011353 (0.001011) | 0.006098 / 0.011008 (-0.004911) | 0.158908 / 0.038508 (0.120400) | 0.039798 / 0.023109 (0.016689) | 0.383786 / 0.275898 (0.107888) | 0.533961 / 0.323480 (0.210481) | 0.012079 / 0.007986 (0.004094) | 0.006483 / 0.004328 (0.002155) | 0.109660 / 0.004250 (0.105410) | 0.048391 / 0.037052 (0.011339) | 0.447426 / 0.258489 (0.188937) | 0.477292 / 0.293841 (0.183451) | 0.066492 / 0.128546 (-0.062054) | 0.021155 / 0.075646 (-0.054492) | 0.474473 / 0.419271 (0.055202) | 0.063520 / 0.043533 (0.019987) | 0.444941 / 0.255139 (0.189802) | 0.450675 / 0.283200 (0.167475) | 0.129236 / 0.141683 (-0.012447) | 2.009362 / 1.452155 (0.557207) | 1.912067 / 1.492716 (0.419350) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.260384 / 0.018006 (0.242378) | 0.577654 / 0.000490 (0.577165) | 0.004977 / 0.000200 (0.004777) | 0.000110 / 
0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028101 / 0.037411 (-0.009310) | 0.161680 / 0.014526 (0.147154) | 0.146107 / 0.176557 (-0.030450) | 0.173878 / 0.737135 (-0.563257) | 0.186149 / 0.296338 (-0.110190) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.689835 / 0.215209 (0.474626) | 6.775888 / 2.077655 (4.698234) | 2.885499 / 1.504120 (1.381379) | 2.486855 / 1.541195 (0.945660) | 2.540831 / 1.468490 (1.072341) | 1.328135 / 4.584777 (-3.256642) | 5.964983 / 3.745712 (2.219271) | 3.400713 / 5.269862 (-1.869149) | 2.423257 / 4.565676 (-2.142419) | 0.129767 / 0.424275 (-0.294508) | 0.017936 / 0.007607 (0.010328) | 0.909284 / 0.226044 (0.683239) | 8.778791 / 2.268929 (6.509863) | 3.890757 / 55.444624 (-51.553867) | 3.072116 / 6.876477 (-3.804360) | 3.085390 / 2.142072 (0.943318) | 1.571710 / 4.805227 (-3.233517) | 0.279290 / 6.500664 (-6.221374) | 0.087775 / 0.075469 (0.012306) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.751223 / 1.841788 (-0.090564) | 20.313135 / 8.074308 (12.238827) | 22.793800 / 10.191392 (12.602408) | 0.296052 / 0.680424 (-0.384372) | 0.053420 / 0.534201 (-0.480781) | 0.600626 / 0.579283 (0.021343) | 0.634505 / 0.434364 (0.200142) | 0.724000 / 0.540337 (0.183663) | 0.869283 / 1.386936 (-0.517653) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.014876 / 0.011353 (0.003523) | 0.008113 / 0.011008 (-0.002895) | 0.177038 / 0.038508 (0.138530) | 0.050825 / 0.023109 (0.027716) | 0.473989 / 0.275898 (0.198091) | 0.601058 / 0.323480 (0.277578) | 0.007536 / 0.007986 (-0.000450) | 0.006761 / 0.004328 (0.002432) | 0.105260 / 0.004250 (0.101010) | 0.073960 / 0.037052 (0.036908) | 0.447711 / 0.258489 (0.189222) | 0.609998 / 0.293841 (0.316157) | 0.061280 / 0.128546 (-0.067267) | 0.019370 / 0.075646 (-0.056276) | 0.510466 / 0.419271 (0.091194) | 0.062695 / 0.043533 (0.019162) | 0.436778 / 0.255139 (0.181639) | 0.489916 / 0.283200 (0.206717) | 0.137305 / 0.141683 (-0.004378) | 1.801554 / 1.452155 (0.349399) | 2.082409 / 1.492716 (0.589692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291304 / 0.018006 (0.273298) | 0.599041 / 0.000490 (0.598551) | 0.008017 / 0.000200 (0.007817) | 0.000127 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031243 / 0.037411 (-0.006169) | 0.139689 / 0.014526 (0.125163) | 0.138678 / 0.176557 (-0.037878) | 0.180458 / 0.737135 (-0.556677) | 0.149753 / 0.296338 (-0.146585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.699692 / 0.215209 (0.484482) | 7.273327 / 2.077655 (5.195672) | 3.222650 / 1.504120 (1.718530) | 2.679424 / 1.541195 (1.138229) | 2.842378 / 1.468490 (1.373888) | 1.394633 / 4.584777 (-3.190143) | 6.379970 / 3.745712 (2.634258) | 5.944663 / 5.269862 (0.674801) | 3.105214 / 4.565676 (-1.460462) | 0.138790 / 0.424275 (-0.285485) | 0.014211 / 0.007607 (0.006604) | 0.815275 / 0.226044 (0.589230) | 8.549334 / 2.268929 (6.280405) | 3.754795 / 55.444624 (-51.689829) | 3.125222 / 6.876477 (-3.751255) | 3.269639 / 2.142072 (1.127566) | 1.464187 / 4.805227 (-3.341040) | 0.314557 / 6.500664 (-6.186107) | 0.107354 / 0.075469 (0.031885) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.480793 / 1.841788 (-0.360995) | 16.770328 / 8.074308 (8.696019) | 18.054861 / 10.191392 (7.863469) | 0.198257 / 0.680424 (-0.482167) | 0.026493 / 0.534201 (-0.507708) | 0.489701 / 0.579283 (-0.089582) | 0.540890 / 0.434364 (0.106526) | 0.566675 / 0.540337 (0.026337) | 0.661918 / 1.386936 (-0.725018) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c4b839b50e9a81693e065f5299990026b97f6580 \"CML watermark\")\n" ]
"2023-01-25T12:33:22Z"
"2023-01-26T09:37:00Z"
"2023-01-26T09:27:19Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5462.diff", "html_url": "https://github.com/huggingface/datasets/pull/5462", "merged_at": "2023-01-26T09:27:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/5462.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5462" }
Allow concatenating on axis 1 two tables made of misaligned blocks. For example, the first table may have 2 row blocks of 3 rows each, while the second table has 3 row blocks of 2 rows each. To do that, I slice the row blocks to re-align the blocks. Fix https://github.com/huggingface/datasets/issues/5413
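For illustration, a simplified sketch of the re-alignment idea described above (assuming row blocks are plain pyarrow Tables; this is not the actual `datasets` implementation):

```python
# Simplified sketch: slice two lists of row blocks so that both sides end up
# with the same row boundaries, after which axis=1 concatenation is
# block-by-block. Blocks are modeled as pyarrow Tables (an assumption).
import pyarrow as pa

def align_blocks(blocks_a, blocks_b):
    """Slice two lists of row blocks so both sides share row boundaries."""
    out_a, out_b = [], []
    i = j = off_a = off_b = 0
    while i < len(blocks_a) and j < len(blocks_b):
        # take the largest slice that fits in the current block on both sides
        n = min(blocks_a[i].num_rows - off_a, blocks_b[j].num_rows - off_b)
        out_a.append(blocks_a[i].slice(off_a, n))
        out_b.append(blocks_b[j].slice(off_b, n))
        off_a += n
        off_b += n
        if off_a == blocks_a[i].num_rows:
            i, off_a = i + 1, 0
        if off_b == blocks_b[j].num_rows:
            j, off_b = j + 1, 0
    return out_a, out_b

a = [pa.table({"x": [0, 1, 2]}), pa.table({"x": [3, 4, 5]})]                      # 2 blocks of 3 rows
b = [pa.table({"y": [0, 1]}), pa.table({"y": [2, 3]}), pa.table({"y": [4, 5]})]   # 3 blocks of 2 rows
aligned_a, aligned_b = align_blocks(a, b)
print([t.num_rows for t in aligned_a])  # [2, 1, 1, 2] -- same boundaries on both sides
```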
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5462/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5462/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5985/comments
https://api.github.com/repos/huggingface/datasets/issues/5985/events
https://github.com/huggingface/datasets/issues/5985
1,771,588,158
I_kwDODunzps5pmEo-
5,985
Cannot reuse tokenizer object for dataset map
{ "avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4", "events_url": "https://api.github.com/users/vikigenius/events{/privacy}", "followers_url": "https://api.github.com/users/vikigenius/followers", "following_url": "https://api.github.com/users/vikigenius/following{/other_user}", "gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vikigenius", "id": 12724810, "login": "vikigenius", "node_id": "MDQ6VXNlcjEyNzI0ODEw", "organizations_url": "https://api.github.com/users/vikigenius/orgs", "received_events_url": "https://api.github.com/users/vikigenius/received_events", "repos_url": "https://api.github.com/users/vikigenius/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions", "type": "User", "url": "https://api.github.com/users/vikigenius" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
null
[]
null
[ "This is a known issue: https://github.com/huggingface/datasets/issues/3847.\r\n\r\nFixing this requires significant work - rewriting the `tokenizers` lib to make them immutable.\r\n\r\nThe current solution is to pass `cache_file_name` to `map` to use that file for caching or calling a tokenizer before `map` (with the same set of parameters as the ones in the map transform)", "Closing since this is a duplicate" ]
"2023-06-23T14:45:31Z"
"2023-07-21T14:09:14Z"
"2023-07-21T14:09:14Z"
NONE
null
null
null
### Describe the bug

Related to https://github.com/huggingface/transformers/issues/24441. Not sure if this is a tokenizer issue or a caching issue, so filing in both.

Passing the tokenizer to the dataset map function causes the tokenizer to be fingerprinted weirdly. After calling the tokenizer with arguments like padding and truncation, the tokenizer object changes internally, even though the hash remains the same. But `dumps` is able to detect that internal change, which causes the tokenizer object's fingerprint to change.

### Steps to reproduce the bug

```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps  # Huggingface datasets

t = AutoTokenizer.from_pretrained('bert-base-uncased')
t.save_pretrained("tok1")
th1 = hash(dumps(t))
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
t.save_pretrained("tok2")
th2 = hash(dumps(t))
assert th1 == th2  # Assertion Error
```

But if you use just the hash of the object without dumps, the hashes don't change:

```python
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps  # Huggingface datasets

t = AutoTokenizer.from_pretrained('bert-base-uncased')
th1 = hash(t)  # Just hash, no dumps
text = "This is an example text"
ttext = t(text, max_length=512, padding="max_length", truncation=True)
th2 = hash(t)  # Just hash, no dumps
assert th1 == th2  # This is OK
```

This causes situations such as the following:

1. Create a text file like this: `yes "This is an example text" | head -n 10000 > lines.txt`

```python
from transformers import AutoTokenizer
import datasets


class TokenizeMapper(object):
    """Mapper for tokenizer.

    This is needed because the caching mechanism of HuggingFace does not work on
    lambdas. Each time a new lambda will be created by a new process which will
    lead to a different hash. This way we can have a universal mapper object in
    init and reuse it with the same hash for each process.
    """

    def __init__(self, tokenizer):
        """Initialize the tokenizer."""
        self.tokenizer = tokenizer

    def __call__(self, examples, **kwargs):
        """Run the mapper."""
        texts = examples["text"]
        tt = self.tokenizer(texts, max_length=256, padding="max_length", truncation=True)
        batch_outputs = {
            "input_ids": tt.input_ids,
            "attention_mask": tt.attention_mask,
        }
        return batch_outputs


t = AutoTokenizer.from_pretrained('bert-base-uncased')
mapper = TokenizeMapper(t)
ds = datasets.load_dataset("text", data_files="lines.txt")
mds1 = ds.map(
    mapper,
    batched=False,
    remove_columns=["text"],
).with_format("torch")
mds2 = ds.map(
    mapper,
    batched=False,
    remove_columns=["text"],
).with_format("torch")
```

The second call to map should reuse the cached processed dataset from mds1, but instead it redoes the tokenization because of the behavior of dumps.

### Expected behavior

We should be able to initialize a tokenizer, and reusing it should let us reuse the same map computation for the same dataset. The second call to map should reuse the cached processed dataset from mds1, but instead it redoes the tokenization because of the behavior of dumps.

### Environment info

- `datasets` version: 2.13.0
- Platform: Linux-6.1.31_1-x86_64-with-glibc2.36
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
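A sketch of the workaround suggested in the comments above: pass an explicit `cache_file_name` to `Dataset.map` so the second call loads the cached result from disk, independently of the tokenizer's fingerprint. The file path is an assumption, `TokenizeMapper` is the class from the issue, and the sketch uses the train split directly because `DatasetDict.map` takes `cache_file_names` (a dict) rather than `cache_file_name`:

```python
# Workaround sketch: an explicit cache file makes the cached result reusable
# regardless of how the tokenizer object is fingerprinted.
import datasets
from transformers import AutoTokenizer

t = AutoTokenizer.from_pretrained("bert-base-uncased")
mapper = TokenizeMapper(t)  # the mapper class defined in the issue above
ds = datasets.load_dataset("text", data_files="lines.txt")

mds1 = ds["train"].map(
    mapper,
    batched=False,
    remove_columns=["text"],
    cache_file_name="tokenized.arrow",  # hypothetical path
).with_format("torch")

# The second call points at the same cache file, so the tokenized dataset is
# loaded from disk instead of being recomputed.
mds2 = ds["train"].map(
    mapper,
    batched=False,
    remove_columns=["text"],
    cache_file_name="tokenized.arrow",
).with_format("torch")
```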
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5985/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5985/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2341
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2341/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2341/comments
https://api.github.com/repos/huggingface/datasets/issues/2341/events
https://github.com/huggingface/datasets/pull/2341
882,370,933
MDExOlB1bGxSZXF1ZXN0NjM1OTExODI2
2,341
Added the Ascent KB
{ "avatar_url": "https://avatars.githubusercontent.com/u/6749421?v=4", "events_url": "https://api.github.com/users/phongnt570/events{/privacy}", "followers_url": "https://api.github.com/users/phongnt570/followers", "following_url": "https://api.github.com/users/phongnt570/following{/other_user}", "gists_url": "https://api.github.com/users/phongnt570/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/phongnt570", "id": 6749421, "login": "phongnt570", "node_id": "MDQ6VXNlcjY3NDk0MjE=", "organizations_url": "https://api.github.com/users/phongnt570/orgs", "received_events_url": "https://api.github.com/users/phongnt570/received_events", "repos_url": "https://api.github.com/users/phongnt570/repos", "site_admin": false, "starred_url": "https://api.github.com/users/phongnt570/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phongnt570/subscriptions", "type": "User", "url": "https://api.github.com/users/phongnt570" }
[]
closed
false
null
[]
null
[ "Thanks for approving it!" ]
"2021-05-09T14:17:39Z"
"2021-05-11T09:16:59Z"
"2021-05-11T09:16:59Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2341.diff", "html_url": "https://github.com/huggingface/datasets/pull/2341", "merged_at": "2021-05-11T09:16:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/2341.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2341" }
Added the Ascent Commonsense KB of 8.9M assertions. - Paper: [Advanced Semantics for Commonsense Knowledge Extraction (WWW'21)](https://arxiv.org/abs/2011.00905) - Website: https://ascent.mpi-inf.mpg.de/ (I am the author of the dataset)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2341/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2341/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/600/comments
https://api.github.com/repos/huggingface/datasets/issues/600/events
https://github.com/huggingface/datasets/issues/600
697,496,913
MDU6SXNzdWU2OTc0OTY5MTM=
600
Pickling error when loading dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17310286?v=4", "events_url": "https://api.github.com/users/kandorm/events{/privacy}", "followers_url": "https://api.github.com/users/kandorm/followers", "following_url": "https://api.github.com/users/kandorm/following{/other_user}", "gists_url": "https://api.github.com/users/kandorm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kandorm", "id": 17310286, "login": "kandorm", "node_id": "MDQ6VXNlcjE3MzEwMjg2", "organizations_url": "https://api.github.com/users/kandorm/orgs", "received_events_url": "https://api.github.com/users/kandorm/received_events", "repos_url": "https://api.github.com/users/kandorm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kandorm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kandorm/subscriptions", "type": "User", "url": "https://api.github.com/users/kandorm" }
[]
closed
false
null
[]
null
[ "When I change from python3.6 to python3.8, it works! ", "Does it work when you install `nlp` from source on python 3.6?", "No, still the pickling error.", "I wasn't able to reproduce on google colab (python 3.6.9 as well) with \r\n\r\npickle==4.0\r\ndill=0.3.2\r\ntransformers==3.1.0\r\ndatasets=1.0.1 (also tried nlp 0.4.0)\r\n\r\nIf I try\r\n\r\n```python\r\nfrom datasets import load_dataset # or from nlp\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=512), batched=True)\r\ndataset.set_format(type='torch', columns=['input_ids'])\r\n```\r\nIt runs without error", "Closing since it looks like it's working on >= 3.6.9\r\nFeel free to re-open if you have other questions :)" ]
"2020-09-10T06:28:08Z"
"2020-09-25T14:31:54Z"
"2020-09-25T14:31:54Z"
NONE
null
null
null
Hi, I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as:

```python
# line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
                                           truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
```

When I run this with transformers (3.1.0) and nlp (0.4.0), I get the following error:

```
Traceback (most recent call last):
  File "src/run_language_modeling.py", line 319, in <module>
    main()
  File "src/run_language_modeling.py", line 248, in main
    get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
  File "src/run_language_modeling.py", line 139, in get_dataset
    dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), batched=True)
  File "/data/nlp/src/nlp/arrow_dataset.py", line 1136, in map
    new_fingerprint=new_fingerprint,
  File "/data/nlp/src/nlp/fingerprint.py", line 158, in wrapper
    self._fingerprint, transform, kwargs_for_fingerprint
  File "/data/nlp/src/nlp/fingerprint.py", line 105, in update_fingerprint
    hasher.update(transform_args[key])
  File "/data/nlp/src/nlp/fingerprint.py", line 57, in update
    self.m.update(self.hash(value).encode("utf-8"))
  File "/data/nlp/src/nlp/fingerprint.py", line 53, in hash
    return cls.hash_default(value)
  File "/data/nlp/src/nlp/fingerprint.py", line 46, in hash_default
    return cls.hash_bytes(dumps(value))
  File "/data/nlp/src/nlp/utils/py_utils.py", line 362, in dumps
    dump(obj, file)
  File "/data/nlp/src/nlp/utils/py_utils.py", line 339, in dump
    Pickler(file, recurse=True).dump(obj)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 446, in dump
    StockPickler.dump(self, obj)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 409, in dump
    self.save(obj)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1438, in save_function
    obj.__dict__, fkwdefaults), obj=obj)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce
    save(args)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 751, in save_tuple
    save(element)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 736, in save_tuple
    save(element)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1170, in save_cell
    pickler.save_reduce(_create_cell, (f,), obj=obj)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce
    save(args)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 736, in save_tuple
    save(element)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 521, in save
    self.save_reduce(obj=obj, *rv)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 605, in save_reduce
    save(cls)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1365, in save_type
    obj.__bases__, _dict), obj=obj)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce
    save(args)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 751, in save_tuple
    save(element)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict
    StockPickler.save_dict(pickler, obj)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 821, in save_dict
    self._batch_setitems(obj.items())
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 847, in _batch_setitems
    save(v)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
    f(self, obj) # Call unbound method with explicit self
  File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict
    StockPickler.save_dict(pickler, obj)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 821, in save_dict
    self._batch_setitems(obj.items())
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 847, in _batch_setitems
    save(v)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 507, in save
    self.save_global(obj, rv)
  File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 927, in save_global
    (obj, module_name, name))
_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union
```
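A minimal repro sketch of the underlying failure mode, assuming `dill` is installed. The annotated function below is a simplified stand-in for what the fingerprinting hits when it recursively pickles the lambda's globals; per the comments above, the error is specific to older Python 3.6.x builds and goes away on >= 3.6.9.

```python
# Sketch (assumes dill is installed): pickle a function whose annotations
# reference typing.Union, which is what nlp's fingerprinting ultimately
# serializes when it walks the lambda's module dict.
from typing import Optional

import dill


def annotated(x: Optional[str] = None) -> Optional[str]:
    return x


# On Python >= 3.6.9 this round-trips fine; on the affected 3.6.x builds it
# raises "_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]".
restored = dill.loads(dill.dumps(annotated, recurse=True))
print(restored("ok"))
```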
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/600/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/600/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3425
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3425/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3425/comments
https://api.github.com/repos/huggingface/datasets/issues/3425/events
https://github.com/huggingface/datasets/issues/3425
1,078,598,140
I_kwDODunzps5AShn8
3,425
Getting configs names takes too long
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "maybe related to https://github.com/huggingface/datasets/issues/2859\r\n", "It looks like it's currently calling `HfFileSystem.ls()` ~8 times at the root and for each subdirectory:\r\n- \"\"\r\n- \"en.noblocklist\"\r\n- \"en.noclean\"\r\n- \"en\"\r\n- \"multilingual\"\r\n- \"realnewslike\"\r\n\r\nCurrently `ls` is slow because it iterates on all the files inside the repository.\r\n\r\nAn easy optimization would be to cache the result of each call to `ls`.\r\nWe can also optimize `ls` by using a tree structure per directory instead of a list of all the files.\r\n", "ok\r\n" ]
"2021-12-13T14:27:57Z"
"2021-12-13T14:53:33Z"
null
CONTRIBUTOR
null
null
null
## Steps to reproduce the bug

```python
from datasets import get_dataset_config_names

get_dataset_config_names("allenai/c4")
```

## Expected results

I would expect to get the answer quickly, in less than 10s.

## Actual results

It takes about 45s on my environment.

## Environment info

- `datasets` version: 1.16.1
- Platform: Linux-5.11.0-1022-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 4.0.1
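The fix direction sketched in the comments above (cache each `ls` call so the repeated per-subdirectory listings don't re-scan the whole repository) could look roughly like the following. This is an illustrative sketch, not the actual `HfFileSystem` internals; `fs` stands for any object exposing a slow `.ls(path)`.

```python
# Illustrative sketch of the memoization idea from the comments: pay the
# full repository scan once per distinct path, so the ~8 calls for "",
# "en", "en.noclean", "realnewslike", etc. hit the cache after the first.
class CachedLister:
    def __init__(self, fs):
        self._fs = fs          # wrapped filesystem with a slow .ls(path)
        self._cache = {}       # path -> cached listing

    def ls(self, path):
        if path not in self._cache:
            self._cache[path] = self._fs.ls(path)
        return self._cache[path]
```

A per-directory tree index, also mentioned in the comments, would go further by making each individual `ls` cheap rather than just deduplicating them.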
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3425/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3425/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2569
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2569/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2569/comments
https://api.github.com/repos/huggingface/datasets/issues/2569/events
https://github.com/huggingface/datasets/issues/2569
933,015,797
MDU6SXNzdWU5MzMwMTU3OTc=
2,569
Weights of model checkpoint not initialized for RobertaModel for Bertscore
{ "avatar_url": "https://avatars.githubusercontent.com/u/2980993?v=4", "events_url": "https://api.github.com/users/suzyahyah/events{/privacy}", "followers_url": "https://api.github.com/users/suzyahyah/followers", "following_url": "https://api.github.com/users/suzyahyah/following{/other_user}", "gists_url": "https://api.github.com/users/suzyahyah/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/suzyahyah", "id": 2980993, "login": "suzyahyah", "node_id": "MDQ6VXNlcjI5ODA5OTM=", "organizations_url": "https://api.github.com/users/suzyahyah/orgs", "received_events_url": "https://api.github.com/users/suzyahyah/received_events", "repos_url": "https://api.github.com/users/suzyahyah/repos", "site_admin": false, "starred_url": "https://api.github.com/users/suzyahyah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suzyahyah/subscriptions", "type": "User", "url": "https://api.github.com/users/suzyahyah" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @suzyahyah, thanks for reporting.\r\n\r\nThe message you get is indeed not an error message, but a warning coming from Hugging Face `transformers`. The complete warning message is:\r\n```\r\nSome weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.bias', 'lm_head.layer_norm.weight']\r\n- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n```\r\n\r\nIn this case, this behavior IS expected and you can safely ignore the warning message.\r\n\r\nThe reason is that you are just using RoBERTa to get the contextual embeddings of the input sentences/tokens, thus leaving away its head layer, whose weights are ignored.\r\n\r\nFeel free to reopen this issue if you need further explanations.", "Hi @suzyahyah, I have created a Pull Request to filter out that warning message in this specific case, since the behavior is as expected and the warning message can only cause confusion for users (as in your case)." ]
"2021-06-29T18:55:23Z"
"2021-07-01T07:08:59Z"
"2021-06-30T07:35:49Z"
NONE
null
null
null
When applying bertscore out of the box, I get:

```
Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']
```

Following the typical usage from https://huggingface.co/docs/datasets/loading_metrics.html:

```python
from datasets import load_metric

metric = load_metric('bertscore')

# Example of typical usage
for batch in dataset:
    inputs, references = batch
    predictions = model(inputs)
    metric.add_batch(predictions=predictions, references=references)
score = metric.compute(lang="en")
# score = metric.compute(model_type="roberta-large")  # gives the same error
```

I am concerned about this because my usage shouldn't require any further fine-tuning, and most people would expect to use BertScore out of the box. I realise the huggingface code is a wrapper around https://github.com/Tiiiger/bert_score, but that repo in turn relies on the model code and weights from the huggingface repo.

## Environment info

- `datasets` version: 1.7.0
- Platform: Linux-5.4.0-1041-aws-x86_64-with-glibc2.27
- Python version: 3.9.5
- PyArrow version: 3.0.0
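Per the maintainer's reply in the comments above, this message is an expected warning (RoBERTa's LM head is simply unused when extracting contextual embeddings) and can be safely ignored. A small sketch of silencing it by raising transformers' logging threshold, assuming nothing else in the pipeline needs transformers warnings:

```python
# Sketch: hide the expected "Some weights ... were not used" warning by
# raising transformers' log level before bertscore loads the model.
from datasets import load_metric
from transformers import logging as hf_logging

hf_logging.set_verbosity_error()  # suppress warnings, keep real errors

metric = load_metric("bertscore")
```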
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2569/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2569/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1255/comments
https://api.github.com/repos/huggingface/datasets/issues/1255/events
https://github.com/huggingface/datasets/pull/1255
758,530,243
MDExOlB1bGxSZXF1ZXN0NTMzNjg4Njg2
1,255
[doc] nlp/viewer ➡️datasets/viewer
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c" }
[]
closed
false
null
[]
null
[]
"2020-12-07T13:58:41Z"
"2020-12-08T17:17:54Z"
"2020-12-08T17:17:53Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1255.diff", "html_url": "https://github.com/huggingface/datasets/pull/1255", "merged_at": "2020-12-08T17:17:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/1255.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1255" }
cc @srush
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1255/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1255/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6414
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6414/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6414/comments
https://api.github.com/repos/huggingface/datasets/issues/6414/events
https://github.com/huggingface/datasets/pull/6414
1,992,482,491
PR_kwDODunzps5fZZ2l
6,414
Set `usedforsecurity=False` in hashlib methods (FIPS compliance)
{ "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Wauplin", "id": 11801849, "login": "Wauplin", "node_id": "MDQ6VXNlcjExODAxODQ5", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "repos_url": "https://api.github.com/users/Wauplin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "type": "User", "url": "https://api.github.com/users/Wauplin" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008434 / 0.011353 (-0.002919) | 0.006755 / 0.011008 (-0.004253) | 0.106169 / 0.038508 (0.067661) | 0.049329 / 0.023109 (0.026220) | 0.433610 / 0.275898 (0.157712) | 0.441993 / 0.323480 (0.118513) | 0.004703 / 0.007986 (-0.003282) | 0.006996 / 0.004328 (0.002667) | 0.080330 / 0.004250 (0.076080) | 0.066098 / 0.037052 (0.029045) | 0.435444 / 0.258489 (0.176955) | 0.490442 / 0.293841 (0.196601) | 0.047050 / 0.128546 (-0.081496) | 0.014520 / 0.075646 (-0.061127) | 0.339805 / 0.419271 (-0.079467) | 0.101161 / 0.043533 (0.057629) | 0.423236 / 0.255139 (0.168097) | 0.455627 / 0.283200 (0.172427) | 0.036218 / 0.141683 (-0.105465) | 1.766128 / 1.452155 (0.313973) | 1.923919 / 1.492716 (0.431203) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242939 / 0.018006 (0.224933) | 0.515582 / 0.000490 (0.515093) | 0.020271 / 0.000200 (0.020071) | 0.000383 / 0.000054 (0.000328) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030927 / 0.037411 (-0.006484) | 0.093951 / 0.014526 (0.079425) | 0.109028 / 0.176557 (-0.067529) | 0.174947 / 0.737135 (-0.562188) | 0.120538 / 0.296338 (-0.175800) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.553884 / 0.215209 (0.338675) | 5.424566 / 2.077655 (3.346911) | 2.439420 / 1.504120 (0.935301) | 2.019324 / 1.541195 (0.478129) | 2.170781 / 1.468490 
(0.702290) | 0.924424 / 4.584777 (-3.660353) | 5.706029 / 3.745712 (1.960317) | 5.096911 / 5.269862 (-0.172951) | 3.168261 / 4.565676 (-1.397416) | 0.094336 / 0.424275 (-0.329940) | 0.015899 / 0.007607 (0.008292) | 0.709684 / 0.226044 (0.483639) | 7.476865 / 2.268929 (5.207936) | 3.350983 / 55.444624 (-52.093641) | 2.653419 / 6.876477 (-4.223058) | 2.802201 / 2.142072 (0.660129) | 1.081442 / 4.805227 (-3.723785) | 0.217025 / 6.500664 (-6.283639) | 0.077248 / 0.075469 (0.001779) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.598621 / 1.841788 (-0.243167) | 23.490338 / 8.074308 (15.416030) | 21.853488 / 10.191392 (11.662096) | 0.209625 / 0.680424 (-0.470799) | 0.028166 / 0.534201 (-0.506035) | 0.473883 / 0.579283 (-0.105400) | 0.584226 / 0.434364 (0.149862) | 0.538605 / 0.540337 (-0.001732) | 0.837060 / 1.386936 (-0.549876) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009029 / 0.011353 (-0.002324) | 0.004945 / 0.011008 (-0.006063) | 0.084539 / 0.038508 (0.046031) | 0.081014 / 0.023109 (0.057905) | 0.431291 / 0.275898 (0.155393) | 0.478913 / 0.323480 (0.155433) | 0.006107 / 0.007986 (-0.001879) | 0.003939 / 0.004328 (-0.000390) | 0.079932 / 0.004250 (0.075682) | 0.057936 / 0.037052 (0.020884) | 0.437295 / 0.258489 (0.178806) | 0.489790 / 0.293841 (0.195949) | 0.049544 / 0.128546 (-0.079003) | 0.013675 / 0.075646 (-0.061972) | 0.093143 / 0.419271 (-0.326128) | 0.064104 / 0.043533 (0.020571) | 0.444699 / 0.255139 (0.189560) | 0.443688 / 0.283200 (0.160489) | 0.034331 / 0.141683 (-0.107352) | 1.753014 / 1.452155 (0.300859) | 1.877274 / 1.492716 (0.384558) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250460 / 0.018006 (0.232454) | 0.527241 / 0.000490 (0.526752) | 0.007679 / 0.000200 (0.007479) | 0.000115 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033269 / 0.037411 (-0.004142) | 0.111262 / 0.014526 (0.096736) | 0.133503 / 0.176557 (-0.043053) | 0.177998 / 0.737135 (-0.559137) | 0.117899 / 0.296338 (-0.178440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.633588 / 0.215209 (0.418379) | 6.105283 / 2.077655 (4.027628) | 2.779309 / 1.504120 (1.275189) | 2.445788 / 1.541195 (0.904594) | 2.396443 / 1.468490 (0.927953) | 0.925928 / 4.584777 (-3.658849) | 5.266142 / 3.745712 (1.520430) | 4.868830 / 5.269862 (-0.401031) | 2.998768 / 4.565676 (-1.566909) | 0.103135 / 0.424275 (-0.321140) | 0.008059 / 0.007607 (0.000452) | 0.753159 / 0.226044 (0.527115) | 7.532170 / 2.268929 (5.263242) | 3.563941 / 55.444624 (-51.880683) | 2.829208 / 6.876477 (-4.047269) | 2.913954 / 2.142072 (0.771881) | 1.085843 / 4.805227 (-3.719384) | 0.214195 / 6.500664 (-6.286469) | 0.071509 / 0.075469 (-0.003960) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.544819 / 1.841788 (-0.296968) | 23.790149 / 8.074308 (15.715841) | 23.086019 / 10.191392 (12.894627) | 0.242695 / 0.680424 (-0.437729) | 0.041706 / 0.534201 (-0.492495) | 0.552402 / 0.579283 (-0.026881) | 0.652518 / 0.434364 (0.218154) | 0.581876 / 0.540337 (0.041539) | 0.795425 / 1.386936 (-0.591511) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#117fdfccc8523fe150521ad74e478459fe2f297c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004573 / 0.011353 (-0.006780) | 0.002965 / 0.011008 (-0.008043) | 0.061913 / 0.038508 (0.023405) | 0.029474 / 0.023109 (0.006365) | 0.258117 / 0.275898 (-0.017781) | 0.279854 / 0.323480 (-0.043626) | 0.003954 / 0.007986 (-0.004031) | 0.002479 / 0.004328 (-0.001850) | 0.048685 / 0.004250 (0.044434) | 0.044733 / 0.037052 (0.007681) | 0.256659 / 0.258489 (-0.001830) | 0.285235 / 0.293841 (-0.008606) | 0.023566 / 0.128546 (-0.104981) | 0.007291 / 0.075646 (-0.068355) | 0.202701 / 0.419271 (-0.216570) | 0.055706 / 0.043533 (0.012173) | 0.258790 / 0.255139 (0.003651) | 0.278675 / 0.283200 (-0.004525) | 0.018574 / 0.141683 (-0.123109) | 1.109359 / 1.452155 (-0.342796) | 1.184434 / 1.492716 (-0.308282) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095048 / 0.018006 (0.077042) | 0.305027 / 0.000490 (0.304537) | 0.000310 / 0.000200 (0.000110) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018183 / 0.037411 (-0.019228) | 0.066130 / 0.014526 (0.051604) | 0.073948 / 0.176557 (-0.102608) | 0.120458 / 0.737135 (-0.616678) | 0.075995 / 0.296338 (-0.220343) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279419 / 0.215209 (0.064210) | 2.728591 / 2.077655 (0.650936) | 1.439016 / 1.504120 (-0.065104) | 1.325798 / 1.541195 (-0.215397) | 1.352050 / 1.468490 (-0.116440) | 0.395041 / 4.584777 (-4.189736) | 2.377651 / 3.745712 (-1.368061) | 2.618473 / 5.269862 (-2.651389) | 1.587580 / 4.565676 (-2.978096) | 0.045910 / 0.424275 (-0.378365) | 0.004843 / 0.007607 (-0.002764) | 0.335491 / 0.226044 (0.109447) | 3.378441 / 2.268929 (1.109512) | 1.827757 / 55.444624 (-53.616868) | 1.502360 / 6.876477 (-5.374117) | 1.508460 / 2.142072 (-0.633612) | 0.471309 / 4.805227 (-4.333918) | 0.098934 / 6.500664 (-6.401730) | 0.041705 / 0.075469 (-0.033764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945067 / 1.841788 (-0.896720) | 11.548209 / 8.074308 (3.473900) | 10.422628 / 10.191392 (0.231236) | 0.141494 / 0.680424 (-0.538929) | 0.014345 / 0.534201 (-0.519856) | 0.267750 / 0.579283 (-0.311533) | 0.261488 / 0.434364 (-0.172876) | 0.307192 / 0.540337 (-0.233145) | 0.427926 / 1.386936 (-0.959010) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004831 / 0.011353 (-0.006522) | 0.002876 / 0.011008 (-0.008132) | 0.048629 / 0.038508 (0.010121) | 0.055090 / 0.023109 (0.031981) | 0.271381 / 0.275898 (-0.004517) | 0.292350 / 0.323480 (-0.031130) | 0.004001 / 0.007986 (-0.003985) | 0.002389 / 0.004328 (-0.001939) | 0.047527 / 0.004250 (0.043277) | 0.038065 / 0.037052 (0.001012) | 0.277387 / 0.258489 (0.018898) | 0.307209 / 0.293841 (0.013368) | 0.025136 / 0.128546 (-0.103411) | 0.007309 / 0.075646 (-0.068338) | 0.054483 / 0.419271 (-0.364789) | 0.032807 / 0.043533 (-0.010726) | 0.274364 / 0.255139 (0.019225) | 0.290280 / 0.283200 (0.007080) | 0.017855 / 0.141683 (-0.123828) | 1.185912 / 1.452155 (-0.266243) | 1.228141 / 1.492716 (-0.264576) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094787 / 0.018006 (0.076781) | 0.314191 / 0.000490 (0.313701) | 0.000217 / 0.000200 (0.000017) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020920 / 0.037411 (-0.016491) | 0.070446 / 0.014526 (0.055920) | 0.081371 / 0.176557 (-0.095186) | 0.119127 / 0.737135 (-0.618009) | 0.085658 / 0.296338 (-0.210680) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290601 / 0.215209 (0.075392) | 2.874091 / 2.077655 (0.796436) | 1.598934 / 1.504120 (0.094814) | 1.464329 / 1.541195 (-0.076866) | 1.504943 / 1.468490 (0.036453) | 0.410457 / 
4.584777 (-4.174320) | 2.428706 / 3.745712 (-1.317006) | 2.596510 / 5.269862 (-2.673352) | 1.547084 / 4.565676 (-3.018592) | 0.047546 / 0.424275 (-0.376729) | 0.004740 / 0.007607 (-0.002867) | 0.351168 / 0.226044 (0.125123) | 3.424554 / 2.268929 (1.155626) | 1.969792 / 55.444624 (-53.474832) | 1.676731 / 6.876477 (-5.199745) | 1.668769 / 2.142072 (-0.473304) | 0.482486 / 4.805227 (-4.322741) | 0.100018 / 6.500664 (-6.400646) | 0.040956 / 0.075469 (-0.034513) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.966306 / 1.841788 (-0.875482) | 12.158909 / 8.074308 (4.084601) | 10.926447 / 10.191392 (0.735055) | 0.130359 / 0.680424 (-0.550065) | 0.016162 / 0.534201 (-0.518039) | 0.269977 / 0.579283 (-0.309306) | 0.283366 / 0.434364 (-0.150997) | 0.304517 / 0.540337 (-0.235821) | 0.410398 / 1.386936 (-0.976539) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d5d6e57913465c22bb8074b0c0f968252cb12b \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004686 / 0.011353 (-0.006667) | 0.002764 / 0.011008 (-0.008244) | 0.061411 / 0.038508 (0.022902) | 0.030450 / 0.023109 (0.007341) | 0.247648 / 0.275898 (-0.028250) | 0.278033 / 0.323480 (-0.045447) | 0.002903 / 0.007986 (-0.005082) | 0.002350 / 0.004328 (-0.001979) | 0.047514 / 0.004250 (0.043264) | 0.044446 / 0.037052 (0.007393) | 0.256170 / 0.258489 (-0.002319) | 0.285977 / 0.293841 (-0.007864) | 0.023407 / 0.128546 (-0.105139) | 0.007223 / 0.075646 (-0.068423) | 0.201274 / 0.419271 (-0.217997) | 0.054022 / 0.043533 (0.010489) | 0.253841 / 0.255139 (-0.001298) | 0.278219 / 0.283200 (-0.004980) | 0.017796 / 0.141683 (-0.123886) | 1.105950 / 1.452155 (-0.346205) | 1.182021 / 1.492716 (-0.310695) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089584 / 0.018006 (0.071578) | 0.299338 / 0.000490 (0.298849) | 0.000202 / 0.000200 (0.000003) | 0.000050 
/ 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018974 / 0.037411 (-0.018437) | 0.062352 / 0.014526 (0.047826) | 0.073667 / 0.176557 (-0.102889) | 0.119225 / 0.737135 (-0.617911) | 0.075393 / 0.296338 (-0.220945) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282749 / 0.215209 (0.067540) | 2.795822 / 2.077655 (0.718167) | 1.492946 / 1.504120 (-0.011174) | 1.382340 / 1.541195 (-0.158855) | 1.377281 / 1.468490 (-0.091209) | 0.397361 / 4.584777 (-4.187415) | 2.379416 / 3.745712 (-1.366296) | 2.552967 / 5.269862 (-2.716895) | 1.546347 / 4.565676 (-3.019330) | 0.045851 / 0.424275 (-0.378424) | 0.004830 / 0.007607 (-0.002777) | 0.351194 / 0.226044 (0.125150) | 3.407406 / 2.268929 (1.138478) | 1.852983 / 55.444624 (-53.591641) | 1.536381 / 6.876477 (-5.340095) | 1.542786 / 2.142072 (-0.599287) | 0.471960 / 4.805227 (-4.333267) | 0.098336 / 6.500664 (-6.402328) | 0.041569 / 0.075469 (-0.033900) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.912718 / 1.841788 (-0.929070) | 11.339404 / 8.074308 (3.265095) | 10.480593 / 10.191392 (0.289201) | 0.139508 / 0.680424 (-0.540916) | 0.014210 / 0.534201 (-0.519991) | 0.268152 / 0.579283 (-0.311131) | 0.260503 / 0.434364 (-0.173860) | 0.304735 / 0.540337 (-0.235602) | 0.422155 / 1.386936 (-0.964781) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004714 / 0.011353 (-0.006638) | 0.002638 / 0.011008 (-0.008370) | 0.047967 / 0.038508 (0.009459) | 0.050758 / 0.023109 (0.027649) | 0.265619 / 0.275898 (-0.010279) | 0.286920 / 0.323480 (-0.036560) | 0.003936 / 0.007986 (-0.004050) | 0.002351 / 0.004328 (-0.001977) | 0.047642 / 0.004250 (0.043392) | 0.038412 / 0.037052 (0.001360) | 0.269561 / 0.258489 (0.011072) | 0.302057 / 0.293841 (0.008216) | 0.023893 / 0.128546 (-0.104653) | 0.006793 / 0.075646 (-0.068854) | 0.053091 / 0.419271 (-0.366180) | 0.032228 / 0.043533 (-0.011305) | 0.267110 / 0.255139 (0.011971) | 0.287211 / 0.283200 (0.004011) | 0.017945 / 0.141683 (-0.123738) | 1.191770 / 1.452155 (-0.260384) | 1.269644 / 1.492716 (-0.223072) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088067 / 0.018006 (0.070061) | 0.298383 / 0.000490 (0.297893) | 0.000202 / 0.000200 (0.000002) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020685 / 0.037411 (-0.016726) | 0.069883 / 0.014526 (0.055357) | 0.080107 / 0.176557 (-0.096450) | 0.119311 / 0.737135 (-0.617825) | 0.080791 / 0.296338 (-0.215548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295781 / 0.215209 (0.080572) | 2.905536 / 2.077655 (0.827881) | 1.579184 / 1.504120 (0.075064) | 1.475937 / 1.541195 (-0.065258) | 1.533708 / 1.468490 (0.065218) | 0.409851 / 4.584777 (-4.174926) | 2.443217 / 3.745712 (-1.302496) | 2.543980 / 5.269862 (-2.725882) | 1.512187 / 4.565676 (-3.053489) | 0.046390 / 0.424275 (-0.377885) | 0.004762 / 0.007607 (-0.002845) | 0.345066 / 0.226044 (0.119021) | 3.485133 / 2.268929 (1.216204) | 1.954690 / 55.444624 (-53.489934) | 1.671104 / 6.876477 (-5.205372) | 1.655330 / 2.142072 (-0.486743) | 0.487910 / 4.805227 (-4.317317) | 0.097707 / 6.500664 (-6.402957) | 0.040379 / 0.075469 (-0.035090) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981620 / 1.841788 (-0.860168) | 11.806530 / 8.074308 (3.732222) | 10.868275 / 10.191392 (0.676883) | 0.141230 / 0.680424 (-0.539194) | 0.015785 / 0.534201 (-0.518416) | 0.271416 / 0.579283 (-0.307867) | 0.276048 / 0.434364 (-0.158316) | 0.310988 / 0.540337 (-0.229349) | 0.410078 / 1.386936 (-0.976858) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ec565740dee10c466ade16f81dee2783e442ba55 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004803 / 0.011353 (-0.006550) | 0.002961 / 0.011008 (-0.008047) | 0.061431 / 0.038508 (0.022923) | 0.030189 / 0.023109 (0.007080) | 0.255755 / 0.275898 (-0.020143) | 0.277841 / 0.323480 (-0.045639) | 0.003083 / 0.007986 (-0.004902) | 0.002432 / 0.004328 (-0.001896) | 0.047674 / 0.004250 (0.043424) | 0.045066 / 0.037052 (0.008014) | 0.268701 / 0.258489 (0.010211) | 0.286673 / 0.293841 (-0.007168) | 0.023663 / 0.128546 (-0.104883) | 0.007148 / 0.075646 (-0.068499) | 0.201962 / 0.419271 (-0.217310) | 0.054953 / 0.043533 (0.011420) | 0.257155 / 0.255139 (0.002016) | 0.277769 / 0.283200 (-0.005431) | 0.017803 / 0.141683 (-0.123880) | 1.100270 / 1.452155 (-0.351884) | 1.146975 / 1.492716 (-0.345741) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092776 / 0.018006 (0.074770) | 0.303786 / 0.000490 (0.303296) | 0.000237 / 0.000200 (0.000037) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019647 / 0.037411 (-0.017765) | 0.063211 / 0.014526 (0.048686) | 0.076684 / 0.176557 (-0.099873) | 0.121952 / 0.737135 (-0.615184) | 0.077202 / 0.296338 (-0.219137) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282087 / 0.215209 (0.066878) | 2.789204 / 2.077655 (0.711550) | 1.510376 / 1.504120 (0.006256) | 1.384241 / 1.541195 (-0.156954) | 1.414949 / 1.468490 (-0.053541) | 0.402206 / 4.584777 (-4.182570) | 2.377601 / 3.745712 (-1.368111) | 2.585354 / 5.269862 (-2.684508) | 1.592937 / 4.565676 (-2.972740) | 0.045217 / 0.424275 (-0.379058) | 0.004772 / 0.007607 (-0.002835) | 0.339584 / 0.226044 (0.113539) | 3.373184 / 2.268929 (1.104256) | 1.855196 / 55.444624 (-53.589428) | 1.599559 / 6.876477 (-5.276918) | 1.604421 / 2.142072 (-0.537651) | 0.467754 / 4.805227 (-4.337474) | 0.098244 / 6.500664 (-6.402420) | 0.042631 / 0.075469 (-0.032838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.947680 / 1.841788 (-0.894108) | 11.539875 / 8.074308 (3.465567) | 10.340830 / 10.191392 (0.149438) | 0.145591 / 0.680424 (-0.534833) | 0.014367 / 0.534201 (-0.519834) | 0.270506 / 0.579283 (-0.308777) | 0.268825 / 0.434364 (-0.165539) | 0.308372 / 0.540337 (-0.231966) | 0.425039 / 1.386936 (-0.961897) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004813 / 0.011353 (-0.006540) | 0.002931 / 0.011008 (-0.008078) | 0.047997 / 0.038508 (0.009489) | 0.050753 / 0.023109 (0.027644) | 0.272704 / 0.275898 (-0.003194) | 0.294045 / 0.323480 (-0.029435) | 0.004059 / 0.007986 (-0.003927) | 0.002491 / 0.004328 (-0.001838) | 0.047621 / 0.004250 (0.043371) | 0.038824 / 0.037052 (0.001772) | 0.275322 / 0.258489 (0.016833) | 0.306447 / 0.293841 (0.012606) | 0.024402 / 0.128546 (-0.104145) | 0.007252 / 0.075646 (-0.068394) | 0.053346 / 0.419271 (-0.365925) | 0.032224 / 0.043533 (-0.011309) | 0.271468 / 0.255139 (0.016329) | 0.289429 / 0.283200 (0.006229) | 0.018285 / 0.141683 (-0.123398) | 1.116743 / 1.452155 (-0.335412) | 1.182724 / 1.492716 (-0.309993) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091899 / 0.018006 (0.073893) | 0.299161 / 0.000490 (0.298671) | 0.000224 / 0.000200 (0.000024) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021823 / 0.037411 (-0.015588) | 0.071227 / 0.014526 (0.056701) | 0.080503 / 0.176557 (-0.096053) | 0.120243 / 0.737135 (-0.616892) | 0.082328 / 0.296338 (-0.214010) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.324951 / 0.215209 (0.109742) | 2.842358 / 2.077655 (0.764703) | 1.602317 / 1.504120 (0.098197) | 1.481103 / 1.541195 (-0.060091) | 1.497557 / 1.468490 (0.029067) | 0.406523 / 4.584777 (-4.178254) | 2.402743 / 3.745712 (-1.342970) | 2.545435 / 5.269862 (-2.724427) | 1.534071 / 4.565676 (-3.031605) | 0.046914 / 0.424275 (-0.377361) | 0.004728 / 0.007607 (-0.002879) | 0.341544 / 0.226044 (0.115499) | 3.412017 / 2.268929 (1.143089) | 1.937442 / 55.444624 (-53.507182) | 1.668774 / 6.876477 (-5.207703) | 1.668908 / 2.142072 (-0.473165) | 0.477398 / 4.805227 (-4.327829) | 0.098531 / 6.500664 (-6.402133) | 0.041077 / 0.075469 (-0.034392) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983888 / 1.841788 (-0.857900) | 12.072703 / 8.074308 (3.998395) | 11.028622 / 10.191392 (0.837230) | 0.148097 / 0.680424 (-0.532327) | 0.015869 / 0.534201 (-0.518332) | 0.267609 / 0.579283 (-0.311674) | 0.272345 / 0.434364 (-0.162019) | 0.303840 / 0.540337 (-0.236497) | 0.409199 / 1.386936 (-0.977737) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1487df064580bd23458234fab2e85876d9364e03 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | 
read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005016 / 0.011353 (-0.006337) | 0.002931 / 0.011008 (-0.008077) | 0.062142 / 0.038508 (0.023634) | 0.030758 / 0.023109 (0.007648) | 0.251689 / 0.275898 (-0.024209) | 0.272114 / 0.323480 (-0.051366) | 0.004102 / 0.007986 (-0.003884) | 0.002500 / 0.004328 (-0.001828) | 0.049187 / 0.004250 (0.044937) | 0.047150 / 0.037052 (0.010098) | 0.256497 / 0.258489 (-0.001992) | 0.288069 / 0.293841 (-0.005772) | 0.023915 / 0.128546 (-0.104632) | 0.007204 / 0.075646 (-0.068442) | 0.204257 / 0.419271 (-0.215015) | 0.063879 / 0.043533 (0.020346) | 0.253008 / 0.255139 (-0.002131) | 0.266554 / 0.283200 (-0.016645) | 0.018929 / 0.141683 (-0.122754) | 1.140547 / 1.452155 (-0.311608) | 1.197049 / 1.492716 (-0.295668) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094111 / 0.018006 (0.076105) | 0.301618 / 0.000490 (0.301128) | 0.000219 / 0.000200 (0.000019) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018614 / 0.037411 (-0.018797) | 0.062426 / 0.014526 (0.047900) | 0.073079 / 0.176557 (-0.103477) | 0.120313 / 0.737135 (-0.616823) | 0.076445 / 0.296338 (-0.219894) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285151 / 0.215209 (0.069942) | 2.754272 / 2.077655 (0.676617) | 1.485254 / 1.504120 (-0.018866) | 1.368412 / 1.541195 (-0.172783) | 1.402819 / 1.468490 (-0.065671) | 0.396561 / 4.584777 (-4.188216) | 2.375708 / 3.745712 (-1.370004) | 2.656088 / 5.269862 (-2.613773) | 1.588676 / 4.565676 (-2.977001) | 0.048662 / 0.424275 (-0.375613) | 0.004963 / 0.007607 (-0.002644) | 0.339747 / 0.226044 (0.113702) | 3.315841 / 2.268929 (1.046912) | 1.841439 / 55.444624 (-53.603186) | 1.547803 / 6.876477 (-5.328674) | 1.601872 / 2.142072 (-0.540200) | 0.468637 / 4.805227 (-4.336591) | 0.099423 / 6.500664 (-6.401241) | 0.041926 / 0.075469 (-0.033543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.933058 / 1.841788 (-0.908730) | 11.680870 / 8.074308 (3.606561) | 10.239009 / 
10.191392 (0.047617) | 0.129974 / 0.680424 (-0.550450) | 0.014081 / 0.534201 (-0.520120) | 0.273076 / 0.579283 (-0.306207) | 0.261914 / 0.434364 (-0.172450) | 0.305982 / 0.540337 (-0.234356) | 0.430623 / 1.386936 (-0.956313) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004969 / 0.011353 (-0.006384) | 0.003084 / 0.011008 (-0.007924) | 0.048686 / 0.038508 (0.010178) | 0.057234 / 0.023109 (0.034125) | 0.295408 / 0.275898 (0.019510) | 0.323774 / 0.323480 (0.000294) | 0.004014 / 0.007986 (-0.003972) | 0.002423 / 0.004328 (-0.001905) | 0.048000 / 0.004250 (0.043749) | 0.039872 / 0.037052 (0.002820) | 0.294717 / 0.258489 (0.036228) | 0.331149 / 0.293841 (0.037309) | 0.027884 / 0.128546 (-0.100662) | 0.007155 / 0.075646 (-0.068491) | 0.053812 / 0.419271 (-0.365460) | 0.032483 / 0.043533 (-0.011050) | 0.293402 / 0.255139 (0.038263) | 0.312553 / 0.283200 (0.029354) | 0.017848 / 0.141683 (-0.123835) | 1.125600 / 1.452155 (-0.326554) | 1.189469 / 1.492716 (-0.303248) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096198 / 0.018006 (0.078191) | 0.305096 / 0.000490 (0.304607) | 0.000229 / 0.000200 (0.000029) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021992 / 0.037411 (-0.015419) | 0.072082 / 0.014526 (0.057556) | 0.082704 / 0.176557 (-0.093853) | 0.124512 / 0.737135 (-0.612624) | 0.084541 / 0.296338 (-0.211797) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296440 / 0.215209 (0.081231) | 2.923392 / 2.077655 (0.845738) | 1.599057 / 1.504120 (0.094937) | 1.480473 / 1.541195 (-0.060722) | 1.551837 / 1.468490 (0.083347) | 0.418618 / 4.584777 (-4.166159) | 2.472727 / 3.745712 (-1.272985) | 2.796141 / 5.269862 (-2.473721) | 1.629139 / 4.565676 (-2.936538) | 0.047703 / 0.424275 (-0.376572) | 0.004971 / 0.007607 (-0.002636) | 0.354453 / 0.226044 (0.128408) | 3.514861 / 2.268929 (1.245932) | 1.993597 / 55.444624 (-53.451028) | 1.694386 / 6.876477 (-5.182090) | 1.748562 / 2.142072 (-0.393510) | 0.487158 / 4.805227 (-4.318070) | 0.102021 / 6.500664 (-6.398643) | 0.042648 / 0.075469 (-0.032821) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974950 / 1.841788 (-0.866837) | 13.391204 / 8.074308 (5.316896) | 11.474696 / 10.191392 (1.283304) | 0.142618 / 0.680424 (-0.537806) | 0.016163 / 0.534201 (-0.518038) | 0.271453 / 0.579283 (-0.307830) | 0.287049 / 0.434364 (-0.147315) | 0.309069 / 0.540337 (-0.231268) | 0.417117 / 1.386936 (-0.969819) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#35a3422cfcebfef5b09ae70c22843ffadaf44c46 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004974 / 0.011353 (-0.006379) | 0.002950 / 0.011008 (-0.008058) | 0.061856 / 0.038508 (0.023348) | 0.030539 / 0.023109 (0.007429) | 0.250105 / 0.275898 (-0.025793) | 0.276687 / 0.323480 (-0.046793) | 0.003077 / 0.007986 (-0.004908) | 0.002412 / 0.004328 (-0.001916) | 0.048336 / 0.004250 (0.044086) | 0.045849 / 0.037052 (0.008797) | 0.251757 / 0.258489 (-0.006732) | 0.284914 / 0.293841 (-0.008927) | 0.024033 / 0.128546 (-0.104513) | 0.007343 / 0.075646 (-0.068303) | 0.202867 / 0.419271 (-0.216405) | 0.061294 / 0.043533 (0.017762) | 0.263590 / 0.255139 (0.008451) | 0.272744 / 0.283200 (-0.010455) | 0.019613 / 0.141683 (-0.122070) | 1.104263 / 1.452155 (-0.347892) | 1.164128 / 1.492716 (-0.328588) |\n\n### Benchmark: 
benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094261 / 0.018006 (0.076255) | 0.303340 / 0.000490 (0.302850) | 0.000215 / 0.000200 (0.000015) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018381 / 0.037411 (-0.019030) | 0.062727 / 0.014526 (0.048201) | 0.074955 / 0.176557 (-0.101602) | 0.124810 / 0.737135 (-0.612326) | 0.074335 / 0.296338 (-0.222004) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279368 / 0.215209 (0.064159) | 2.721641 / 2.077655 (0.643986) | 1.510773 / 1.504120 (0.006653) | 1.364349 / 1.541195 (-0.176845) | 1.386044 / 1.468490 (-0.082446) | 0.403051 / 4.584777 (-4.181726) | 2.416525 / 3.745712 (-1.329187) | 2.623198 / 5.269862 (-2.646663) | 1.560869 / 4.565676 (-3.004808) | 0.046613 / 0.424275 (-0.377662) | 0.004861 / 0.007607 (-0.002746) | 0.337875 / 0.226044 (0.111830) | 3.289956 / 2.268929 (1.021028) | 1.851707 / 55.444624 (-53.592917) | 1.571092 / 6.876477 (-5.305385) | 1.600328 / 2.142072 (-0.541745) | 0.480766 / 4.805227 (-4.324461) | 0.099138 / 6.500664 (-6.401526) | 0.041691 / 0.075469 (-0.033779) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.941162 / 1.841788 (-0.900626) | 11.745335 / 8.074308 (3.671027) | 10.645509 / 10.191392 (0.454117) | 0.132506 / 0.680424 (-0.547918) | 0.015192 / 0.534201 (-0.519009) | 0.272483 / 0.579283 (-0.306800) | 0.270269 / 0.434364 (-0.164094) | 0.309580 / 0.540337 (-0.230758) | 0.431513 / 1.386936 (-0.955423) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after 
write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005068 / 0.011353 (-0.006285) | 0.003069 / 0.011008 (-0.007939) | 0.048605 / 0.038508 (0.010097) | 0.059557 / 0.023109 (0.036448) | 0.275092 / 0.275898 (-0.000806) | 0.298910 / 0.323480 (-0.024570) | 0.004198 / 0.007986 (-0.003788) | 0.002499 / 0.004328 (-0.001830) | 0.048248 / 0.004250 (0.043997) | 0.040302 / 0.037052 (0.003249) | 0.279539 / 0.258489 (0.021050) | 0.312500 / 0.293841 (0.018659) | 0.025407 / 0.128546 (-0.103140) | 0.007364 / 0.075646 (-0.068282) | 0.053086 / 0.419271 (-0.366186) | 0.033291 / 0.043533 (-0.010242) | 0.276521 / 0.255139 (0.021382) | 0.292943 / 0.283200 (0.009743) | 0.019416 / 0.141683 (-0.122267) | 1.151734 / 1.452155 (-0.300421) | 1.205021 / 1.492716 (-0.287695) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094112 / 0.018006 (0.076106) | 0.309534 / 0.000490 (0.309044) | 0.000219 / 0.000200 (0.000019) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021539 / 0.037411 (-0.015872) | 0.070325 / 0.014526 (0.055799) | 0.080468 / 0.176557 (-0.096089) | 0.121095 / 0.737135 (-0.616040) | 0.082008 / 0.296338 (-0.214331) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302591 / 0.215209 (0.087382) | 2.943475 / 2.077655 (0.865820) | 1.597970 / 1.504120 (0.093850) | 1.468774 / 1.541195 (-0.072421) | 1.504812 / 1.468490 (0.036322) | 0.413715 / 4.584777 (-4.171062) | 2.418319 / 3.745712 (-1.327393) | 2.616656 / 5.269862 (-2.653206) | 1.558165 / 4.565676 (-3.007512) | 0.047169 / 0.424275 (-0.377106) | 0.004761 / 0.007607 (-0.002846) | 0.347225 / 0.226044 (0.121180) | 3.479624 / 2.268929 (1.210696) | 1.961253 / 55.444624 (-53.483371) | 1.673532 / 6.876477 (-5.202944) | 1.698900 / 2.142072 (-0.443172) | 0.488373 / 4.805227 (-4.316855) | 0.098322 / 6.500664 (-6.402342) | 0.040832 / 0.075469 (-0.034637) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.009133 / 1.841788 (-0.832655) | 13.373258 / 8.074308 (5.298949) 
| 11.327360 / 10.191392 (1.135968) | 0.135778 / 0.680424 (-0.544646) | 0.015813 / 0.534201 (-0.518388) | 0.275404 / 0.579283 (-0.303879) | 0.282564 / 0.434364 (-0.151799) | 0.311830 / 0.540337 (-0.228507) | 0.419008 / 1.386936 (-0.967928) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4592709e5399f91b5b392f4fd73687985365c909 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004899 / 0.011353 (-0.006454) | 0.002780 / 0.011008 (-0.008229) | 0.061997 / 0.038508 (0.023489) | 0.029909 / 0.023109 (0.006800) | 0.233445 / 0.275898 (-0.042453) | 0.254128 / 0.323480 (-0.069351) | 0.002927 / 0.007986 (-0.005058) | 0.002396 / 0.004328 (-0.001932) | 0.048118 / 0.004250 (0.043868) | 0.044520 / 0.037052 (0.007468) | 0.237594 / 0.258489 (-0.020895) | 0.268407 / 0.293841 (-0.025434) | 0.023517 / 0.128546 (-0.105029) | 0.007035 / 0.075646 (-0.068612) | 0.202803 / 0.419271 (-0.216469) | 0.057692 / 0.043533 (0.014159) | 0.237058 / 0.255139 (-0.018081) | 0.252966 / 0.283200 (-0.030233) | 0.017934 / 0.141683 (-0.123748) | 1.096406 / 1.452155 (-0.355749) | 1.153509 / 1.492716 (-0.339207) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091812 / 0.018006 (0.073806) | 0.298410 / 0.000490 (0.297920) | 0.000228 / 0.000200 (0.000028) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018333 / 0.037411 (-0.019078) | 0.062685 / 0.014526 (0.048159) | 0.073295 / 0.176557 (-0.103261) | 0.119234 / 0.737135 (-0.617901) | 0.074603 / 0.296338 (-0.221736) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled 
read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279078 / 0.215209 (0.063869) | 2.768535 / 2.077655 (0.690880) | 1.457049 / 1.504120 (-0.047071) | 1.326870 / 1.541195 (-0.214325) | 1.349657 / 1.468490 (-0.118833) | 0.405003 / 4.584777 (-4.179774) | 2.428726 / 3.745712 (-1.316986) | 2.595776 / 5.269862 (-2.674086) | 1.557879 / 4.565676 (-3.007797) | 0.045985 / 0.424275 (-0.378291) | 0.004854 / 0.007607 (-0.002753) | 0.336437 / 0.226044 (0.110392) | 3.317330 / 2.268929 (1.048401) | 1.784525 / 55.444624 (-53.660100) | 1.500295 / 6.876477 (-5.376182) | 1.529869 / 2.142072 (-0.612203) | 0.473426 / 4.805227 (-4.331801) | 0.099609 / 6.500664 (-6.401055) | 0.042054 / 0.075469 (-0.033415) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.937154 / 1.841788 (-0.904633) | 11.482383 / 8.074308 (3.408075) | 10.468769 / 10.191392 (0.277377) | 0.132724 / 0.680424 (-0.547700) | 0.015242 / 0.534201 (-0.518959) | 0.281124 / 0.579283 (-0.298159) | 0.268603 / 0.434364 (-0.165761) | 0.311410 / 0.540337 (-0.228928) | 0.431817 / 1.386936 (-0.955119) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004695 / 0.011353 (-0.006658) | 0.002873 / 0.011008 (-0.008135) | 0.048133 / 0.038508 (0.009625) | 0.052505 / 0.023109 (0.029396) | 0.271679 / 0.275898 (-0.004219) | 0.292530 / 0.323480 (-0.030950) | 0.003844 / 0.007986 (-0.004142) | 0.002417 / 0.004328 (-0.001912) | 0.048619 / 0.004250 (0.044369) | 0.039152 / 0.037052 (0.002100) | 0.276575 / 0.258489 (0.018086) | 0.307836 / 0.293841 (0.013995) | 0.023877 / 0.128546 (-0.104669) | 0.006897 / 0.075646 (-0.068749) | 0.053241 / 0.419271 (-0.366031) | 0.032487 / 0.043533 (-0.011046) | 0.274205 / 0.255139 (0.019066) | 0.289701 / 0.283200 (0.006502) | 0.018250 / 0.141683 (-0.123432) | 1.137902 / 1.452155 (-0.314253) | 1.202043 / 1.492716 (-0.290673) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | 
get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091453 / 0.018006 (0.073446) | 0.297032 / 0.000490 (0.296543) | 0.000224 / 0.000200 (0.000024) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021062 / 0.037411 (-0.016349) | 0.069848 / 0.014526 (0.055322) | 0.084337 / 0.176557 (-0.092219) | 0.119951 / 0.737135 (-0.617184) | 0.082805 / 0.296338 (-0.213533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297056 / 0.215209 (0.081846) | 2.890110 / 2.077655 (0.812456) | 1.609918 / 1.504120 (0.105798) | 1.491184 / 1.541195 (-0.050011) | 1.529433 / 1.468490 (0.060943) | 0.396081 / 4.584777 (-4.188696) | 2.408310 / 3.745712 (-1.337402) | 2.567905 / 5.269862 (-2.701957) | 1.514465 / 4.565676 (-3.051212) | 0.045329 / 0.424275 (-0.378946) | 0.004738 / 0.007607 (-0.002869) | 0.344373 / 0.226044 (0.118328) | 3.428333 / 2.268929 (1.159404) | 1.981401 / 55.444624 (-53.463223) | 1.688007 / 6.876477 (-5.188470) | 1.685542 / 2.142072 (-0.456531) | 0.478045 / 4.805227 (-4.327182) | 0.096664 / 6.500664 (-6.404001) | 0.040335 / 0.075469 (-0.035135) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972912 / 1.841788 (-0.868876) | 12.055045 / 8.074308 (3.980737) | 10.821073 / 10.191392 (0.629681) | 0.139177 / 0.680424 (-0.541247) | 0.015046 / 0.534201 (-0.519155) | 0.275670 / 0.579283 (-0.303613) | 0.280366 / 0.434364 (-0.153998) | 0.315781 / 0.540337 (-0.224556) | 0.424536 / 1.386936 (-0.962400) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0684b471d6ca8a235162f5575f624b6eda7956c5 \"CML watermark\")\n", "I'm finally merging as `transformers`/`tokenizers` dependency pins have been removed + `huggingface_hub 0.19.4` has fixed the deps incompatibility issue. 
All good now :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004435 / 0.011353 (-0.006918) | 0.002924 / 0.011008 (-0.008084) | 0.062159 / 0.038508 (0.023651) | 0.029639 / 0.023109 (0.006529) | 0.237470 / 0.275898 (-0.038428) | 0.269641 / 0.323480 (-0.053839) | 0.004124 / 0.007986 (-0.003862) | 0.002528 / 0.004328 (-0.001800) | 0.048114 / 0.004250 (0.043864) | 0.046055 / 0.037052 (0.009002) | 0.245844 / 0.258489 (-0.012645) | 0.278085 / 0.293841 (-0.015756) | 0.023152 / 0.128546 (-0.105394) | 0.007194 / 0.075646 (-0.068452) | 0.206493 / 0.419271 (-0.212778) | 0.055687 / 0.043533 (0.012155) | 0.243301 / 0.255139 (-0.011838) | 0.267645 / 0.283200 (-0.015555) | 0.017413 / 0.141683 (-0.124270) | 1.113071 / 1.452155 (-0.339083) | 1.201436 / 1.492716 (-0.291280) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092576 / 0.018006 (0.074570) | 0.303516 / 0.000490 (0.303027) | 0.000213 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019108 / 0.037411 (-0.018303) | 0.062326 / 0.014526 (0.047800) | 0.073711 / 0.176557 (-0.102846) | 0.120414 / 0.737135 (-0.616721) | 0.075837 / 0.296338 (-0.220501) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278267 / 0.215209 (0.063058) | 2.766231 / 2.077655 (0.688576) | 1.455613 / 1.504120 (-0.048507) | 1.337128 / 1.541195 
(-0.204066) | 1.357659 / 1.468490 (-0.110831) | 0.404549 / 4.584777 (-4.180228) | 2.409084 / 3.745712 (-1.336628) | 2.645000 / 5.269862 (-2.624861) | 1.600475 / 4.565676 (-2.965201) | 0.046680 / 0.424275 (-0.377595) | 0.004887 / 0.007607 (-0.002720) | 0.340338 / 0.226044 (0.114294) | 3.332647 / 2.268929 (1.063719) | 1.852529 / 55.444624 (-53.592096) | 1.532442 / 6.876477 (-5.344035) | 1.550383 / 2.142072 (-0.591689) | 0.482702 / 4.805227 (-4.322525) | 0.101067 / 6.500664 (-6.399597) | 0.042132 / 0.075469 (-0.033337) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945481 / 1.841788 (-0.896307) | 11.886240 / 8.074308 (3.811932) | 10.484620 / 10.191392 (0.293228) | 0.130906 / 0.680424 (-0.549518) | 0.014880 / 0.534201 (-0.519321) | 0.268836 / 0.579283 (-0.310447) | 0.268112 / 0.434364 (-0.166251) | 0.304300 / 0.540337 (-0.236038) | 0.440262 / 1.386936 (-0.946674) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005028 / 0.011353 (-0.006325) | 0.002937 / 0.011008 (-0.008071) | 0.049038 / 0.038508 (0.010530) | 0.057763 / 0.023109 (0.034653) | 0.273196 / 0.275898 (-0.002702) | 0.295519 / 0.323480 (-0.027961) | 0.004102 / 0.007986 (-0.003883) | 0.002487 / 0.004328 (-0.001841) | 0.049148 / 0.004250 (0.044898) | 0.040303 / 0.037052 (0.003251) | 0.279187 / 0.258489 (0.020698) | 0.311086 / 0.293841 (0.017245) | 0.024961 / 0.128546 (-0.103585) | 0.007264 / 0.075646 (-0.068382) | 0.055711 / 0.419271 (-0.363561) | 0.032355 / 0.043533 (-0.011178) | 0.274304 / 0.255139 (0.019165) | 0.290953 / 0.283200 (0.007753) | 0.018358 / 0.141683 (-0.123325) | 1.115984 / 1.452155 (-0.336170) | 1.190409 / 1.492716 (-0.302308) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095765 / 0.018006 (0.077759) | 0.287947 / 0.000490 (0.287457) | 0.000242 / 0.000200 (0.000042) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | 
shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022165 / 0.037411 (-0.015246) | 0.070465 / 0.014526 (0.055940) | 0.082078 / 0.176557 (-0.094479) | 0.120209 / 0.737135 (-0.616926) | 0.084573 / 0.296338 (-0.211765) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298492 / 0.215209 (0.083283) | 2.924981 / 2.077655 (0.847327) | 1.597326 / 1.504120 (0.093206) | 1.459132 / 1.541195 (-0.082062) | 1.511471 / 1.468490 (0.042981) | 0.406671 / 4.584777 (-4.178106) | 2.443154 / 3.745712 (-1.302558) | 2.591131 / 5.269862 (-2.678731) | 1.549931 / 4.565676 (-3.015745) | 0.047042 / 0.424275 (-0.377234) | 0.004891 / 0.007607 (-0.002716) | 0.346274 / 0.226044 (0.120230) | 3.456050 / 2.268929 (1.187121) | 1.959328 / 55.444624 (-53.485296) | 1.647631 / 6.876477 (-5.228845) | 1.692024 / 2.142072 (-0.450049) | 0.478307 / 4.805227 (-4.326920) | 0.098738 / 6.500664 (-6.401926) | 0.041743 / 0.075469 (-0.033726) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.984619 / 1.841788 (-0.857168) | 12.403984 / 8.074308 (4.329676) | 10.974347 / 10.191392 (0.782955) | 0.132893 / 0.680424 (-0.547530) | 0.015504 / 0.534201 (-0.518697) | 0.275354 / 0.579283 (-0.303929) | 0.283312 / 0.434364 (-0.151052) | 0.313661 / 0.540337 (-0.226677) | 0.419065 / 1.386936 (-0.967871) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c65315e4a8308f04fcb025039afe2a2e43b5684e \"CML watermark\")\n" ]
"2023-11-14T10:47:09Z"
"2023-11-17T14:23:20Z"
"2023-11-17T14:17:00Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6414.diff", "html_url": "https://github.com/huggingface/datasets/pull/6414", "merged_at": "2023-11-17T14:17:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/6414.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6414" }
Related to https://github.com/huggingface/transformers/issues/27034 and https://github.com/huggingface/huggingface_hub/pull/1782. **TL;DR:** `hashlib` is not a secure library for cryptography-related stuff. We are only using `hashlib` for non-security-related purposes in `datasets`, so it's fine. From Python 3.9 we can set `usedforsecurity=False` in any `hashlib` method, which is mandatory for companies that forbid the use of `hashlib` for security purposes. This PR fixes that. **Note:** before merging this we need to release a new `tokenizers` version that would allow the newest `huggingface_hub` version (see https://github.com/huggingface/tokenizers/pull/1385). Otherwise it might create friction for users who want to install `datasets` + `tokenizers` at the same time.
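(A minimal sketch of the pattern this PR body describes, assuming Python >= 3.9, where the `usedforsecurity` keyword argument was added to the `hashlib` constructors; the version guard and the function name are illustrative, not the PR's actual diff:)

```python
import hashlib
import sys


def fingerprint_md5(data: bytes) -> str:
    """Hash bytes for caching/fingerprinting, not for cryptography."""
    if sys.version_info >= (3, 9):
        # Tells FIPS-restricted Python builds that this digest is not
        # used for security purposes, so it is allowed to run.
        return hashlib.md5(data, usedforsecurity=False).hexdigest()
    # Older Pythons do not accept the keyword argument.
    return hashlib.md5(data).hexdigest()


print(fingerprint_md5(b"some payload"))
```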
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6414/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6414/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2312/comments
https://api.github.com/repos/huggingface/datasets/issues/2312/events
https://github.com/huggingface/datasets/pull/2312
875,435,726
MDExOlB1bGxSZXF1ZXN0NjI5Nzc4NjUz
2,312
Add rename_columnS method
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
[]
closed
false
null
[]
null
[ "Merging then 😄 " ]
"2021-05-04T12:57:53Z"
"2021-05-04T13:43:13Z"
"2021-05-04T13:43:12Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2312.diff", "html_url": "https://github.com/huggingface/datasets/pull/2312", "merged_at": "2021-05-04T13:43:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/2312.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2312" }
Cherry-picked from #2255
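(For context, a short usage sketch of the plural `rename_columns` method this PR adds; the toy column names are made up for illustration:)

```python
from datasets import Dataset

ds = Dataset.from_dict({"f0": [1, 2], "f1": ["a", "b"]})
# Rename several columns at once via an {old_name: new_name} mapping.
ds = ds.rename_columns({"f0": "id", "f1": "text"})
print(ds.column_names)  # ['id', 'text']
```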
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2312/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2312/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/464/comments
https://api.github.com/repos/huggingface/datasets/issues/464/events
https://github.com/huggingface/datasets/pull/464
669,767,381
MDExOlB1bGxSZXF1ZXN0NDYwMTAxNDYz
464
Add rename, remove and cast in-place operations
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
"2020-07-31T12:30:21Z"
"2020-07-31T15:50:02Z"
"2020-07-31T15:50:00Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/464.diff", "html_url": "https://github.com/huggingface/datasets/pull/464", "merged_at": "2020-07-31T15:50:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/464.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/464" }
Add a bunch of in-place operations leveraging the Arrow back-end to rename and remove columns and to cast to new features without using the more expensive `map` method. These methods are added to `Dataset` as well as `DatasetDict`. Added tests for these new methods and added the methods to the docs. Naming follows the new pattern, with a trailing underscore indicating in-place methods.
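(A short sketch of how these in-place methods would be called, following the trailing-underscore convention the PR describes and the library API of that era; these in-place variants were later deprecated in favor of their copy-returning counterparts, so treat the snippet as a historical illustration:)

```python
from datasets import Dataset, Features, Value

ds = Dataset.from_dict({"idx": [1, 2], "label": ["pos", "neg"]})
# Trailing underscore = in-place: the dataset is mutated, nothing is returned.
ds.rename_column_("idx", "id")
ds.remove_columns_(["label"])
ds.cast_(Features({"id": Value("int64")}))
print(ds.column_names)  # ['id']
```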
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/464/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/464/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4965/comments
https://api.github.com/repos/huggingface/datasets/issues/4965/events
https://github.com/huggingface/datasets/issues/4965
1,368,661,002
I_kwDODunzps5RlBwK
4,965
[Apple M1] MemoryError: Cannot allocate write+execute memory for ffi.callback()
{ "avatar_url": "https://avatars.githubusercontent.com/u/35718590?v=4", "events_url": "https://api.github.com/users/hoangtnm/events{/privacy}", "followers_url": "https://api.github.com/users/hoangtnm/followers", "following_url": "https://api.github.com/users/hoangtnm/following{/other_user}", "gists_url": "https://api.github.com/users/hoangtnm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hoangtnm", "id": 35718590, "login": "hoangtnm", "node_id": "MDQ6VXNlcjM1NzE4NTkw", "organizations_url": "https://api.github.com/users/hoangtnm/orgs", "received_events_url": "https://api.github.com/users/hoangtnm/received_events", "repos_url": "https://api.github.com/users/hoangtnm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hoangtnm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hoangtnm/subscriptions", "type": "User", "url": "https://api.github.com/users/hoangtnm" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi! This seems like a bug in `soundfile`. Could you please open an issue in their repo? `soundfile` works without any issues on my M1, so I'm not sure we can help.", "Hi @mariosasko, can you share how you installed `soundfile` on your mac M1?", "Hi @hoangtnm - I upgraded to python 3.10 and it fixed the problem for me. I was also running 3.8 on an M1 mac." ]
"2022-09-10T15:55:49Z"
"2023-07-21T14:45:50Z"
"2023-07-21T14:45:50Z"
NONE
null
null
null
## Describe the bug I'm trying to run `cast_column("audio", Audio())` on Apple M1 Pro, but it seems that it doesn't work. ## Steps to reproduce the bug ```python from datasets import Audio, load_dataset dataset = load_dataset("csv", data_files="./train.csv")["train"] dataset = dataset.map(lambda x: {"audio": str(DATA_DIR / "audio" / x["audio"])}) dataset = dataset.cast_column("audio", Audio()) dataset[0] ``` ## Expected results ``` {'audio': {'bytes': None, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav'}, 'english_transcription': 'I would like to set up a joint account with my partner', 'intent_class': 11, 'lang_id': 4, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'transcription': 'I would like to set up a joint account with my partner'} ``` ## Actual results ``` --------------------------------------------------------------------------- MemoryError Traceback (most recent call last) Input In [6], in <cell line: 1>() ----> 1 dataset[0] File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2165, in Dataset.__getitem__(self, key) 2163 def __getitem__(self, key): # noqa: F811 2164 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 2165 return self._getitem( 2166 key, 2167 ) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2150, in Dataset._getitem(self, key, decoded, **kwargs) 2148 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs) 2149 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 2150 formatted_output = format_table( 2151 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 2152 ) 2153 return formatted_output File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: --> 532 return formatter(pa_table, query_type=query_type) 533 elif query_type == "column": 534 if key in format_columns: File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type) 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:312, in PythonFormatter.format_row(self, pa_table) 310 row = self.python_arrow_extractor().extract_row(pa_table) 311 if self.decoded: --> 312 row = self.python_features_decoder.decode_row(row) 313 return row File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row) 220 def decode_row(self, row: dict) -> dict: --> 221 return self.features.decode_example(row) if self.features else row File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1647, in Features.decode_example(self, example, token_per_repo_id) 
1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1635 """Decode example with custom feature decoding. 1636 1637 Args: (...) 1644 :obj:`dict[str, Any]` 1645 """ -> 1647 return { 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1649 if self._column_requires_decoding[column_name] 1650 else value 1651 for column_name, (feature, value) in zip_dict( 1652 {key: value for key, value in self.items() if key in example}, example 1653 ) 1654 } File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1648, in <dictcomp>(.0) 1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1635 """Decode example with custom feature decoding. 1636 1637 Args: (...) 1644 :obj:`dict[str, Any]` 1645 """ 1647 return { -> 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1649 if self._column_requires_decoding[column_name] 1650 else value 1651 for column_name, (feature, value) in zip_dict( 1652 {key: value for key, value in self.items() if key in example}, example 1653 ) 1654 } File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1260, in decode_nested_example(schema, obj, token_per_repo_id) 1257 # Object with special decoding: 1258 elif isinstance(schema, (Audio, Image)): 1259 # we pass the token to read and decode files from private repositories in streaming mode -> 1260 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None 1261 return obj File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:156, in Audio.decode_example(self, value, token_per_repo_id) 154 array, sampling_rate = self._decode_non_mp3_file_like(file) 155 else: --> 156 array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id) 157 return {"path": path, "array": array, "sampling_rate": sampling_rate} File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:257, in Audio._decode_non_mp3_path_like(self, path, format, token_per_repo_id) 254 use_auth_token = None 256 with xopen(path, "rb", use_auth_token=use_auth_token) as f: --> 257 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono) 258 return array, sampling_rate File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/util/decorators.py:88, in deprecate_positional_args.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs) 86 extra_args = len(args) - len(all_args) 87 if extra_args <= 0: ---> 88 return f(*args, **kwargs) 90 # extra_args > 0 91 args_msg = [ 92 "{}={}".format(name, arg) 93 for name, arg in zip(kwonly_args[:extra_args], args[-extra_args:]) 94 ] File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:164, in load(path, sr, mono, offset, duration, dtype, res_type) 161 else: 162 # Otherwise try soundfile first, and then fall back if necessary 163 try: --> 164 y, sr_native = __soundfile_load(path, offset, duration, dtype) 166 except RuntimeError as exc: 167 # If soundfile failed, try audioread instead 168 if isinstance(path, (str, pathlib.PurePath)): File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:195, in __soundfile_load(path, offset, duration, dtype) 192 context = path 193 else: 194 # Otherwise, create the soundfile object --> 195 context = sf.SoundFile(path) 197 with context 
as sf_desc: 198 sr_native = sf_desc.samplerate File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 626 self._mode = mode 627 self._info = _create_info_struct(file, mode, samplerate, channels, 628 format, subtype, endian) --> 629 self._file = self._open(file, mode_int, closefd) 630 if set(mode).issuperset('r+') and self.seekable(): 631 # Move write position to 0 (like in Python file objects) 632 self.seek(0) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1179, in SoundFile._open(self, file, mode_int, closefd) 1177 file_ptr = _snd.sf_open_fd(file, mode_int, self._info, closefd) 1178 elif _has_virtual_io_attrs(file, mode_int): -> 1179 file_ptr = _snd.sf_open_virtual(self._init_virtual_io(file), 1180 mode_int, self._info, _ffi.NULL) 1181 else: 1182 raise TypeError("Invalid file: {0!r}".format(self.name)) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1197, in SoundFile._init_virtual_io(self, file) 1194 def _init_virtual_io(self, file): 1195 """Initialize callback functions for sf_open_virtual().""" 1196 @_ffi.callback("sf_vio_get_filelen") -> 1197 def vio_get_filelen(user_data): 1198 curr = file.tell() 1199 file.seek(0, SEEK_END) MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https://cffi.readthedocs.io/en/latest/using.html#callbacks ``` ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4965/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4965/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4403
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4403/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4403/comments
https://api.github.com/repos/huggingface/datasets/issues/4403/events
https://github.com/huggingface/datasets/pull/4403
1,248,390,134
PR_kwDODunzps44dcpl
4,403
Uncomment logging deactivation for ArrowBasedBuilder
{ "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomasw21", "id": 24695242, "login": "thomasw21", "node_id": "MDQ6VXNlcjI0Njk1MjQy", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "repos_url": "https://api.github.com/users/thomasw21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "type": "User", "url": "https://api.github.com/users/thomasw21" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-05-25T16:46:15Z"
"2022-05-31T08:33:36Z"
"2022-05-31T08:25:02Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4403.diff", "html_url": "https://github.com/huggingface/datasets/pull/4403", "merged_at": "2022-05-31T08:25:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/4403.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4403" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4403/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4403/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2692
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2692/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2692/comments
https://api.github.com/repos/huggingface/datasets/issues/2692/events
https://github.com/huggingface/datasets/pull/2692
949,765,484
MDExOlB1bGxSZXF1ZXN0Njk0NDE4MDg1
2,692
Update BibTeX entry
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-07-21T14:23:35Z"
"2021-07-21T15:31:41Z"
"2021-07-21T15:31:40Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2692.diff", "html_url": "https://github.com/huggingface/datasets/pull/2692", "merged_at": "2021-07-21T15:31:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/2692.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2692" }
Update BibTeX entry
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2692/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2692/timeline
null
null
true